00:00:00.001 Started by upstream project "autotest-per-patch" build number 127079 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.091 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.092 The recommended git tool is: git 00:00:00.092 using credential 00000000-0000-0000-0000-000000000002 00:00:00.094 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.154 Fetching changes from the remote Git repository 00:00:00.155 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.206 Using shallow fetch with depth 1 00:00:00.206 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.206 > git --version # timeout=10 00:00:00.254 > git --version # 'git version 2.39.2' 00:00:00.254 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.290 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.290 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.086 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.098 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.110 Checking out Revision f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 (FETCH_HEAD) 00:00:06.110 > git config core.sparsecheckout # timeout=10 00:00:06.122 > git read-tree -mu HEAD # timeout=10 00:00:06.138 > git checkout -f f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 # timeout=5 00:00:06.170 Commit message: "spdk-abi-per-patch: fix check-so-deps-docker-autotest parameters" 00:00:06.171 > git rev-list --no-walk f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 # timeout=10 00:00:06.270 [Pipeline] Start of Pipeline 00:00:06.281 [Pipeline] library 00:00:06.283 Loading library shm_lib@master 00:00:06.283 Library shm_lib@master is cached. Copying from home. 00:00:06.298 [Pipeline] node 00:00:06.336 Running on WFP5 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:06.337 [Pipeline] { 00:00:06.351 [Pipeline] catchError 00:00:06.352 [Pipeline] { 00:00:06.362 [Pipeline] wrap 00:00:06.370 [Pipeline] { 00:00:06.376 [Pipeline] stage 00:00:06.377 [Pipeline] { (Prologue) 00:00:06.535 [Pipeline] sh 00:00:06.814 + logger -p user.info -t JENKINS-CI 00:00:06.832 [Pipeline] echo 00:00:06.833 Node: WFP5 00:00:06.842 [Pipeline] sh 00:00:07.138 [Pipeline] setCustomBuildProperty 00:00:07.148 [Pipeline] echo 00:00:07.149 Cleanup processes 00:00:07.153 [Pipeline] sh 00:00:07.432 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.432 3120244 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.444 [Pipeline] sh 00:00:07.720 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.720 ++ grep -v 'sudo pgrep' 00:00:07.720 ++ awk '{print $1}' 00:00:07.720 + sudo kill -9 00:00:07.720 + true 00:00:07.732 [Pipeline] cleanWs 00:00:07.739 [WS-CLEANUP] Deleting project workspace... 00:00:07.739 [WS-CLEANUP] Deferred wipeout is used... 
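The pgrep pipeline traced earlier in this prologue is the job's sweep for stale test processes left over from a previous run. A standalone reconstruction of that idiom (workspace path taken from this run; the sweep script itself is not shown in the log) would look roughly like:

    ws=/var/jenkins/workspace/nvmf-tcp-phy-autotest
    # List processes still referencing the workspace, dropping the pgrep
    # invocation itself so the sweep never kills its own pipeline:
    pids=$(sudo pgrep -af "$ws/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')
    # kill -9 with an empty PID list exits non-zero, which is why the trace
    # above ends with "+ true" -- that failure is deliberately swallowed:
    sudo kill -9 $pids || true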
00:00:07.745 [WS-CLEANUP] done 00:00:07.749 [Pipeline] setCustomBuildProperty 00:00:07.765 [Pipeline] sh 00:00:08.045 + sudo git config --global --replace-all safe.directory '*' 00:00:08.113 [Pipeline] httpRequest 00:00:08.140 [Pipeline] echo 00:00:08.142 Sorcerer 10.211.164.101 is alive 00:00:08.148 [Pipeline] httpRequest 00:00:08.153 HttpMethod: GET 00:00:08.153 URL: http://10.211.164.101/packages/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:08.154 Sending request to url: http://10.211.164.101/packages/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:08.177 Response Code: HTTP/1.1 200 OK 00:00:08.178 Success: Status code 200 is in the accepted range: 200,404 00:00:08.178 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:19.854 [Pipeline] sh 00:00:20.134 + tar --no-same-owner -xf jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz 00:00:20.150 [Pipeline] httpRequest 00:00:20.178 [Pipeline] echo 00:00:20.181 Sorcerer 10.211.164.101 is alive 00:00:20.190 [Pipeline] httpRequest 00:00:20.194 HttpMethod: GET 00:00:20.195 URL: http://10.211.164.101/packages/spdk_ac4b3e123d6706f24a70a5b70fe720ab714653f9.tar.gz 00:00:20.195 Sending request to url: http://10.211.164.101/packages/spdk_ac4b3e123d6706f24a70a5b70fe720ab714653f9.tar.gz 00:00:20.217 Response Code: HTTP/1.1 200 OK 00:00:20.218 Success: Status code 200 is in the accepted range: 200,404 00:00:20.218 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_ac4b3e123d6706f24a70a5b70fe720ab714653f9.tar.gz 00:01:32.782 [Pipeline] sh 00:01:33.066 + tar --no-same-owner -xf spdk_ac4b3e123d6706f24a70a5b70fe720ab714653f9.tar.gz 00:01:35.610 [Pipeline] sh 00:01:35.892 + git -C spdk log --oneline -n5 00:01:35.892 ac4b3e123 raid: clear base bdev configure_cb after executing 00:01:35.892 ee43290d1 raid: complete bdev_raid_create after sb is written 00:01:35.892 8711e7e9b autotest: reduce accel tests runs with SPDK_TEST_ACCEL flag 00:01:35.892 50222f810 configure: don't exit on non Intel platforms 00:01:35.892 78cbcfdde test/scheduler: fix cpu mask for rpc governor tests 00:01:35.904 [Pipeline] } 00:01:35.922 [Pipeline] // stage 00:01:35.933 [Pipeline] stage 00:01:35.935 [Pipeline] { (Prepare) 00:01:35.954 [Pipeline] writeFile 00:01:35.972 [Pipeline] sh 00:01:36.254 + logger -p user.info -t JENKINS-CI 00:01:36.267 [Pipeline] sh 00:01:36.549 + logger -p user.info -t JENKINS-CI 00:01:36.562 [Pipeline] sh 00:01:36.844 + cat autorun-spdk.conf 00:01:36.844 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:36.844 SPDK_TEST_NVMF=1 00:01:36.844 SPDK_TEST_NVME_CLI=1 00:01:36.844 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:36.844 SPDK_TEST_NVMF_NICS=e810 00:01:36.844 SPDK_TEST_VFIOUSER=1 00:01:36.844 SPDK_RUN_UBSAN=1 00:01:36.844 NET_TYPE=phy 00:01:36.851 RUN_NIGHTLY=0 00:01:36.859 [Pipeline] readFile 00:01:36.887 [Pipeline] withEnv 00:01:36.890 [Pipeline] { 00:01:36.909 [Pipeline] sh 00:01:37.196 + set -ex 00:01:37.196 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:37.196 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:37.196 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:37.196 ++ SPDK_TEST_NVMF=1 00:01:37.196 ++ SPDK_TEST_NVME_CLI=1 00:01:37.196 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:37.196 ++ SPDK_TEST_NVMF_NICS=e810 00:01:37.196 ++ SPDK_TEST_VFIOUSER=1 00:01:37.196 ++ SPDK_RUN_UBSAN=1 00:01:37.196 ++ NET_TYPE=phy 00:01:37.196 ++ RUN_NIGHTLY=0 00:01:37.196 + case $SPDK_TEST_NVMF_NICS in 00:01:37.196 + DRIVERS=ice 
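The case statement entered just above dispatches on SPDK_TEST_NVMF_NICS; with e810 configured it selects the ice driver, which the modprobe loop in the trace that follows then loads. A sketch of that dispatch (only the e810 branch is taken from this log; the other branches are illustrative assumptions, not from this run):

    case "$SPDK_TEST_NVMF_NICS" in
        e810) DRIVERS=ice ;;        # Intel E810 -- the branch taken in this run
        x722) DRIVERS=i40e ;;       # assumed mapping, not from this log
        mlx5) DRIVERS=mlx5_core ;;  # assumed mapping, not from this log
    esac
    for D in $DRIVERS; do
        sudo modprobe "$D"          # matches "+ sudo modprobe ice" in the trace below
    done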
00:01:37.196 + [[ tcp == \r\d\m\a ]] 00:01:37.196 + [[ -n ice ]] 00:01:37.196 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:37.196 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:43.764 rmmod: ERROR: Module irdma is not currently loaded 00:01:43.764 rmmod: ERROR: Module i40iw is not currently loaded 00:01:43.764 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:43.764 + true 00:01:43.764 + for D in $DRIVERS 00:01:43.764 + sudo modprobe ice 00:01:43.764 + exit 0 00:01:43.774 [Pipeline] } 00:01:43.795 [Pipeline] // withEnv 00:01:43.802 [Pipeline] } 00:01:43.822 [Pipeline] // stage 00:01:43.836 [Pipeline] catchError 00:01:43.837 [Pipeline] { 00:01:43.856 [Pipeline] timeout 00:01:43.856 Timeout set to expire in 50 min 00:01:43.858 [Pipeline] { 00:01:43.873 [Pipeline] stage 00:01:43.875 [Pipeline] { (Tests) 00:01:43.888 [Pipeline] sh 00:01:44.167 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:44.167 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:44.167 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:44.167 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:44.167 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:44.167 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:44.167 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:44.167 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:44.167 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:44.167 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:44.167 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:44.167 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:44.167 + source /etc/os-release 00:01:44.167 ++ NAME='Fedora Linux' 00:01:44.167 ++ VERSION='38 (Cloud Edition)' 00:01:44.167 ++ ID=fedora 00:01:44.167 ++ VERSION_ID=38 00:01:44.167 ++ VERSION_CODENAME= 00:01:44.167 ++ PLATFORM_ID=platform:f38 00:01:44.167 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:44.167 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:44.167 ++ LOGO=fedora-logo-icon 00:01:44.167 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:44.167 ++ HOME_URL=https://fedoraproject.org/ 00:01:44.167 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:44.167 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:44.167 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:44.167 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:44.167 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:44.167 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:44.167 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:44.167 ++ SUPPORT_END=2024-05-14 00:01:44.167 ++ VARIANT='Cloud Edition' 00:01:44.167 ++ VARIANT_ID=cloud 00:01:44.167 + uname -a 00:01:44.167 Linux spdk-wfp-05 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:44.167 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:46.069 Hugepages 00:01:46.069 node hugesize free / total 00:01:46.069 node0 1048576kB 0 / 0 00:01:46.069 node0 2048kB 0 / 0 00:01:46.069 node1 1048576kB 0 / 0 00:01:46.069 node1 2048kB 0 / 0 00:01:46.069 00:01:46.069 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:46.069 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:01:46.069 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:01:46.069 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:01:46.069 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 
00:01:46.069 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:01:46.069 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:01:46.070 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:01:46.070 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:01:46.329 NVMe 0000:5f:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:01:46.329 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:01:46.329 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:01:46.329 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:01:46.329 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:01:46.329 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:01:46.329 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:01:46.329 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:01:46.329 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:01:46.329 + rm -f /tmp/spdk-ld-path 00:01:46.329 + source autorun-spdk.conf 00:01:46.329 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:46.329 ++ SPDK_TEST_NVMF=1 00:01:46.329 ++ SPDK_TEST_NVME_CLI=1 00:01:46.329 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:46.329 ++ SPDK_TEST_NVMF_NICS=e810 00:01:46.329 ++ SPDK_TEST_VFIOUSER=1 00:01:46.329 ++ SPDK_RUN_UBSAN=1 00:01:46.329 ++ NET_TYPE=phy 00:01:46.329 ++ RUN_NIGHTLY=0 00:01:46.329 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:46.329 + [[ -n '' ]] 00:01:46.329 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:46.329 + for M in /var/spdk/build-*-manifest.txt 00:01:46.329 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:46.329 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:46.329 + for M in /var/spdk/build-*-manifest.txt 00:01:46.329 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:46.329 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:46.329 ++ uname 00:01:46.329 + [[ Linux == \L\i\n\u\x ]] 00:01:46.329 + sudo dmesg -T 00:01:46.329 + sudo dmesg --clear 00:01:46.329 + dmesg_pid=3121187 00:01:46.329 + [[ Fedora Linux == FreeBSD ]] 00:01:46.329 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:46.329 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:46.329 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:46.329 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:46.329 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:46.329 + [[ -x /usr/src/fio-static/fio ]] 00:01:46.329 + export FIO_BIN=/usr/src/fio-static/fio 00:01:46.329 + FIO_BIN=/usr/src/fio-static/fio 00:01:46.329 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:46.329 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:46.329 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:46.329 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:46.329 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:46.329 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:46.329 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:46.329 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:46.329 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:46.329 + sudo dmesg -Tw 00:01:46.329 Test configuration: 00:01:46.329 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:46.329 SPDK_TEST_NVMF=1 00:01:46.329 SPDK_TEST_NVME_CLI=1 00:01:46.329 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:46.329 SPDK_TEST_NVMF_NICS=e810 00:01:46.329 SPDK_TEST_VFIOUSER=1 00:01:46.329 SPDK_RUN_UBSAN=1 00:01:46.329 NET_TYPE=phy 00:01:46.635 RUN_NIGHTLY=0 17:56:39 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:46.635 17:56:39 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:46.635 17:56:39 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:46.635 17:56:39 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:46.635 17:56:39 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:46.635 17:56:39 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:46.635 17:56:39 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:46.635 17:56:39 -- paths/export.sh@5 -- $ export PATH 00:01:46.635 17:56:39 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:46.635 17:56:39 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:46.635 17:56:39 -- common/autobuild_common.sh@447 -- $ date +%s 00:01:46.635 17:56:39 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721836599.XXXXXX 00:01:46.635 17:56:39 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721836599.ToQbtQ 00:01:46.635 17:56:39 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:01:46.635 
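The paths/export.sh trace just above grows PATH one toolchain at a time; condensed, the three prepends from this run are equivalent to:

    # Each directory is pushed onto the front of PATH, so the most recently
    # added toolchain (protoc here) wins on lookup:
    PATH=/opt/golangci/1.54.2/bin:$PATH
    PATH=/opt/go/1.21.1/bin:$PATH
    PATH=/opt/protoc/21.7/bin:$PATH
    export PATH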
17:56:39 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:01:46.635 17:56:39 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:46.635 17:56:39 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:46.635 17:56:39 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:46.636 17:56:39 -- common/autobuild_common.sh@463 -- $ get_config_params 00:01:46.636 17:56:39 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:01:46.636 17:56:39 -- common/autotest_common.sh@10 -- $ set +x 00:01:46.636 17:56:39 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:01:46.636 17:56:39 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:01:46.636 17:56:39 -- pm/common@17 -- $ local monitor 00:01:46.636 17:56:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:46.636 17:56:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:46.636 17:56:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:46.636 17:56:39 -- pm/common@21 -- $ date +%s 00:01:46.636 17:56:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:46.636 17:56:39 -- pm/common@21 -- $ date +%s 00:01:46.636 17:56:39 -- pm/common@25 -- $ sleep 1 00:01:46.636 17:56:39 -- pm/common@21 -- $ date +%s 00:01:46.636 17:56:39 -- pm/common@21 -- $ date +%s 00:01:46.636 17:56:39 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721836599 00:01:46.636 17:56:39 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721836599 00:01:46.636 17:56:39 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721836599 00:01:46.636 17:56:39 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721836599 00:01:46.636 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721836599_collect-cpu-temp.pm.log 00:01:46.636 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721836599_collect-vmstat.pm.log 00:01:46.636 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721836599_collect-cpu-load.pm.log 00:01:46.636 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721836599_collect-bmc-pm.bmc.pm.log 00:01:47.574 17:56:40 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 
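Just above, four resource collectors are launched with a single shared epoch suffix (1721836599) so their logs under output/power can be correlated afterwards, and a trap tears them down when the build exits. A condensed sketch of that pattern (the stop helper below is a simplified stand-in for the one defined in the sourced common script):

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    out=$SPDK_DIR/../output/power
    stamp=$(date +%s)    # one timestamp shared by every collector
    for mon in collect-cpu-load collect-vmstat collect-cpu-temp; do
        "$SPDK_DIR/scripts/perf/pm/$mon" -d "$out" -l -p "monitor.autobuild.sh.$stamp" &
    done
    # BMC power readings need root, hence the sudo -E in the trace:
    sudo -E "$SPDK_DIR/scripts/perf/pm/collect-bmc-pm" -d "$out" -l -p "monitor.autobuild.sh.$stamp" &
    stop_monitor_resources() { kill $(jobs -p) 2>/dev/null || true; }  # simplified stand-in
    trap stop_monitor_resources EXIT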
00:01:47.574 17:56:40 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:47.574 17:56:40 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:47.574 17:56:40 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:47.574 17:56:40 -- spdk/autobuild.sh@16 -- $ date -u 00:01:47.574 Wed Jul 24 03:56:40 PM UTC 2024 00:01:47.574 17:56:40 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:47.574 v24.09-pre-313-gac4b3e123 00:01:47.574 17:56:40 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:47.574 17:56:40 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:47.574 17:56:40 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:47.574 17:56:40 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:47.574 17:56:40 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:47.574 17:56:40 -- common/autotest_common.sh@10 -- $ set +x 00:01:47.574 ************************************ 00:01:47.574 START TEST ubsan 00:01:47.574 ************************************ 00:01:47.574 17:56:40 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:01:47.574 using ubsan 00:01:47.574 00:01:47.574 real 0m0.000s 00:01:47.574 user 0m0.000s 00:01:47.574 sys 0m0.000s 00:01:47.574 17:56:40 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:01:47.574 17:56:40 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:47.574 ************************************ 00:01:47.574 END TEST ubsan 00:01:47.574 ************************************ 00:01:47.574 17:56:40 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:47.574 17:56:40 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:47.574 17:56:40 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:47.574 17:56:40 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:47.574 17:56:40 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:47.574 17:56:40 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:47.574 17:56:40 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:47.574 17:56:40 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:47.574 17:56:40 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:01:47.833 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:47.833 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:48.092 Using 'verbs' RDMA provider 00:02:01.228 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:11.204 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:11.204 Creating mk/config.mk...done. 00:02:11.204 Creating mk/cc.flags.mk...done. 00:02:11.204 Type 'make' to build. 00:02:11.204 17:57:03 -- spdk/autobuild.sh@69 -- $ run_test make make -j96 00:02:11.204 17:57:03 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:11.204 17:57:03 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:11.204 17:57:03 -- common/autotest_common.sh@10 -- $ set +x 00:02:11.204 ************************************ 00:02:11.204 START TEST make 00:02:11.204 ************************************ 00:02:11.204 17:57:04 make -- common/autotest_common.sh@1125 -- $ make -j96 00:02:11.462 make[1]: Nothing to be done for 'all'. 
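The START TEST / END TEST banners above are produced by the run_test harness (here invoked as run_test ubsan echo 'using ubsan'). A minimal stand-in that reproduces the observable behavior, banners plus the real/user/sys accounting, assuming nothing about the real helper in autotest_common.sh beyond what this log shows:

    run_test() {
        local name=$1; shift
        local bar='************************************'
        printf '%s\n' "$bar" "START TEST $name" "$bar"
        time "$@"                  # bash's time keyword prints real/user/sys
        printf '%s\n' "$bar" "END TEST $name" "$bar"
    }

    run_test ubsan echo 'using ubsan'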
00:02:12.844 The Meson build system 00:02:12.844 Version: 1.3.1 00:02:12.844 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:02:12.844 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:12.844 Build type: native build 00:02:12.844 Project name: libvfio-user 00:02:12.844 Project version: 0.0.1 00:02:12.844 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:12.844 C linker for the host machine: cc ld.bfd 2.39-16 00:02:12.844 Host machine cpu family: x86_64 00:02:12.844 Host machine cpu: x86_64 00:02:12.844 Run-time dependency threads found: YES 00:02:12.844 Library dl found: YES 00:02:12.844 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:12.844 Run-time dependency json-c found: YES 0.17 00:02:12.844 Run-time dependency cmocka found: YES 1.1.7 00:02:12.844 Program pytest-3 found: NO 00:02:12.844 Program flake8 found: NO 00:02:12.844 Program misspell-fixer found: NO 00:02:12.844 Program restructuredtext-lint found: NO 00:02:12.844 Program valgrind found: YES (/usr/bin/valgrind) 00:02:12.844 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:12.844 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:12.844 Compiler for C supports arguments -Wwrite-strings: YES 00:02:12.844 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:12.844 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:02:12.844 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:02:12.844 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:02:12.844 Build targets in project: 8 00:02:12.844 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:12.844 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:12.844 00:02:12.844 libvfio-user 0.0.1 00:02:12.844 00:02:12.844 User defined options 00:02:12.844 buildtype : debug 00:02:12.844 default_library: shared 00:02:12.844 libdir : /usr/local/lib 00:02:12.844 00:02:12.844 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:13.102 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:13.359 [1/37] Compiling C object samples/null.p/null.c.o 00:02:13.359 [2/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:13.359 [3/37] Compiling C object samples/lspci.p/lspci.c.o 00:02:13.359 [4/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:13.359 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:02:13.359 [6/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:13.359 [7/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:13.359 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:02:13.359 [9/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:13.359 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:02:13.359 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:02:13.359 [12/37] Compiling C object test/unit_tests.p/mocks.c.o 00:02:13.359 [13/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:13.359 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:02:13.359 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:02:13.359 [16/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:13.359 [17/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:13.359 [18/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:13.359 [19/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:13.359 [20/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:13.359 [21/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:13.359 [22/37] Compiling C object samples/server.p/server.c.o 00:02:13.359 [23/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:02:13.359 [24/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:13.359 [25/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:13.359 [26/37] Compiling C object samples/client.p/client.c.o 00:02:13.359 [27/37] Linking target samples/client 00:02:13.617 [28/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:13.617 [29/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:02:13.617 [30/37] Linking target test/unit_tests 00:02:13.617 [31/37] Linking target lib/libvfio-user.so.0.0.1 00:02:13.617 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:02:13.617 [33/37] Linking target samples/lspci 00:02:13.617 [34/37] Linking target samples/gpio-pci-idio-16 00:02:13.617 [35/37] Linking target samples/null 00:02:13.617 [36/37] Linking target samples/server 00:02:13.617 [37/37] Linking target samples/shadow_ioeventfd_server 00:02:13.617 INFO: autodetecting backend as ninja 00:02:13.617 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
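The libvfio-user build that follows is a staged install: ninja builds the configured build-debug tree, then meson install runs with DESTDIR set so the artifacts land under spdk/build/libvfio-user instead of the real root. The equivalent two commands, with the paths from this run:

    ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
    # With DESTDIR set, the libdir from the summary above (/usr/local/lib)
    # resolves to .../build/libvfio-user/usr/local/lib, not the live system:
    DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user \
        meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug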
00:02:13.875 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:14.134 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:14.134 ninja: no work to do. 00:02:19.405 The Meson build system 00:02:19.405 Version: 1.3.1 00:02:19.405 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:02:19.405 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:02:19.405 Build type: native build 00:02:19.405 Program cat found: YES (/usr/bin/cat) 00:02:19.405 Project name: DPDK 00:02:19.405 Project version: 24.03.0 00:02:19.405 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:19.405 C linker for the host machine: cc ld.bfd 2.39-16 00:02:19.405 Host machine cpu family: x86_64 00:02:19.405 Host machine cpu: x86_64 00:02:19.405 Message: ## Building in Developer Mode ## 00:02:19.405 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:19.405 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:02:19.405 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:19.405 Program python3 found: YES (/usr/bin/python3) 00:02:19.405 Program cat found: YES (/usr/bin/cat) 00:02:19.405 Compiler for C supports arguments -march=native: YES 00:02:19.405 Checking for size of "void *" : 8 00:02:19.405 Checking for size of "void *" : 8 (cached) 00:02:19.405 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:02:19.405 Library m found: YES 00:02:19.405 Library numa found: YES 00:02:19.405 Has header "numaif.h" : YES 00:02:19.405 Library fdt found: NO 00:02:19.405 Library execinfo found: NO 00:02:19.405 Has header "execinfo.h" : YES 00:02:19.405 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:19.405 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:19.405 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:19.405 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:19.405 Run-time dependency openssl found: YES 3.0.9 00:02:19.405 Run-time dependency libpcap found: YES 1.10.4 00:02:19.405 Has header "pcap.h" with dependency libpcap: YES 00:02:19.405 Compiler for C supports arguments -Wcast-qual: YES 00:02:19.405 Compiler for C supports arguments -Wdeprecated: YES 00:02:19.405 Compiler for C supports arguments -Wformat: YES 00:02:19.405 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:19.405 Compiler for C supports arguments -Wformat-security: NO 00:02:19.405 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:19.405 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:19.405 Compiler for C supports arguments -Wnested-externs: YES 00:02:19.405 Compiler for C supports arguments -Wold-style-definition: YES 00:02:19.405 Compiler for C supports arguments -Wpointer-arith: YES 00:02:19.405 Compiler for C supports arguments -Wsign-compare: YES 00:02:19.405 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:19.405 Compiler for C supports arguments -Wundef: YES 00:02:19.405 Compiler for C supports arguments -Wwrite-strings: YES 00:02:19.405 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:19.405 Compiler for C supports arguments -Wno-packed-not-aligned: 
YES 00:02:19.405 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:19.405 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:19.405 Program objdump found: YES (/usr/bin/objdump) 00:02:19.405 Compiler for C supports arguments -mavx512f: YES 00:02:19.405 Checking if "AVX512 checking" compiles: YES 00:02:19.405 Fetching value of define "__SSE4_2__" : 1 00:02:19.405 Fetching value of define "__AES__" : 1 00:02:19.406 Fetching value of define "__AVX__" : 1 00:02:19.406 Fetching value of define "__AVX2__" : 1 00:02:19.406 Fetching value of define "__AVX512BW__" : 1 00:02:19.406 Fetching value of define "__AVX512CD__" : 1 00:02:19.406 Fetching value of define "__AVX512DQ__" : 1 00:02:19.406 Fetching value of define "__AVX512F__" : 1 00:02:19.406 Fetching value of define "__AVX512VL__" : 1 00:02:19.406 Fetching value of define "__PCLMUL__" : 1 00:02:19.406 Fetching value of define "__RDRND__" : 1 00:02:19.406 Fetching value of define "__RDSEED__" : 1 00:02:19.406 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:19.406 Fetching value of define "__znver1__" : (undefined) 00:02:19.406 Fetching value of define "__znver2__" : (undefined) 00:02:19.406 Fetching value of define "__znver3__" : (undefined) 00:02:19.406 Fetching value of define "__znver4__" : (undefined) 00:02:19.406 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:19.406 Message: lib/log: Defining dependency "log" 00:02:19.406 Message: lib/kvargs: Defining dependency "kvargs" 00:02:19.406 Message: lib/telemetry: Defining dependency "telemetry" 00:02:19.406 Checking for function "getentropy" : NO 00:02:19.406 Message: lib/eal: Defining dependency "eal" 00:02:19.406 Message: lib/ring: Defining dependency "ring" 00:02:19.406 Message: lib/rcu: Defining dependency "rcu" 00:02:19.406 Message: lib/mempool: Defining dependency "mempool" 00:02:19.406 Message: lib/mbuf: Defining dependency "mbuf" 00:02:19.406 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:19.406 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:19.406 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:19.406 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:19.406 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:19.406 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:19.406 Compiler for C supports arguments -mpclmul: YES 00:02:19.406 Compiler for C supports arguments -maes: YES 00:02:19.406 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:19.406 Compiler for C supports arguments -mavx512bw: YES 00:02:19.406 Compiler for C supports arguments -mavx512dq: YES 00:02:19.406 Compiler for C supports arguments -mavx512vl: YES 00:02:19.406 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:19.406 Compiler for C supports arguments -mavx2: YES 00:02:19.406 Compiler for C supports arguments -mavx: YES 00:02:19.406 Message: lib/net: Defining dependency "net" 00:02:19.406 Message: lib/meter: Defining dependency "meter" 00:02:19.406 Message: lib/ethdev: Defining dependency "ethdev" 00:02:19.406 Message: lib/pci: Defining dependency "pci" 00:02:19.406 Message: lib/cmdline: Defining dependency "cmdline" 00:02:19.406 Message: lib/hash: Defining dependency "hash" 00:02:19.406 Message: lib/timer: Defining dependency "timer" 00:02:19.406 Message: lib/compressdev: Defining dependency "compressdev" 00:02:19.406 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:19.406 Message: lib/dmadev: Defining dependency "dmadev" 00:02:19.406 
Compiler for C supports arguments -Wno-cast-qual: YES 00:02:19.406 Message: lib/power: Defining dependency "power" 00:02:19.406 Message: lib/reorder: Defining dependency "reorder" 00:02:19.406 Message: lib/security: Defining dependency "security" 00:02:19.406 Has header "linux/userfaultfd.h" : YES 00:02:19.406 Has header "linux/vduse.h" : YES 00:02:19.406 Message: lib/vhost: Defining dependency "vhost" 00:02:19.406 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:19.406 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:19.406 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:19.406 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:19.406 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:19.406 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:19.406 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:19.406 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:19.406 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:19.406 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:19.406 Program doxygen found: YES (/usr/bin/doxygen) 00:02:19.406 Configuring doxy-api-html.conf using configuration 00:02:19.406 Configuring doxy-api-man.conf using configuration 00:02:19.406 Program mandb found: YES (/usr/bin/mandb) 00:02:19.406 Program sphinx-build found: NO 00:02:19.406 Configuring rte_build_config.h using configuration 00:02:19.406 Message: 00:02:19.406 ================= 00:02:19.406 Applications Enabled 00:02:19.406 ================= 00:02:19.406 00:02:19.406 apps: 00:02:19.406 00:02:19.406 00:02:19.406 Message: 00:02:19.406 ================= 00:02:19.406 Libraries Enabled 00:02:19.406 ================= 00:02:19.406 00:02:19.406 libs: 00:02:19.406 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:19.406 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:19.406 cryptodev, dmadev, power, reorder, security, vhost, 00:02:19.406 00:02:19.406 Message: 00:02:19.406 =============== 00:02:19.406 Drivers Enabled 00:02:19.406 =============== 00:02:19.406 00:02:19.406 common: 00:02:19.406 00:02:19.406 bus: 00:02:19.406 pci, vdev, 00:02:19.406 mempool: 00:02:19.406 ring, 00:02:19.406 dma: 00:02:19.406 00:02:19.406 net: 00:02:19.406 00:02:19.406 crypto: 00:02:19.406 00:02:19.406 compress: 00:02:19.406 00:02:19.406 vdpa: 00:02:19.406 00:02:19.406 00:02:19.406 Message: 00:02:19.406 ================= 00:02:19.406 Content Skipped 00:02:19.406 ================= 00:02:19.406 00:02:19.406 apps: 00:02:19.406 dumpcap: explicitly disabled via build config 00:02:19.406 graph: explicitly disabled via build config 00:02:19.406 pdump: explicitly disabled via build config 00:02:19.406 proc-info: explicitly disabled via build config 00:02:19.406 test-acl: explicitly disabled via build config 00:02:19.406 test-bbdev: explicitly disabled via build config 00:02:19.406 test-cmdline: explicitly disabled via build config 00:02:19.406 test-compress-perf: explicitly disabled via build config 00:02:19.406 test-crypto-perf: explicitly disabled via build config 00:02:19.406 test-dma-perf: explicitly disabled via build config 00:02:19.406 test-eventdev: explicitly disabled via build config 00:02:19.406 test-fib: explicitly disabled via build config 00:02:19.406 test-flow-perf: explicitly disabled via build config 00:02:19.406 test-gpudev: explicitly disabled via build config 
00:02:19.406 test-mldev: explicitly disabled via build config 00:02:19.406 test-pipeline: explicitly disabled via build config 00:02:19.406 test-pmd: explicitly disabled via build config 00:02:19.406 test-regex: explicitly disabled via build config 00:02:19.406 test-sad: explicitly disabled via build config 00:02:19.406 test-security-perf: explicitly disabled via build config 00:02:19.406 00:02:19.406 libs: 00:02:19.406 argparse: explicitly disabled via build config 00:02:19.406 metrics: explicitly disabled via build config 00:02:19.406 acl: explicitly disabled via build config 00:02:19.406 bbdev: explicitly disabled via build config 00:02:19.406 bitratestats: explicitly disabled via build config 00:02:19.406 bpf: explicitly disabled via build config 00:02:19.406 cfgfile: explicitly disabled via build config 00:02:19.406 distributor: explicitly disabled via build config 00:02:19.406 efd: explicitly disabled via build config 00:02:19.406 eventdev: explicitly disabled via build config 00:02:19.406 dispatcher: explicitly disabled via build config 00:02:19.406 gpudev: explicitly disabled via build config 00:02:19.406 gro: explicitly disabled via build config 00:02:19.406 gso: explicitly disabled via build config 00:02:19.406 ip_frag: explicitly disabled via build config 00:02:19.406 jobstats: explicitly disabled via build config 00:02:19.406 latencystats: explicitly disabled via build config 00:02:19.406 lpm: explicitly disabled via build config 00:02:19.406 member: explicitly disabled via build config 00:02:19.406 pcapng: explicitly disabled via build config 00:02:19.406 rawdev: explicitly disabled via build config 00:02:19.406 regexdev: explicitly disabled via build config 00:02:19.406 mldev: explicitly disabled via build config 00:02:19.406 rib: explicitly disabled via build config 00:02:19.406 sched: explicitly disabled via build config 00:02:19.406 stack: explicitly disabled via build config 00:02:19.406 ipsec: explicitly disabled via build config 00:02:19.406 pdcp: explicitly disabled via build config 00:02:19.406 fib: explicitly disabled via build config 00:02:19.406 port: explicitly disabled via build config 00:02:19.406 pdump: explicitly disabled via build config 00:02:19.406 table: explicitly disabled via build config 00:02:19.406 pipeline: explicitly disabled via build config 00:02:19.406 graph: explicitly disabled via build config 00:02:19.406 node: explicitly disabled via build config 00:02:19.406 00:02:19.406 drivers: 00:02:19.406 common/cpt: not in enabled drivers build config 00:02:19.406 common/dpaax: not in enabled drivers build config 00:02:19.406 common/iavf: not in enabled drivers build config 00:02:19.406 common/idpf: not in enabled drivers build config 00:02:19.406 common/ionic: not in enabled drivers build config 00:02:19.406 common/mvep: not in enabled drivers build config 00:02:19.406 common/octeontx: not in enabled drivers build config 00:02:19.406 bus/auxiliary: not in enabled drivers build config 00:02:19.406 bus/cdx: not in enabled drivers build config 00:02:19.406 bus/dpaa: not in enabled drivers build config 00:02:19.406 bus/fslmc: not in enabled drivers build config 00:02:19.406 bus/ifpga: not in enabled drivers build config 00:02:19.406 bus/platform: not in enabled drivers build config 00:02:19.406 bus/uacce: not in enabled drivers build config 00:02:19.406 bus/vmbus: not in enabled drivers build config 00:02:19.406 common/cnxk: not in enabled drivers build config 00:02:19.406 common/mlx5: not in enabled drivers build config 00:02:19.406 common/nfp: not in 
enabled drivers build config 00:02:19.406 common/nitrox: not in enabled drivers build config 00:02:19.406 common/qat: not in enabled drivers build config 00:02:19.406 common/sfc_efx: not in enabled drivers build config 00:02:19.406 mempool/bucket: not in enabled drivers build config 00:02:19.407 mempool/cnxk: not in enabled drivers build config 00:02:19.407 mempool/dpaa: not in enabled drivers build config 00:02:19.407 mempool/dpaa2: not in enabled drivers build config 00:02:19.407 mempool/octeontx: not in enabled drivers build config 00:02:19.407 mempool/stack: not in enabled drivers build config 00:02:19.407 dma/cnxk: not in enabled drivers build config 00:02:19.407 dma/dpaa: not in enabled drivers build config 00:02:19.407 dma/dpaa2: not in enabled drivers build config 00:02:19.407 dma/hisilicon: not in enabled drivers build config 00:02:19.407 dma/idxd: not in enabled drivers build config 00:02:19.407 dma/ioat: not in enabled drivers build config 00:02:19.407 dma/skeleton: not in enabled drivers build config 00:02:19.407 net/af_packet: not in enabled drivers build config 00:02:19.407 net/af_xdp: not in enabled drivers build config 00:02:19.407 net/ark: not in enabled drivers build config 00:02:19.407 net/atlantic: not in enabled drivers build config 00:02:19.407 net/avp: not in enabled drivers build config 00:02:19.407 net/axgbe: not in enabled drivers build config 00:02:19.407 net/bnx2x: not in enabled drivers build config 00:02:19.407 net/bnxt: not in enabled drivers build config 00:02:19.407 net/bonding: not in enabled drivers build config 00:02:19.407 net/cnxk: not in enabled drivers build config 00:02:19.407 net/cpfl: not in enabled drivers build config 00:02:19.407 net/cxgbe: not in enabled drivers build config 00:02:19.407 net/dpaa: not in enabled drivers build config 00:02:19.407 net/dpaa2: not in enabled drivers build config 00:02:19.407 net/e1000: not in enabled drivers build config 00:02:19.407 net/ena: not in enabled drivers build config 00:02:19.407 net/enetc: not in enabled drivers build config 00:02:19.407 net/enetfec: not in enabled drivers build config 00:02:19.407 net/enic: not in enabled drivers build config 00:02:19.407 net/failsafe: not in enabled drivers build config 00:02:19.407 net/fm10k: not in enabled drivers build config 00:02:19.407 net/gve: not in enabled drivers build config 00:02:19.407 net/hinic: not in enabled drivers build config 00:02:19.407 net/hns3: not in enabled drivers build config 00:02:19.407 net/i40e: not in enabled drivers build config 00:02:19.407 net/iavf: not in enabled drivers build config 00:02:19.407 net/ice: not in enabled drivers build config 00:02:19.407 net/idpf: not in enabled drivers build config 00:02:19.407 net/igc: not in enabled drivers build config 00:02:19.407 net/ionic: not in enabled drivers build config 00:02:19.407 net/ipn3ke: not in enabled drivers build config 00:02:19.407 net/ixgbe: not in enabled drivers build config 00:02:19.407 net/mana: not in enabled drivers build config 00:02:19.407 net/memif: not in enabled drivers build config 00:02:19.407 net/mlx4: not in enabled drivers build config 00:02:19.407 net/mlx5: not in enabled drivers build config 00:02:19.407 net/mvneta: not in enabled drivers build config 00:02:19.407 net/mvpp2: not in enabled drivers build config 00:02:19.407 net/netvsc: not in enabled drivers build config 00:02:19.407 net/nfb: not in enabled drivers build config 00:02:19.407 net/nfp: not in enabled drivers build config 00:02:19.407 net/ngbe: not in enabled drivers build config 00:02:19.407 
net/null: not in enabled drivers build config 00:02:19.407 net/octeontx: not in enabled drivers build config 00:02:19.407 net/octeon_ep: not in enabled drivers build config 00:02:19.407 net/pcap: not in enabled drivers build config 00:02:19.407 net/pfe: not in enabled drivers build config 00:02:19.407 net/qede: not in enabled drivers build config 00:02:19.407 net/ring: not in enabled drivers build config 00:02:19.407 net/sfc: not in enabled drivers build config 00:02:19.407 net/softnic: not in enabled drivers build config 00:02:19.407 net/tap: not in enabled drivers build config 00:02:19.407 net/thunderx: not in enabled drivers build config 00:02:19.407 net/txgbe: not in enabled drivers build config 00:02:19.407 net/vdev_netvsc: not in enabled drivers build config 00:02:19.407 net/vhost: not in enabled drivers build config 00:02:19.407 net/virtio: not in enabled drivers build config 00:02:19.407 net/vmxnet3: not in enabled drivers build config 00:02:19.407 raw/*: missing internal dependency, "rawdev" 00:02:19.407 crypto/armv8: not in enabled drivers build config 00:02:19.407 crypto/bcmfs: not in enabled drivers build config 00:02:19.407 crypto/caam_jr: not in enabled drivers build config 00:02:19.407 crypto/ccp: not in enabled drivers build config 00:02:19.407 crypto/cnxk: not in enabled drivers build config 00:02:19.407 crypto/dpaa_sec: not in enabled drivers build config 00:02:19.407 crypto/dpaa2_sec: not in enabled drivers build config 00:02:19.407 crypto/ipsec_mb: not in enabled drivers build config 00:02:19.407 crypto/mlx5: not in enabled drivers build config 00:02:19.407 crypto/mvsam: not in enabled drivers build config 00:02:19.407 crypto/nitrox: not in enabled drivers build config 00:02:19.407 crypto/null: not in enabled drivers build config 00:02:19.407 crypto/octeontx: not in enabled drivers build config 00:02:19.407 crypto/openssl: not in enabled drivers build config 00:02:19.407 crypto/scheduler: not in enabled drivers build config 00:02:19.407 crypto/uadk: not in enabled drivers build config 00:02:19.407 crypto/virtio: not in enabled drivers build config 00:02:19.407 compress/isal: not in enabled drivers build config 00:02:19.407 compress/mlx5: not in enabled drivers build config 00:02:19.407 compress/nitrox: not in enabled drivers build config 00:02:19.407 compress/octeontx: not in enabled drivers build config 00:02:19.407 compress/zlib: not in enabled drivers build config 00:02:19.407 regex/*: missing internal dependency, "regexdev" 00:02:19.407 ml/*: missing internal dependency, "mldev" 00:02:19.407 vdpa/ifc: not in enabled drivers build config 00:02:19.407 vdpa/mlx5: not in enabled drivers build config 00:02:19.407 vdpa/nfp: not in enabled drivers build config 00:02:19.407 vdpa/sfc: not in enabled drivers build config 00:02:19.407 event/*: missing internal dependency, "eventdev" 00:02:19.407 baseband/*: missing internal dependency, "bbdev" 00:02:19.407 gpu/*: missing internal dependency, "gpudev" 00:02:19.407 00:02:19.407 00:02:19.407 Build targets in project: 85 00:02:19.407 00:02:19.407 DPDK 24.03.0 00:02:19.407 00:02:19.407 User defined options 00:02:19.407 buildtype : debug 00:02:19.407 default_library : shared 00:02:19.407 libdir : lib 00:02:19.407 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:02:19.407 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:19.407 c_link_args : 00:02:19.407 cpu_instruction_set: native 00:02:19.407 disable_apps : 
test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev 00:02:19.407 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev 00:02:19.407 enable_docs : false 00:02:19.407 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:19.407 enable_kmods : false 00:02:19.407 max_lcores : 128 00:02:19.407 tests : false 00:02:19.407 00:02:19.407 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:19.666 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:02:19.928 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:19.928 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:19.928 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:19.928 [4/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:19.928 [5/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:19.928 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:19.928 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:19.928 [8/268] Linking static target lib/librte_kvargs.a 00:02:19.928 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:19.928 [10/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:19.928 [11/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:19.928 [12/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:19.928 [13/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:19.928 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:19.928 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:19.928 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:19.928 [17/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:19.928 [18/268] Linking static target lib/librte_log.a 00:02:19.928 [19/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:20.189 [20/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:20.189 [21/268] Linking static target lib/librte_pci.a 00:02:20.189 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:20.189 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:20.189 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:20.189 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:20.189 [26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:20.448 [27/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:20.448 [28/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:20.448 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:20.448 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:20.448 [31/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:20.448 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:20.448 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:20.448 [34/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:20.448 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:20.448 [36/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:20.448 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:20.448 [38/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:20.448 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:20.448 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:20.448 [41/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:20.448 [42/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:20.448 [43/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:20.448 [44/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:20.448 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:20.448 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:20.448 [47/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:20.448 [48/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:20.448 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:20.448 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:20.448 [51/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:20.448 [52/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:20.448 [53/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.448 [54/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:20.448 [55/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:20.448 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:20.448 [57/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:20.448 [58/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:20.448 [59/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:20.448 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:20.448 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:20.448 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:20.448 [63/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:20.448 [64/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:20.448 [65/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:20.448 [66/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:20.448 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:20.448 [68/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:20.448 [69/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:20.448 [70/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 
00:02:20.448 [71/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:20.448 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:20.448 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:20.448 [74/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:20.448 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:20.448 [76/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:20.448 [77/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:20.448 [78/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:20.448 [79/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:20.448 [80/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:20.448 [81/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:20.448 [82/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.448 [83/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:20.448 [84/268] Linking static target lib/librte_meter.a 00:02:20.448 [85/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:20.448 [86/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:20.448 [87/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:20.449 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:20.449 [89/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:20.449 [90/268] Linking static target lib/librte_telemetry.a 00:02:20.449 [91/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:20.449 [92/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:20.449 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:20.449 [94/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:20.449 [95/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:20.449 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:20.449 [97/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:20.449 [98/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:20.449 [99/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:20.449 [100/268] Linking static target lib/librte_ring.a 00:02:20.707 [101/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:20.707 [102/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:20.707 [103/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:20.707 [104/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:20.707 [105/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:20.707 [106/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:20.707 [107/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:20.707 [108/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:20.707 [109/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:20.707 [110/268] Linking static target lib/librte_net.a 00:02:20.707 [111/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:20.707 [112/268] 
Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:20.707 [113/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:20.707 [114/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:20.707 [115/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:20.707 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:20.707 [117/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:20.707 [118/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:20.707 [119/268] Linking static target lib/librte_mempool.a 00:02:20.707 [120/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:20.707 [121/268] Linking static target lib/librte_eal.a 00:02:20.707 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:20.707 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:20.707 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:20.707 [125/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:20.707 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:20.707 [127/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:20.707 [128/268] Linking static target lib/librte_rcu.a 00:02:20.707 [129/268] Linking static target lib/librte_cmdline.a 00:02:20.707 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:20.707 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:20.707 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:20.707 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:20.707 [134/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.707 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:20.707 [136/268] Linking target lib/librte_log.so.24.1 00:02:20.707 [137/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:20.707 [138/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:20.707 [139/268] Linking static target lib/librte_mbuf.a 00:02:20.707 [140/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.707 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:20.707 [142/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.967 [143/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:20.967 [144/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.967 [145/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:20.967 [146/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:20.967 [147/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:20.967 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:20.967 [149/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:20.967 [150/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:20.967 [151/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:20.967 [152/268] Linking static target 
lib/librte_dmadev.a 00:02:20.967 [153/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:20.967 [154/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:20.967 [155/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:20.967 [156/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:20.967 [157/268] Linking target lib/librte_kvargs.so.24.1 00:02:20.967 [158/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:20.967 [159/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:20.967 [160/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:20.967 [161/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:20.967 [162/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:20.967 [163/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:20.967 [164/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.967 [165/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:20.967 [166/268] Linking static target lib/librte_timer.a 00:02:20.967 [167/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:20.967 [168/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:20.967 [169/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:20.967 [170/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:20.967 [171/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.967 [172/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:20.967 [173/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:20.967 [174/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:20.967 [175/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:20.967 [176/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:20.967 [177/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:20.967 [178/268] Linking static target lib/librte_power.a 00:02:20.967 [179/268] Linking target lib/librte_telemetry.so.24.1 00:02:20.967 [180/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:20.967 [181/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:20.967 [182/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:21.225 [183/268] Linking static target lib/librte_compressdev.a 00:02:21.225 [184/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:21.225 [185/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:21.225 [186/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:21.225 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:21.225 [188/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:21.225 [189/268] Linking static target lib/librte_reorder.a 00:02:21.225 [190/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:21.225 [191/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:21.225 [192/268] Compiling C object 
lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:21.225 [193/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:21.225 [194/268] Linking static target drivers/librte_bus_vdev.a 00:02:21.225 [195/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:21.225 [196/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:21.225 [197/268] Linking static target lib/librte_security.a 00:02:21.225 [198/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:21.225 [199/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:21.225 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:21.225 [201/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:21.225 [202/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:21.225 [203/268] Linking static target drivers/librte_mempool_ring.a 00:02:21.225 [204/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:21.225 [205/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:21.225 [206/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:21.225 [207/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:21.483 [208/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:21.483 [209/268] Linking static target lib/librte_hash.a 00:02:21.483 [210/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:21.483 [211/268] Linking static target drivers/librte_bus_pci.a 00:02:21.483 [212/268] Linking static target lib/librte_cryptodev.a 00:02:21.483 [213/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.483 [214/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.483 [215/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.483 [216/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.483 [217/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.483 [218/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.742 [219/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.742 [220/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.742 [221/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.742 [222/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:21.742 [223/268] Linking static target lib/librte_ethdev.a 00:02:22.000 [224/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:22.000 [225/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.000 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.262 [227/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.832 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:22.832 [229/268] Linking static 
target lib/librte_vhost.a 00:02:23.091 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.990 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.325 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.325 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.582 [234/268] Linking target lib/librte_eal.so.24.1 00:02:30.582 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:30.582 [236/268] Linking target lib/librte_ring.so.24.1 00:02:30.582 [237/268] Linking target lib/librte_pci.so.24.1 00:02:30.582 [238/268] Linking target lib/librte_dmadev.so.24.1 00:02:30.582 [239/268] Linking target lib/librte_meter.so.24.1 00:02:30.582 [240/268] Linking target lib/librte_timer.so.24.1 00:02:30.582 [241/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:30.841 [242/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:30.841 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:30.841 [244/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:30.841 [245/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:30.841 [246/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:30.841 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:30.841 [248/268] Linking target lib/librte_rcu.so.24.1 00:02:30.841 [249/268] Linking target lib/librte_mempool.so.24.1 00:02:31.100 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:31.100 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:31.100 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:31.100 [253/268] Linking target lib/librte_mbuf.so.24.1 00:02:31.100 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:31.100 [255/268] Linking target lib/librte_compressdev.so.24.1 00:02:31.100 [256/268] Linking target lib/librte_reorder.so.24.1 00:02:31.100 [257/268] Linking target lib/librte_net.so.24.1 00:02:31.100 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:02:31.359 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:31.359 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:31.359 [261/268] Linking target lib/librte_cmdline.so.24.1 00:02:31.359 [262/268] Linking target lib/librte_hash.so.24.1 00:02:31.359 [263/268] Linking target lib/librte_security.so.24.1 00:02:31.359 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:31.618 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:31.618 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:31.618 [267/268] Linking target lib/librte_power.so.24.1 00:02:31.618 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:31.618 INFO: autodetecting backend as ninja 00:02:31.618 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96 00:02:32.554 CC lib/ut_mock/mock.o 00:02:32.554 CC lib/log/log.o 00:02:32.554 CC lib/log/log_flags.o 00:02:32.554 CC lib/log/log_deprecated.o 
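The meson configuration summary near the top of this build section (disable_libs, enable_drivers, enable_docs, enable_kmods, tests, max_lcores) corresponds to stock DPDK meson options. As a hedged sketch only — the exact command SPDK's dpdkbuild wrapper issued is not shown in this log, and the disable_libs value is abridged here — an equivalent manual configure-and-build would look like:

  # configure a trimmed DPDK build (only the buses/mempool SPDK needs), then
  # hand it to ninja with the same -j 96 parallelism the log reports above
  meson setup build-tmp \
      -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
      -Ddisable_libs=port,sched,rib,node,ipsec,distributor \
      -Dtests=false -Denable_docs=false -Dmax_lcores=128
  ninja -C build-tmp -j 96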
00:02:32.554 CC lib/ut/ut.o 00:02:32.812 LIB libspdk_ut_mock.a 00:02:32.812 LIB libspdk_log.a 00:02:32.812 SO libspdk_ut_mock.so.6.0 00:02:32.812 LIB libspdk_ut.a 00:02:32.812 SO libspdk_log.so.7.0 00:02:32.812 SO libspdk_ut.so.2.0 00:02:32.812 SYMLINK libspdk_ut_mock.so 00:02:32.812 SYMLINK libspdk_log.so 00:02:32.812 SYMLINK libspdk_ut.so 00:02:33.070 CXX lib/trace_parser/trace.o 00:02:33.070 CC lib/util/base64.o 00:02:33.070 CC lib/util/bit_array.o 00:02:33.070 CC lib/util/cpuset.o 00:02:33.071 CC lib/util/crc16.o 00:02:33.071 CC lib/util/crc32.o 00:02:33.071 CC lib/util/crc64.o 00:02:33.071 CC lib/util/crc32c.o 00:02:33.071 CC lib/util/crc32_ieee.o 00:02:33.071 CC lib/dma/dma.o 00:02:33.071 CC lib/util/fd.o 00:02:33.071 CC lib/util/dif.o 00:02:33.071 CC lib/util/fd_group.o 00:02:33.071 CC lib/util/file.o 00:02:33.071 CC lib/util/iov.o 00:02:33.071 CC lib/util/hexlify.o 00:02:33.071 CC lib/util/net.o 00:02:33.071 CC lib/util/math.o 00:02:33.071 CC lib/util/pipe.o 00:02:33.071 CC lib/util/strerror_tls.o 00:02:33.071 CC lib/util/string.o 00:02:33.071 CC lib/util/uuid.o 00:02:33.071 CC lib/util/xor.o 00:02:33.071 CC lib/util/zipf.o 00:02:33.071 CC lib/ioat/ioat.o 00:02:33.329 CC lib/vfio_user/host/vfio_user_pci.o 00:02:33.329 CC lib/vfio_user/host/vfio_user.o 00:02:33.329 LIB libspdk_dma.a 00:02:33.329 SO libspdk_dma.so.4.0 00:02:33.329 SYMLINK libspdk_dma.so 00:02:33.329 LIB libspdk_ioat.a 00:02:33.329 SO libspdk_ioat.so.7.0 00:02:33.588 LIB libspdk_vfio_user.a 00:02:33.588 SYMLINK libspdk_ioat.so 00:02:33.588 LIB libspdk_util.a 00:02:33.588 SO libspdk_vfio_user.so.5.0 00:02:33.588 SO libspdk_util.so.10.0 00:02:33.588 SYMLINK libspdk_vfio_user.so 00:02:33.588 SYMLINK libspdk_util.so 00:02:33.846 LIB libspdk_trace_parser.a 00:02:33.846 SO libspdk_trace_parser.so.5.0 00:02:33.846 SYMLINK libspdk_trace_parser.so 00:02:34.104 CC lib/idxd/idxd_user.o 00:02:34.104 CC lib/idxd/idxd.o 00:02:34.104 CC lib/rdma_provider/common.o 00:02:34.104 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:34.104 CC lib/idxd/idxd_kernel.o 00:02:34.104 CC lib/rdma_utils/rdma_utils.o 00:02:34.104 CC lib/json/json_parse.o 00:02:34.104 CC lib/json/json_write.o 00:02:34.104 CC lib/json/json_util.o 00:02:34.104 CC lib/vmd/vmd.o 00:02:34.104 CC lib/vmd/led.o 00:02:34.104 CC lib/conf/conf.o 00:02:34.104 CC lib/env_dpdk/memory.o 00:02:34.104 CC lib/env_dpdk/env.o 00:02:34.104 CC lib/env_dpdk/init.o 00:02:34.104 CC lib/env_dpdk/pci.o 00:02:34.104 CC lib/env_dpdk/threads.o 00:02:34.104 CC lib/env_dpdk/pci_ioat.o 00:02:34.104 CC lib/env_dpdk/pci_virtio.o 00:02:34.104 CC lib/env_dpdk/pci_vmd.o 00:02:34.104 CC lib/env_dpdk/pci_idxd.o 00:02:34.104 CC lib/env_dpdk/pci_event.o 00:02:34.104 CC lib/env_dpdk/sigbus_handler.o 00:02:34.104 CC lib/env_dpdk/pci_dpdk.o 00:02:34.104 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:34.104 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:34.104 LIB libspdk_rdma_provider.a 00:02:34.362 SO libspdk_rdma_provider.so.6.0 00:02:34.362 LIB libspdk_conf.a 00:02:34.362 LIB libspdk_rdma_utils.a 00:02:34.362 SO libspdk_conf.so.6.0 00:02:34.362 SO libspdk_rdma_utils.so.1.0 00:02:34.362 LIB libspdk_json.a 00:02:34.362 SYMLINK libspdk_rdma_provider.so 00:02:34.362 SYMLINK libspdk_conf.so 00:02:34.362 SO libspdk_json.so.6.0 00:02:34.362 SYMLINK libspdk_rdma_utils.so 00:02:34.362 SYMLINK libspdk_json.so 00:02:34.362 LIB libspdk_idxd.a 00:02:34.619 SO libspdk_idxd.so.12.0 00:02:34.619 LIB libspdk_vmd.a 00:02:34.619 SO libspdk_vmd.so.6.0 00:02:34.619 SYMLINK libspdk_idxd.so 00:02:34.619 SYMLINK libspdk_vmd.so 00:02:34.619 CC 
lib/jsonrpc/jsonrpc_server.o 00:02:34.619 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:34.619 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:34.619 CC lib/jsonrpc/jsonrpc_client.o 00:02:34.878 LIB libspdk_jsonrpc.a 00:02:34.878 SO libspdk_jsonrpc.so.6.0 00:02:34.878 SYMLINK libspdk_jsonrpc.so 00:02:35.136 LIB libspdk_env_dpdk.a 00:02:35.136 SO libspdk_env_dpdk.so.15.0 00:02:35.136 SYMLINK libspdk_env_dpdk.so 00:02:35.394 CC lib/rpc/rpc.o 00:02:35.394 LIB libspdk_rpc.a 00:02:35.394 SO libspdk_rpc.so.6.0 00:02:35.652 SYMLINK libspdk_rpc.so 00:02:35.910 CC lib/notify/notify.o 00:02:35.910 CC lib/notify/notify_rpc.o 00:02:35.910 CC lib/trace/trace.o 00:02:35.910 CC lib/trace/trace_flags.o 00:02:35.910 CC lib/trace/trace_rpc.o 00:02:35.910 CC lib/keyring/keyring.o 00:02:35.910 CC lib/keyring/keyring_rpc.o 00:02:35.910 LIB libspdk_notify.a 00:02:35.910 SO libspdk_notify.so.6.0 00:02:35.910 LIB libspdk_trace.a 00:02:36.170 SYMLINK libspdk_notify.so 00:02:36.170 LIB libspdk_keyring.a 00:02:36.170 SO libspdk_trace.so.10.0 00:02:36.170 SO libspdk_keyring.so.1.0 00:02:36.170 SYMLINK libspdk_trace.so 00:02:36.170 SYMLINK libspdk_keyring.so 00:02:36.430 CC lib/thread/thread.o 00:02:36.430 CC lib/sock/sock.o 00:02:36.430 CC lib/thread/iobuf.o 00:02:36.430 CC lib/sock/sock_rpc.o 00:02:36.688 LIB libspdk_sock.a 00:02:36.688 SO libspdk_sock.so.10.0 00:02:36.688 SYMLINK libspdk_sock.so 00:02:37.255 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:37.255 CC lib/nvme/nvme_ctrlr.o 00:02:37.255 CC lib/nvme/nvme_fabric.o 00:02:37.255 CC lib/nvme/nvme_ns_cmd.o 00:02:37.255 CC lib/nvme/nvme_ns.o 00:02:37.255 CC lib/nvme/nvme_pcie_common.o 00:02:37.255 CC lib/nvme/nvme_pcie.o 00:02:37.255 CC lib/nvme/nvme_qpair.o 00:02:37.255 CC lib/nvme/nvme.o 00:02:37.255 CC lib/nvme/nvme_quirks.o 00:02:37.255 CC lib/nvme/nvme_transport.o 00:02:37.255 CC lib/nvme/nvme_discovery.o 00:02:37.255 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:37.255 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:37.255 CC lib/nvme/nvme_opal.o 00:02:37.255 CC lib/nvme/nvme_tcp.o 00:02:37.255 CC lib/nvme/nvme_io_msg.o 00:02:37.255 CC lib/nvme/nvme_zns.o 00:02:37.255 CC lib/nvme/nvme_stubs.o 00:02:37.255 CC lib/nvme/nvme_poll_group.o 00:02:37.255 CC lib/nvme/nvme_auth.o 00:02:37.255 CC lib/nvme/nvme_cuse.o 00:02:37.255 CC lib/nvme/nvme_vfio_user.o 00:02:37.255 CC lib/nvme/nvme_rdma.o 00:02:37.512 LIB libspdk_thread.a 00:02:37.512 SO libspdk_thread.so.10.1 00:02:37.512 SYMLINK libspdk_thread.so 00:02:37.769 CC lib/vfu_tgt/tgt_endpoint.o 00:02:37.769 CC lib/vfu_tgt/tgt_rpc.o 00:02:37.769 CC lib/init/subsystem_rpc.o 00:02:37.769 CC lib/init/json_config.o 00:02:37.769 CC lib/init/subsystem.o 00:02:37.769 CC lib/init/rpc.o 00:02:37.769 CC lib/accel/accel.o 00:02:37.769 CC lib/accel/accel_sw.o 00:02:37.769 CC lib/accel/accel_rpc.o 00:02:37.769 CC lib/blob/blobstore.o 00:02:37.769 CC lib/blob/zeroes.o 00:02:37.769 CC lib/blob/request.o 00:02:37.769 CC lib/blob/blob_bs_dev.o 00:02:37.769 CC lib/virtio/virtio.o 00:02:37.769 CC lib/virtio/virtio_vfio_user.o 00:02:37.769 CC lib/virtio/virtio_pci.o 00:02:37.769 CC lib/virtio/virtio_vhost_user.o 00:02:38.028 LIB libspdk_init.a 00:02:38.028 SO libspdk_init.so.5.0 00:02:38.028 LIB libspdk_vfu_tgt.a 00:02:38.028 SO libspdk_vfu_tgt.so.3.0 00:02:38.028 LIB libspdk_virtio.a 00:02:38.028 SYMLINK libspdk_init.so 00:02:38.286 SO libspdk_virtio.so.7.0 00:02:38.286 SYMLINK libspdk_vfu_tgt.so 00:02:38.286 SYMLINK libspdk_virtio.so 00:02:38.543 CC lib/event/app.o 00:02:38.543 CC lib/event/reactor.o 00:02:38.543 CC lib/event/log_rpc.o 00:02:38.543 CC 
lib/event/app_rpc.o 00:02:38.543 CC lib/event/scheduler_static.o 00:02:38.543 LIB libspdk_accel.a 00:02:38.543 SO libspdk_accel.so.16.0 00:02:38.543 SYMLINK libspdk_accel.so 00:02:38.802 LIB libspdk_nvme.a 00:02:38.802 LIB libspdk_event.a 00:02:38.802 SO libspdk_nvme.so.13.1 00:02:38.802 SO libspdk_event.so.14.0 00:02:38.802 SYMLINK libspdk_event.so 00:02:38.802 CC lib/bdev/bdev.o 00:02:38.802 CC lib/bdev/bdev_rpc.o 00:02:38.802 CC lib/bdev/bdev_zone.o 00:02:38.802 CC lib/bdev/part.o 00:02:38.802 CC lib/bdev/scsi_nvme.o 00:02:39.059 SYMLINK libspdk_nvme.so 00:02:39.994 LIB libspdk_blob.a 00:02:39.994 SO libspdk_blob.so.11.0 00:02:39.994 SYMLINK libspdk_blob.so 00:02:40.252 CC lib/blobfs/blobfs.o 00:02:40.252 CC lib/blobfs/tree.o 00:02:40.252 CC lib/lvol/lvol.o 00:02:40.817 LIB libspdk_bdev.a 00:02:40.817 SO libspdk_bdev.so.16.0 00:02:40.817 SYMLINK libspdk_bdev.so 00:02:40.817 LIB libspdk_blobfs.a 00:02:40.817 SO libspdk_blobfs.so.10.0 00:02:40.817 LIB libspdk_lvol.a 00:02:40.817 SYMLINK libspdk_blobfs.so 00:02:41.075 SO libspdk_lvol.so.10.0 00:02:41.075 SYMLINK libspdk_lvol.so 00:02:41.075 CC lib/scsi/dev.o 00:02:41.075 CC lib/scsi/lun.o 00:02:41.075 CC lib/scsi/port.o 00:02:41.075 CC lib/scsi/scsi.o 00:02:41.075 CC lib/nvmf/ctrlr.o 00:02:41.075 CC lib/scsi/scsi_bdev.o 00:02:41.075 CC lib/nvmf/ctrlr_discovery.o 00:02:41.075 CC lib/scsi/scsi_pr.o 00:02:41.075 CC lib/scsi/scsi_rpc.o 00:02:41.075 CC lib/nvmf/ctrlr_bdev.o 00:02:41.075 CC lib/nvmf/subsystem.o 00:02:41.075 CC lib/nvmf/nvmf.o 00:02:41.075 CC lib/scsi/task.o 00:02:41.075 CC lib/nvmf/nvmf_rpc.o 00:02:41.075 CC lib/nvmf/transport.o 00:02:41.075 CC lib/nvmf/tcp.o 00:02:41.075 CC lib/nvmf/stubs.o 00:02:41.075 CC lib/nvmf/mdns_server.o 00:02:41.075 CC lib/nvmf/vfio_user.o 00:02:41.075 CC lib/nvmf/rdma.o 00:02:41.075 CC lib/nvmf/auth.o 00:02:41.075 CC lib/ftl/ftl_init.o 00:02:41.075 CC lib/ftl/ftl_core.o 00:02:41.075 CC lib/ftl/ftl_layout.o 00:02:41.075 CC lib/nbd/nbd.o 00:02:41.075 CC lib/ftl/ftl_debug.o 00:02:41.075 CC lib/nbd/nbd_rpc.o 00:02:41.075 CC lib/ftl/ftl_io.o 00:02:41.075 CC lib/ftl/ftl_sb.o 00:02:41.075 CC lib/ublk/ublk.o 00:02:41.075 CC lib/ftl/ftl_l2p.o 00:02:41.075 CC lib/ftl/ftl_l2p_flat.o 00:02:41.075 CC lib/ublk/ublk_rpc.o 00:02:41.075 CC lib/ftl/ftl_band.o 00:02:41.075 CC lib/ftl/ftl_nv_cache.o 00:02:41.075 CC lib/ftl/ftl_band_ops.o 00:02:41.075 CC lib/ftl/ftl_rq.o 00:02:41.075 CC lib/ftl/ftl_writer.o 00:02:41.075 CC lib/ftl/ftl_reloc.o 00:02:41.075 CC lib/ftl/ftl_p2l.o 00:02:41.075 CC lib/ftl/ftl_l2p_cache.o 00:02:41.075 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:41.075 CC lib/ftl/mngt/ftl_mngt.o 00:02:41.075 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:41.075 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:41.075 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:41.075 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:41.075 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:41.075 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:41.075 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:41.075 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:41.075 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:41.075 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:41.075 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:41.075 CC lib/ftl/utils/ftl_conf.o 00:02:41.075 CC lib/ftl/utils/ftl_md.o 00:02:41.075 CC lib/ftl/utils/ftl_mempool.o 00:02:41.075 CC lib/ftl/utils/ftl_bitmap.o 00:02:41.075 CC lib/ftl/utils/ftl_property.o 00:02:41.075 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:41.075 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:41.075 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:41.075 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 
00:02:41.075 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:41.075 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:41.075 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:41.075 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:41.075 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:41.075 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:41.075 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:41.075 CC lib/ftl/base/ftl_base_dev.o 00:02:41.075 CC lib/ftl/base/ftl_base_bdev.o 00:02:41.076 CC lib/ftl/ftl_trace.o 00:02:41.641 LIB libspdk_scsi.a 00:02:41.641 LIB libspdk_nbd.a 00:02:41.641 SO libspdk_nbd.so.7.0 00:02:41.641 SO libspdk_scsi.so.9.0 00:02:41.899 SYMLINK libspdk_nbd.so 00:02:41.899 SYMLINK libspdk_scsi.so 00:02:41.899 LIB libspdk_ublk.a 00:02:41.899 SO libspdk_ublk.so.3.0 00:02:41.899 SYMLINK libspdk_ublk.so 00:02:42.158 LIB libspdk_ftl.a 00:02:42.158 CC lib/vhost/vhost_rpc.o 00:02:42.158 CC lib/vhost/vhost.o 00:02:42.158 CC lib/vhost/vhost_scsi.o 00:02:42.158 CC lib/vhost/vhost_blk.o 00:02:42.158 CC lib/iscsi/conn.o 00:02:42.158 CC lib/vhost/rte_vhost_user.o 00:02:42.158 CC lib/iscsi/init_grp.o 00:02:42.158 CC lib/iscsi/iscsi.o 00:02:42.158 CC lib/iscsi/param.o 00:02:42.158 CC lib/iscsi/md5.o 00:02:42.158 CC lib/iscsi/portal_grp.o 00:02:42.158 CC lib/iscsi/tgt_node.o 00:02:42.158 CC lib/iscsi/iscsi_subsystem.o 00:02:42.158 CC lib/iscsi/iscsi_rpc.o 00:02:42.158 CC lib/iscsi/task.o 00:02:42.158 SO libspdk_ftl.so.9.0 00:02:42.417 SYMLINK libspdk_ftl.so 00:02:42.986 LIB libspdk_nvmf.a 00:02:42.986 SO libspdk_nvmf.so.19.0 00:02:42.986 LIB libspdk_vhost.a 00:02:42.986 SO libspdk_vhost.so.8.0 00:02:42.986 SYMLINK libspdk_nvmf.so 00:02:42.986 SYMLINK libspdk_vhost.so 00:02:42.986 LIB libspdk_iscsi.a 00:02:43.246 SO libspdk_iscsi.so.8.0 00:02:43.246 SYMLINK libspdk_iscsi.so 00:02:43.813 CC module/vfu_device/vfu_virtio.o 00:02:43.813 CC module/vfu_device/vfu_virtio_scsi.o 00:02:43.813 CC module/vfu_device/vfu_virtio_blk.o 00:02:43.813 CC module/vfu_device/vfu_virtio_rpc.o 00:02:43.813 CC module/env_dpdk/env_dpdk_rpc.o 00:02:43.813 CC module/sock/posix/posix.o 00:02:43.813 CC module/blob/bdev/blob_bdev.o 00:02:43.813 CC module/accel/iaa/accel_iaa.o 00:02:43.813 CC module/accel/iaa/accel_iaa_rpc.o 00:02:43.813 CC module/keyring/linux/keyring.o 00:02:43.813 CC module/keyring/linux/keyring_rpc.o 00:02:43.813 CC module/accel/ioat/accel_ioat.o 00:02:43.813 CC module/accel/ioat/accel_ioat_rpc.o 00:02:43.813 CC module/scheduler/gscheduler/gscheduler.o 00:02:43.813 CC module/keyring/file/keyring.o 00:02:43.813 CC module/keyring/file/keyring_rpc.o 00:02:43.813 CC module/accel/dsa/accel_dsa_rpc.o 00:02:43.813 CC module/accel/dsa/accel_dsa.o 00:02:43.813 LIB libspdk_env_dpdk_rpc.a 00:02:43.813 CC module/accel/error/accel_error.o 00:02:43.813 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:43.813 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:43.813 CC module/accel/error/accel_error_rpc.o 00:02:44.072 SO libspdk_env_dpdk_rpc.so.6.0 00:02:44.072 SYMLINK libspdk_env_dpdk_rpc.so 00:02:44.072 LIB libspdk_keyring_linux.a 00:02:44.072 LIB libspdk_keyring_file.a 00:02:44.072 LIB libspdk_scheduler_gscheduler.a 00:02:44.072 SO libspdk_keyring_linux.so.1.0 00:02:44.072 LIB libspdk_accel_iaa.a 00:02:44.072 SO libspdk_scheduler_gscheduler.so.4.0 00:02:44.072 LIB libspdk_scheduler_dpdk_governor.a 00:02:44.072 SO libspdk_keyring_file.so.1.0 00:02:44.072 LIB libspdk_accel_ioat.a 00:02:44.072 LIB libspdk_scheduler_dynamic.a 00:02:44.072 LIB libspdk_accel_error.a 00:02:44.072 SO libspdk_accel_iaa.so.3.0 00:02:44.072 SO libspdk_scheduler_dpdk_governor.so.4.0 
00:02:44.072 LIB libspdk_blob_bdev.a 00:02:44.072 SO libspdk_scheduler_dynamic.so.4.0 00:02:44.072 SO libspdk_accel_ioat.so.6.0 00:02:44.072 SYMLINK libspdk_keyring_linux.so 00:02:44.072 SYMLINK libspdk_scheduler_gscheduler.so 00:02:44.072 SO libspdk_accel_error.so.2.0 00:02:44.072 SYMLINK libspdk_keyring_file.so 00:02:44.072 LIB libspdk_accel_dsa.a 00:02:44.072 SO libspdk_blob_bdev.so.11.0 00:02:44.072 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:44.072 SYMLINK libspdk_accel_iaa.so 00:02:44.072 SYMLINK libspdk_scheduler_dynamic.so 00:02:44.072 SO libspdk_accel_dsa.so.5.0 00:02:44.072 SYMLINK libspdk_accel_ioat.so 00:02:44.072 SYMLINK libspdk_blob_bdev.so 00:02:44.330 SYMLINK libspdk_accel_error.so 00:02:44.330 SYMLINK libspdk_accel_dsa.so 00:02:44.330 LIB libspdk_vfu_device.a 00:02:44.330 SO libspdk_vfu_device.so.3.0 00:02:44.330 SYMLINK libspdk_vfu_device.so 00:02:44.330 LIB libspdk_sock_posix.a 00:02:44.589 SO libspdk_sock_posix.so.6.0 00:02:44.589 SYMLINK libspdk_sock_posix.so 00:02:44.589 CC module/bdev/delay/vbdev_delay.o 00:02:44.589 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:44.589 CC module/bdev/nvme/bdev_nvme.o 00:02:44.589 CC module/blobfs/bdev/blobfs_bdev.o 00:02:44.589 CC module/bdev/nvme/bdev_mdns_client.o 00:02:44.589 CC module/bdev/passthru/vbdev_passthru.o 00:02:44.589 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:44.589 CC module/bdev/nvme/nvme_rpc.o 00:02:44.589 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:44.589 CC module/bdev/nvme/vbdev_opal.o 00:02:44.589 CC module/bdev/aio/bdev_aio.o 00:02:44.589 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:44.589 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:44.589 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:44.589 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:44.589 CC module/bdev/lvol/vbdev_lvol.o 00:02:44.589 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:44.589 CC module/bdev/error/vbdev_error_rpc.o 00:02:44.589 CC module/bdev/aio/bdev_aio_rpc.o 00:02:44.589 CC module/bdev/error/vbdev_error.o 00:02:44.589 CC module/bdev/gpt/gpt.o 00:02:44.589 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:44.589 CC module/bdev/null/bdev_null.o 00:02:44.589 CC module/bdev/split/vbdev_split.o 00:02:44.589 CC module/bdev/null/bdev_null_rpc.o 00:02:44.589 CC module/bdev/split/vbdev_split_rpc.o 00:02:44.589 CC module/bdev/gpt/vbdev_gpt.o 00:02:44.589 CC module/bdev/raid/bdev_raid_rpc.o 00:02:44.589 CC module/bdev/raid/bdev_raid.o 00:02:44.589 CC module/bdev/raid/bdev_raid_sb.o 00:02:44.589 CC module/bdev/raid/raid0.o 00:02:44.589 CC module/bdev/raid/raid1.o 00:02:44.589 CC module/bdev/ftl/bdev_ftl.o 00:02:44.589 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:44.589 CC module/bdev/raid/concat.o 00:02:44.589 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:44.589 CC module/bdev/iscsi/bdev_iscsi.o 00:02:44.589 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:44.589 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:44.589 CC module/bdev/malloc/bdev_malloc.o 00:02:44.589 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:44.589 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:44.847 LIB libspdk_blobfs_bdev.a 00:02:44.847 SO libspdk_blobfs_bdev.so.6.0 00:02:44.847 LIB libspdk_bdev_split.a 00:02:44.847 LIB libspdk_bdev_error.a 00:02:44.847 LIB libspdk_bdev_null.a 00:02:44.847 SO libspdk_bdev_split.so.6.0 00:02:44.847 LIB libspdk_bdev_gpt.a 00:02:44.847 SYMLINK libspdk_blobfs_bdev.so 00:02:44.847 SO libspdk_bdev_error.so.6.0 00:02:45.106 SO libspdk_bdev_null.so.6.0 00:02:45.106 LIB libspdk_bdev_ftl.a 00:02:45.106 LIB libspdk_bdev_passthru.a 00:02:45.106 SO 
libspdk_bdev_gpt.so.6.0 00:02:45.106 LIB libspdk_bdev_aio.a 00:02:45.106 LIB libspdk_bdev_delay.a 00:02:45.106 SYMLINK libspdk_bdev_split.so 00:02:45.106 LIB libspdk_bdev_zone_block.a 00:02:45.106 SO libspdk_bdev_passthru.so.6.0 00:02:45.106 SO libspdk_bdev_ftl.so.6.0 00:02:45.106 LIB libspdk_bdev_malloc.a 00:02:45.106 SO libspdk_bdev_delay.so.6.0 00:02:45.106 SYMLINK libspdk_bdev_error.so 00:02:45.106 SO libspdk_bdev_aio.so.6.0 00:02:45.106 SYMLINK libspdk_bdev_gpt.so 00:02:45.106 SYMLINK libspdk_bdev_null.so 00:02:45.106 SO libspdk_bdev_zone_block.so.6.0 00:02:45.106 LIB libspdk_bdev_iscsi.a 00:02:45.106 SO libspdk_bdev_malloc.so.6.0 00:02:45.106 SO libspdk_bdev_iscsi.so.6.0 00:02:45.106 SYMLINK libspdk_bdev_passthru.so 00:02:45.106 SYMLINK libspdk_bdev_delay.so 00:02:45.106 SYMLINK libspdk_bdev_ftl.so 00:02:45.106 SYMLINK libspdk_bdev_zone_block.so 00:02:45.106 SYMLINK libspdk_bdev_aio.so 00:02:45.106 SYMLINK libspdk_bdev_malloc.so 00:02:45.106 SYMLINK libspdk_bdev_iscsi.so 00:02:45.106 LIB libspdk_bdev_lvol.a 00:02:45.106 LIB libspdk_bdev_virtio.a 00:02:45.106 SO libspdk_bdev_lvol.so.6.0 00:02:45.106 SO libspdk_bdev_virtio.so.6.0 00:02:45.365 SYMLINK libspdk_bdev_lvol.so 00:02:45.365 SYMLINK libspdk_bdev_virtio.so 00:02:45.365 LIB libspdk_bdev_raid.a 00:02:45.624 SO libspdk_bdev_raid.so.6.0 00:02:45.624 SYMLINK libspdk_bdev_raid.so 00:02:46.191 LIB libspdk_bdev_nvme.a 00:02:46.191 SO libspdk_bdev_nvme.so.7.0 00:02:46.450 SYMLINK libspdk_bdev_nvme.so 00:02:47.051 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:47.051 CC module/event/subsystems/iobuf/iobuf.o 00:02:47.051 CC module/event/subsystems/keyring/keyring.o 00:02:47.051 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:47.051 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:47.051 CC module/event/subsystems/sock/sock.o 00:02:47.051 CC module/event/subsystems/vmd/vmd.o 00:02:47.051 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:47.051 CC module/event/subsystems/scheduler/scheduler.o 00:02:47.051 LIB libspdk_event_keyring.a 00:02:47.051 LIB libspdk_event_vhost_blk.a 00:02:47.051 LIB libspdk_event_iobuf.a 00:02:47.051 LIB libspdk_event_vmd.a 00:02:47.051 SO libspdk_event_vhost_blk.so.3.0 00:02:47.051 LIB libspdk_event_vfu_tgt.a 00:02:47.051 LIB libspdk_event_sock.a 00:02:47.051 SO libspdk_event_keyring.so.1.0 00:02:47.051 LIB libspdk_event_scheduler.a 00:02:47.051 SO libspdk_event_iobuf.so.3.0 00:02:47.051 SO libspdk_event_vmd.so.6.0 00:02:47.051 SO libspdk_event_vfu_tgt.so.3.0 00:02:47.051 SO libspdk_event_sock.so.5.0 00:02:47.051 SO libspdk_event_scheduler.so.4.0 00:02:47.051 SYMLINK libspdk_event_vhost_blk.so 00:02:47.051 SYMLINK libspdk_event_keyring.so 00:02:47.309 SYMLINK libspdk_event_iobuf.so 00:02:47.309 SYMLINK libspdk_event_vmd.so 00:02:47.309 SYMLINK libspdk_event_sock.so 00:02:47.309 SYMLINK libspdk_event_vfu_tgt.so 00:02:47.309 SYMLINK libspdk_event_scheduler.so 00:02:47.568 CC module/event/subsystems/accel/accel.o 00:02:47.568 LIB libspdk_event_accel.a 00:02:47.568 SO libspdk_event_accel.so.6.0 00:02:47.827 SYMLINK libspdk_event_accel.so 00:02:48.086 CC module/event/subsystems/bdev/bdev.o 00:02:48.086 LIB libspdk_event_bdev.a 00:02:48.086 SO libspdk_event_bdev.so.6.0 00:02:48.344 SYMLINK libspdk_event_bdev.so 00:02:48.603 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:48.603 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:48.603 CC module/event/subsystems/scsi/scsi.o 00:02:48.603 CC module/event/subsystems/nbd/nbd.o 00:02:48.603 CC module/event/subsystems/ublk/ublk.o 00:02:48.603 LIB libspdk_event_scsi.a 
00:02:48.603 LIB libspdk_event_ublk.a 00:02:48.603 LIB libspdk_event_nbd.a 00:02:48.603 SO libspdk_event_scsi.so.6.0 00:02:48.603 SO libspdk_event_nbd.so.6.0 00:02:48.603 SO libspdk_event_ublk.so.3.0 00:02:48.862 LIB libspdk_event_nvmf.a 00:02:48.862 SYMLINK libspdk_event_scsi.so 00:02:48.862 SYMLINK libspdk_event_nbd.so 00:02:48.862 SO libspdk_event_nvmf.so.6.0 00:02:48.862 SYMLINK libspdk_event_ublk.so 00:02:48.862 SYMLINK libspdk_event_nvmf.so 00:02:49.121 CC module/event/subsystems/iscsi/iscsi.o 00:02:49.121 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:49.121 LIB libspdk_event_vhost_scsi.a 00:02:49.121 LIB libspdk_event_iscsi.a 00:02:49.121 SO libspdk_event_vhost_scsi.so.3.0 00:02:49.121 SO libspdk_event_iscsi.so.6.0 00:02:49.380 SYMLINK libspdk_event_iscsi.so 00:02:49.380 SYMLINK libspdk_event_vhost_scsi.so 00:02:49.380 SO libspdk.so.6.0 00:02:49.380 SYMLINK libspdk.so 00:02:49.638 CC app/spdk_lspci/spdk_lspci.o 00:02:49.638 CXX app/trace/trace.o 00:02:49.638 CC app/spdk_nvme_identify/identify.o 00:02:49.638 CC app/spdk_nvme_perf/perf.o 00:02:49.638 CC app/spdk_top/spdk_top.o 00:02:49.638 CC app/trace_record/trace_record.o 00:02:49.638 TEST_HEADER include/spdk/accel.h 00:02:49.638 CC test/rpc_client/rpc_client_test.o 00:02:49.638 TEST_HEADER include/spdk/accel_module.h 00:02:49.638 TEST_HEADER include/spdk/assert.h 00:02:49.638 TEST_HEADER include/spdk/barrier.h 00:02:49.638 TEST_HEADER include/spdk/bdev_module.h 00:02:49.638 TEST_HEADER include/spdk/bdev.h 00:02:49.638 TEST_HEADER include/spdk/bdev_zone.h 00:02:49.638 TEST_HEADER include/spdk/bit_array.h 00:02:49.638 TEST_HEADER include/spdk/base64.h 00:02:49.638 TEST_HEADER include/spdk/bit_pool.h 00:02:49.638 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:49.638 TEST_HEADER include/spdk/blob_bdev.h 00:02:49.638 TEST_HEADER include/spdk/blobfs.h 00:02:49.638 TEST_HEADER include/spdk/conf.h 00:02:49.638 TEST_HEADER include/spdk/blob.h 00:02:49.638 TEST_HEADER include/spdk/config.h 00:02:49.638 TEST_HEADER include/spdk/cpuset.h 00:02:49.902 TEST_HEADER include/spdk/crc64.h 00:02:49.902 TEST_HEADER include/spdk/crc16.h 00:02:49.902 CC app/spdk_nvme_discover/discovery_aer.o 00:02:49.902 TEST_HEADER include/spdk/crc32.h 00:02:49.902 TEST_HEADER include/spdk/env_dpdk.h 00:02:49.902 TEST_HEADER include/spdk/dif.h 00:02:49.902 TEST_HEADER include/spdk/dma.h 00:02:49.902 TEST_HEADER include/spdk/endian.h 00:02:49.902 TEST_HEADER include/spdk/env.h 00:02:49.902 TEST_HEADER include/spdk/event.h 00:02:49.902 TEST_HEADER include/spdk/fd_group.h 00:02:49.902 TEST_HEADER include/spdk/fd.h 00:02:49.902 TEST_HEADER include/spdk/file.h 00:02:49.902 TEST_HEADER include/spdk/ftl.h 00:02:49.902 TEST_HEADER include/spdk/gpt_spec.h 00:02:49.902 TEST_HEADER include/spdk/hexlify.h 00:02:49.902 TEST_HEADER include/spdk/histogram_data.h 00:02:49.902 TEST_HEADER include/spdk/idxd.h 00:02:49.902 TEST_HEADER include/spdk/idxd_spec.h 00:02:49.902 TEST_HEADER include/spdk/init.h 00:02:49.902 TEST_HEADER include/spdk/ioat.h 00:02:49.902 TEST_HEADER include/spdk/ioat_spec.h 00:02:49.902 TEST_HEADER include/spdk/jsonrpc.h 00:02:49.902 TEST_HEADER include/spdk/json.h 00:02:49.902 TEST_HEADER include/spdk/iscsi_spec.h 00:02:49.902 TEST_HEADER include/spdk/keyring_module.h 00:02:49.902 TEST_HEADER include/spdk/keyring.h 00:02:49.902 TEST_HEADER include/spdk/likely.h 00:02:49.902 TEST_HEADER include/spdk/log.h 00:02:49.902 TEST_HEADER include/spdk/memory.h 00:02:49.902 TEST_HEADER include/spdk/nbd.h 00:02:49.902 CC examples/interrupt_tgt/interrupt_tgt.o 
00:02:49.902 TEST_HEADER include/spdk/mmio.h 00:02:49.902 TEST_HEADER include/spdk/lvol.h 00:02:49.902 TEST_HEADER include/spdk/net.h 00:02:49.902 TEST_HEADER include/spdk/notify.h 00:02:49.902 TEST_HEADER include/spdk/nvme.h 00:02:49.902 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:49.902 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:49.902 TEST_HEADER include/spdk/nvme_intel.h 00:02:49.902 TEST_HEADER include/spdk/nvme_spec.h 00:02:49.902 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:49.902 TEST_HEADER include/spdk/nvme_zns.h 00:02:49.902 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:49.902 TEST_HEADER include/spdk/nvmf.h 00:02:49.902 CC app/spdk_dd/spdk_dd.o 00:02:49.902 TEST_HEADER include/spdk/nvmf_spec.h 00:02:49.902 TEST_HEADER include/spdk/nvmf_transport.h 00:02:49.902 TEST_HEADER include/spdk/opal.h 00:02:49.902 CC app/nvmf_tgt/nvmf_main.o 00:02:49.902 TEST_HEADER include/spdk/opal_spec.h 00:02:49.902 TEST_HEADER include/spdk/pci_ids.h 00:02:49.902 TEST_HEADER include/spdk/pipe.h 00:02:49.902 TEST_HEADER include/spdk/queue.h 00:02:49.902 TEST_HEADER include/spdk/rpc.h 00:02:49.902 TEST_HEADER include/spdk/reduce.h 00:02:49.902 TEST_HEADER include/spdk/scheduler.h 00:02:49.902 TEST_HEADER include/spdk/scsi.h 00:02:49.902 CC app/iscsi_tgt/iscsi_tgt.o 00:02:49.902 TEST_HEADER include/spdk/scsi_spec.h 00:02:49.902 TEST_HEADER include/spdk/stdinc.h 00:02:49.903 TEST_HEADER include/spdk/sock.h 00:02:49.903 TEST_HEADER include/spdk/string.h 00:02:49.903 TEST_HEADER include/spdk/thread.h 00:02:49.903 TEST_HEADER include/spdk/trace.h 00:02:49.903 TEST_HEADER include/spdk/trace_parser.h 00:02:49.903 TEST_HEADER include/spdk/tree.h 00:02:49.903 TEST_HEADER include/spdk/ublk.h 00:02:49.903 TEST_HEADER include/spdk/version.h 00:02:49.903 TEST_HEADER include/spdk/util.h 00:02:49.903 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:49.903 TEST_HEADER include/spdk/uuid.h 00:02:49.903 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:49.903 TEST_HEADER include/spdk/vhost.h 00:02:49.903 TEST_HEADER include/spdk/vmd.h 00:02:49.903 TEST_HEADER include/spdk/xor.h 00:02:49.903 TEST_HEADER include/spdk/zipf.h 00:02:49.903 CXX test/cpp_headers/accel.o 00:02:49.903 CC app/spdk_tgt/spdk_tgt.o 00:02:49.903 CXX test/cpp_headers/accel_module.o 00:02:49.903 CXX test/cpp_headers/barrier.o 00:02:49.903 CXX test/cpp_headers/base64.o 00:02:49.903 CXX test/cpp_headers/assert.o 00:02:49.903 CXX test/cpp_headers/bdev.o 00:02:49.903 CXX test/cpp_headers/bit_array.o 00:02:49.903 CXX test/cpp_headers/bdev_zone.o 00:02:49.903 CXX test/cpp_headers/bdev_module.o 00:02:49.903 CXX test/cpp_headers/bit_pool.o 00:02:49.903 CXX test/cpp_headers/blob_bdev.o 00:02:49.903 CXX test/cpp_headers/blobfs.o 00:02:49.903 CXX test/cpp_headers/blob.o 00:02:49.903 CXX test/cpp_headers/conf.o 00:02:49.903 CXX test/cpp_headers/blobfs_bdev.o 00:02:49.903 CXX test/cpp_headers/config.o 00:02:49.903 CXX test/cpp_headers/cpuset.o 00:02:49.903 CXX test/cpp_headers/crc16.o 00:02:49.903 CXX test/cpp_headers/crc32.o 00:02:49.903 CXX test/cpp_headers/crc64.o 00:02:49.903 CXX test/cpp_headers/dma.o 00:02:49.903 CXX test/cpp_headers/dif.o 00:02:49.903 CXX test/cpp_headers/env.o 00:02:49.903 CXX test/cpp_headers/endian.o 00:02:49.903 CXX test/cpp_headers/fd_group.o 00:02:49.903 CXX test/cpp_headers/env_dpdk.o 00:02:49.903 CXX test/cpp_headers/fd.o 00:02:49.903 CXX test/cpp_headers/event.o 00:02:49.903 CXX test/cpp_headers/file.o 00:02:49.903 CXX test/cpp_headers/ftl.o 00:02:49.903 CXX test/cpp_headers/gpt_spec.o 00:02:49.903 CXX test/cpp_headers/hexlify.o 
00:02:49.903 CXX test/cpp_headers/histogram_data.o 00:02:49.903 CXX test/cpp_headers/idxd.o 00:02:49.903 CXX test/cpp_headers/idxd_spec.o 00:02:49.903 CXX test/cpp_headers/init.o 00:02:49.903 CXX test/cpp_headers/ioat.o 00:02:49.903 CXX test/cpp_headers/iscsi_spec.o 00:02:49.903 CXX test/cpp_headers/json.o 00:02:49.903 CXX test/cpp_headers/ioat_spec.o 00:02:49.903 CXX test/cpp_headers/jsonrpc.o 00:02:49.903 CXX test/cpp_headers/keyring_module.o 00:02:49.903 CXX test/cpp_headers/likely.o 00:02:49.903 CXX test/cpp_headers/keyring.o 00:02:49.903 CXX test/cpp_headers/log.o 00:02:49.903 CXX test/cpp_headers/lvol.o 00:02:49.903 CXX test/cpp_headers/memory.o 00:02:49.903 CXX test/cpp_headers/mmio.o 00:02:49.903 CXX test/cpp_headers/nbd.o 00:02:49.903 CXX test/cpp_headers/net.o 00:02:49.903 CXX test/cpp_headers/nvme.o 00:02:49.903 CXX test/cpp_headers/notify.o 00:02:49.903 CXX test/cpp_headers/nvme_ocssd.o 00:02:49.903 CXX test/cpp_headers/nvme_intel.o 00:02:49.903 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:49.903 CXX test/cpp_headers/nvme_spec.o 00:02:49.903 CXX test/cpp_headers/nvme_zns.o 00:02:49.903 CXX test/cpp_headers/nvmf_cmd.o 00:02:49.903 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:49.903 CXX test/cpp_headers/nvmf.o 00:02:49.903 CXX test/cpp_headers/nvmf_spec.o 00:02:49.903 CXX test/cpp_headers/nvmf_transport.o 00:02:49.903 CXX test/cpp_headers/opal_spec.o 00:02:49.903 CXX test/cpp_headers/pci_ids.o 00:02:49.903 CXX test/cpp_headers/opal.o 00:02:49.903 CXX test/cpp_headers/pipe.o 00:02:49.903 CXX test/cpp_headers/queue.o 00:02:49.903 CC examples/util/zipf/zipf.o 00:02:49.903 CC app/fio/nvme/fio_plugin.o 00:02:49.903 CXX test/cpp_headers/reduce.o 00:02:49.903 CC examples/ioat/perf/perf.o 00:02:49.903 CC test/app/histogram_perf/histogram_perf.o 00:02:49.903 CC test/env/memory/memory_ut.o 00:02:49.903 CC test/app/jsoncat/jsoncat.o 00:02:49.903 CC test/thread/poller_perf/poller_perf.o 00:02:50.163 CC test/app/stub/stub.o 00:02:50.163 CC examples/ioat/verify/verify.o 00:02:50.163 CC app/fio/bdev/fio_plugin.o 00:02:50.163 CC test/env/vtophys/vtophys.o 00:02:50.163 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:50.163 LINK spdk_lspci 00:02:50.163 CC test/env/pci/pci_ut.o 00:02:50.163 CXX test/cpp_headers/rpc.o 00:02:50.163 CC test/app/bdev_svc/bdev_svc.o 00:02:50.163 CC test/dma/test_dma/test_dma.o 00:02:50.163 LINK rpc_client_test 00:02:50.427 LINK spdk_nvme_discover 00:02:50.427 LINK zipf 00:02:50.427 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:50.427 CC test/env/mem_callbacks/mem_callbacks.o 00:02:50.427 LINK jsoncat 00:02:50.427 LINK histogram_perf 00:02:50.427 LINK poller_perf 00:02:50.427 CXX test/cpp_headers/scheduler.o 00:02:50.427 LINK nvmf_tgt 00:02:50.427 CXX test/cpp_headers/scsi.o 00:02:50.427 CXX test/cpp_headers/scsi_spec.o 00:02:50.427 CXX test/cpp_headers/stdinc.o 00:02:50.427 CXX test/cpp_headers/string.o 00:02:50.427 CXX test/cpp_headers/sock.o 00:02:50.427 LINK spdk_tgt 00:02:50.427 LINK interrupt_tgt 00:02:50.427 CXX test/cpp_headers/thread.o 00:02:50.427 LINK iscsi_tgt 00:02:50.427 CXX test/cpp_headers/trace.o 00:02:50.427 CXX test/cpp_headers/tree.o 00:02:50.427 CXX test/cpp_headers/trace_parser.o 00:02:50.427 CXX test/cpp_headers/util.o 00:02:50.427 CXX test/cpp_headers/ublk.o 00:02:50.427 CXX test/cpp_headers/uuid.o 00:02:50.427 CXX test/cpp_headers/version.o 00:02:50.427 CXX test/cpp_headers/vfio_user_pci.o 00:02:50.427 CXX test/cpp_headers/vfio_user_spec.o 00:02:50.427 CXX test/cpp_headers/vhost.o 00:02:50.427 CXX test/cpp_headers/vmd.o 00:02:50.427 CXX 
test/cpp_headers/xor.o 00:02:50.427 CXX test/cpp_headers/zipf.o 00:02:50.685 LINK spdk_trace_record 00:02:50.685 LINK vtophys 00:02:50.685 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:50.685 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:50.685 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:50.685 LINK spdk_trace 00:02:50.685 LINK env_dpdk_post_init 00:02:50.685 LINK stub 00:02:50.685 LINK verify 00:02:50.685 LINK ioat_perf 00:02:50.685 LINK bdev_svc 00:02:50.685 LINK spdk_dd 00:02:50.943 LINK test_dma 00:02:50.943 LINK pci_ut 00:02:50.943 LINK spdk_bdev 00:02:50.943 CC test/event/reactor_perf/reactor_perf.o 00:02:50.943 CC examples/vmd/lsvmd/lsvmd.o 00:02:50.943 CC examples/idxd/perf/perf.o 00:02:50.943 CC examples/vmd/led/led.o 00:02:50.943 CC test/event/event_perf/event_perf.o 00:02:50.943 CC examples/sock/hello_world/hello_sock.o 00:02:50.943 CC test/event/reactor/reactor.o 00:02:50.943 CC test/event/scheduler/scheduler.o 00:02:50.943 CC test/event/app_repeat/app_repeat.o 00:02:50.943 LINK spdk_nvme 00:02:50.943 CC examples/thread/thread/thread_ex.o 00:02:50.943 LINK nvme_fuzz 00:02:50.943 LINK spdk_top 00:02:50.943 LINK spdk_nvme_identify 00:02:50.943 CC app/vhost/vhost.o 00:02:50.943 LINK spdk_nvme_perf 00:02:51.201 LINK vhost_fuzz 00:02:51.201 LINK reactor_perf 00:02:51.201 LINK lsvmd 00:02:51.201 LINK led 00:02:51.201 LINK event_perf 00:02:51.201 LINK reactor 00:02:51.201 LINK app_repeat 00:02:51.201 LINK mem_callbacks 00:02:51.201 LINK hello_sock 00:02:51.201 LINK scheduler 00:02:51.201 LINK vhost 00:02:51.201 LINK idxd_perf 00:02:51.201 CC test/nvme/startup/startup.o 00:02:51.201 CC test/nvme/sgl/sgl.o 00:02:51.201 LINK thread 00:02:51.201 CC test/nvme/fused_ordering/fused_ordering.o 00:02:51.201 CC test/nvme/cuse/cuse.o 00:02:51.201 CC test/nvme/overhead/overhead.o 00:02:51.201 CC test/nvme/simple_copy/simple_copy.o 00:02:51.201 CC test/nvme/connect_stress/connect_stress.o 00:02:51.201 CC test/nvme/e2edp/nvme_dp.o 00:02:51.201 CC test/nvme/fdp/fdp.o 00:02:51.201 CC test/nvme/reserve/reserve.o 00:02:51.201 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:51.201 CC test/nvme/reset/reset.o 00:02:51.201 CC test/nvme/aer/aer.o 00:02:51.201 CC test/nvme/compliance/nvme_compliance.o 00:02:51.201 CC test/nvme/boot_partition/boot_partition.o 00:02:51.201 CC test/nvme/err_injection/err_injection.o 00:02:51.201 CC test/accel/dif/dif.o 00:02:51.201 CC test/blobfs/mkfs/mkfs.o 00:02:51.459 CC test/lvol/esnap/esnap.o 00:02:51.459 LINK memory_ut 00:02:51.459 LINK startup 00:02:51.459 LINK boot_partition 00:02:51.460 LINK fused_ordering 00:02:51.460 LINK connect_stress 00:02:51.460 LINK doorbell_aers 00:02:51.460 LINK simple_copy 00:02:51.460 LINK reserve 00:02:51.460 LINK err_injection 00:02:51.460 LINK sgl 00:02:51.460 LINK reset 00:02:51.460 LINK nvme_dp 00:02:51.460 LINK overhead 00:02:51.460 LINK mkfs 00:02:51.460 LINK aer 00:02:51.460 LINK nvme_compliance 00:02:51.718 LINK fdp 00:02:51.718 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:51.718 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:51.718 CC examples/nvme/hotplug/hotplug.o 00:02:51.718 CC examples/nvme/hello_world/hello_world.o 00:02:51.718 CC examples/nvme/abort/abort.o 00:02:51.718 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:51.718 CC examples/nvme/reconnect/reconnect.o 00:02:51.718 CC examples/nvme/arbitration/arbitration.o 00:02:51.718 LINK dif 00:02:51.718 CC examples/accel/perf/accel_perf.o 00:02:51.718 CC examples/blob/cli/blobcli.o 00:02:51.718 CC examples/blob/hello_world/hello_blob.o 00:02:51.718 LINK 
pmr_persistence 00:02:51.718 LINK cmb_copy 00:02:51.975 LINK hotplug 00:02:51.975 LINK hello_world 00:02:51.975 LINK reconnect 00:02:51.975 LINK arbitration 00:02:51.975 LINK abort 00:02:51.975 LINK hello_blob 00:02:51.975 LINK nvme_manage 00:02:51.975 LINK iscsi_fuzz 00:02:52.234 LINK accel_perf 00:02:52.234 CC test/bdev/bdevio/bdevio.o 00:02:52.234 LINK blobcli 00:02:52.234 LINK cuse 00:02:52.492 LINK bdevio 00:02:52.492 CC examples/bdev/bdevperf/bdevperf.o 00:02:52.492 CC examples/bdev/hello_world/hello_bdev.o 00:02:52.750 LINK hello_bdev 00:02:53.008 LINK bdevperf 00:02:53.573 CC examples/nvmf/nvmf/nvmf.o 00:02:53.831 LINK nvmf 00:02:54.766 LINK esnap 00:02:55.025 00:02:55.025 real 0m43.994s 00:02:55.025 user 6m47.032s 00:02:55.025 sys 3m31.246s 00:02:55.025 17:57:48 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:55.025 17:57:48 make -- common/autotest_common.sh@10 -- $ set +x 00:02:55.025 ************************************ 00:02:55.025 END TEST make 00:02:55.025 ************************************ 00:02:55.025 17:57:48 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:55.025 17:57:48 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:55.025 17:57:48 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:55.025 17:57:48 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:55.025 17:57:48 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:55.025 17:57:48 -- pm/common@44 -- $ pid=3121222 00:02:55.025 17:57:48 -- pm/common@50 -- $ kill -TERM 3121222 00:02:55.025 17:57:48 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:55.025 17:57:48 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:55.025 17:57:48 -- pm/common@44 -- $ pid=3121224 00:02:55.025 17:57:48 -- pm/common@50 -- $ kill -TERM 3121224 00:02:55.025 17:57:48 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:55.025 17:57:48 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:55.025 17:57:48 -- pm/common@44 -- $ pid=3121226 00:02:55.025 17:57:48 -- pm/common@50 -- $ kill -TERM 3121226 00:02:55.025 17:57:48 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:55.025 17:57:48 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:55.025 17:57:48 -- pm/common@44 -- $ pid=3121251 00:02:55.025 17:57:48 -- pm/common@50 -- $ sudo -E kill -TERM 3121251 00:02:55.283 17:57:48 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:55.283 17:57:48 -- nvmf/common.sh@7 -- # uname -s 00:02:55.283 17:57:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:55.283 17:57:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:55.283 17:57:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:55.283 17:57:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:55.283 17:57:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:55.283 17:57:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:55.283 17:57:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:55.283 17:57:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:55.283 17:57:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:55.283 17:57:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:55.283 17:57:48 -- 
nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:02:55.283 17:57:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:02:55.283 17:57:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:55.283 17:57:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:55.283 17:57:48 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:55.284 17:57:48 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:55.284 17:57:48 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:55.284 17:57:48 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:55.284 17:57:48 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:55.284 17:57:48 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:55.284 17:57:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:55.284 17:57:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:55.284 17:57:48 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:55.284 17:57:48 -- paths/export.sh@5 -- # export PATH 00:02:55.284 17:57:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:55.284 17:57:48 -- nvmf/common.sh@47 -- # : 0 00:02:55.284 17:57:48 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:55.284 17:57:48 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:55.284 17:57:48 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:55.284 17:57:48 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:55.284 17:57:48 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:55.284 17:57:48 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:55.284 17:57:48 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:55.284 17:57:48 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:55.284 17:57:48 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:55.284 17:57:48 -- spdk/autotest.sh@32 -- # uname -s 00:02:55.284 17:57:48 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:55.284 17:57:48 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:55.284 17:57:48 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:55.284 17:57:48 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:55.284 17:57:48 -- spdk/autotest.sh@40 -- # echo 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:55.284 17:57:48 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:55.284 17:57:48 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:55.284 17:57:48 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:55.284 17:57:48 -- spdk/autotest.sh@48 -- # udevadm_pid=3180521 00:02:55.284 17:57:48 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:55.284 17:57:48 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:55.284 17:57:48 -- pm/common@17 -- # local monitor 00:02:55.284 17:57:48 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:55.284 17:57:48 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:55.284 17:57:48 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:55.284 17:57:48 -- pm/common@21 -- # date +%s 00:02:55.284 17:57:48 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:55.284 17:57:48 -- pm/common@21 -- # date +%s 00:02:55.284 17:57:48 -- pm/common@25 -- # sleep 1 00:02:55.284 17:57:48 -- pm/common@21 -- # date +%s 00:02:55.284 17:57:48 -- pm/common@21 -- # date +%s 00:02:55.284 17:57:48 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721836668 00:02:55.284 17:57:48 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721836668 00:02:55.284 17:57:48 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721836668 00:02:55.284 17:57:48 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721836668 00:02:55.284 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721836668_collect-vmstat.pm.log 00:02:55.284 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721836668_collect-cpu-temp.pm.log 00:02:55.284 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721836668_collect-cpu-load.pm.log 00:02:55.284 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721836668_collect-bmc-pm.bmc.pm.log 00:02:56.221 17:57:49 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:56.221 17:57:49 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:56.221 17:57:49 -- common/autotest_common.sh@724 -- # xtrace_disable 00:02:56.221 17:57:49 -- common/autotest_common.sh@10 -- # set +x 00:02:56.221 17:57:49 -- spdk/autotest.sh@59 -- # create_test_list 00:02:56.221 17:57:49 -- common/autotest_common.sh@748 -- # xtrace_disable 00:02:56.221 17:57:49 -- common/autotest_common.sh@10 -- # set +x 00:02:56.221 17:57:49 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:56.221 17:57:49 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:56.221 17:57:49 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 
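The prologue above launches four background collectors (collect-cpu-load, collect-vmstat, collect-cpu-temp, collect-bmc-pm), each logging under ../output/power, and the teardown earlier in this log stops them by sending TERM to PIDs recorded in matching collect-*.pid files. A minimal sketch of that pidfile convention, with a stand-in collector command and example paths (the real scripts live under spdk/scripts/perf/pm/):

```bash
#!/usr/bin/env bash
# Sketch of the monitor pidfile convention seen in this log: start each
# collector in the background, record its PID, and stop everything later
# by signalling the recorded PIDs with TERM. "vmstat 1" stands in for the
# real collect-* scripts; the output directory is an example path.

output_dir=./output/power
mkdir -p "$output_dir"

start_monitor() {
    local name=$1; shift
    "$@" >"$output_dir/$name.log" 2>&1 &
    echo $! >"$output_dir/$name.pid"       # same role as collect-*.pid
}

stop_monitors() {
    local pidfile
    for pidfile in "$output_dir"/*.pid; do
        [[ -e $pidfile ]] || continue
        kill -TERM "$(cat "$pidfile")" 2>/dev/null || true
    done
}

start_monitor collect-vmstat vmstat 1      # stand-in collector
sleep 3
stop_monitors
```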
00:02:56.221 17:57:49 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:56.221 17:57:49 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:56.221 17:57:49 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:56.221 17:57:49 -- common/autotest_common.sh@1455 -- # uname 00:02:56.221 17:57:49 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:56.221 17:57:49 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:56.221 17:57:49 -- common/autotest_common.sh@1475 -- # uname 00:02:56.221 17:57:49 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:56.221 17:57:49 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:56.221 17:57:49 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:56.221 17:57:49 -- spdk/autotest.sh@72 -- # hash lcov 00:02:56.221 17:57:49 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:56.221 17:57:49 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:56.221 --rc lcov_branch_coverage=1 00:02:56.221 --rc lcov_function_coverage=1 00:02:56.221 --rc genhtml_branch_coverage=1 00:02:56.221 --rc genhtml_function_coverage=1 00:02:56.221 --rc genhtml_legend=1 00:02:56.221 --rc geninfo_all_blocks=1 00:02:56.221 ' 00:02:56.221 17:57:49 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:56.221 --rc lcov_branch_coverage=1 00:02:56.221 --rc lcov_function_coverage=1 00:02:56.221 --rc genhtml_branch_coverage=1 00:02:56.221 --rc genhtml_function_coverage=1 00:02:56.221 --rc genhtml_legend=1 00:02:56.221 --rc geninfo_all_blocks=1 00:02:56.221 ' 00:02:56.221 17:57:49 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:56.221 --rc lcov_branch_coverage=1 00:02:56.221 --rc lcov_function_coverage=1 00:02:56.221 --rc genhtml_branch_coverage=1 00:02:56.221 --rc genhtml_function_coverage=1 00:02:56.221 --rc genhtml_legend=1 00:02:56.221 --rc geninfo_all_blocks=1 00:02:56.221 --no-external' 00:02:56.221 17:57:49 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:56.221 --rc lcov_branch_coverage=1 00:02:56.221 --rc lcov_function_coverage=1 00:02:56.221 --rc genhtml_branch_coverage=1 00:02:56.221 --rc genhtml_function_coverage=1 00:02:56.221 --rc genhtml_legend=1 00:02:56.221 --rc geninfo_all_blocks=1 00:02:56.221 --no-external' 00:02:56.221 17:57:49 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:56.479 lcov: LCOV version 1.14 00:02:56.479 17:57:49 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:06.449 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:06.449 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:14.555 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:14.555 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:03:14.555 
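The lcov invocation above takes a zero-count baseline capture (-c -i -t Baseline) over the whole tree before any test executes, which is why the per-file geninfo warnings that follow for never-exercised headers are harmless noise. A minimal sketch of that baseline-then-merge workflow, with placeholder paths and a placeholder test command:

```bash
#!/usr/bin/env bash
# Sketch of the coverage flow set up above: take an initial ("baseline")
# capture with zero execution counts, run the tests, capture again, then
# merge both tracefiles so files never touched by a test still show up
# with 0% coverage. Paths and the test command are placeholders.
set -e

SRC=./spdk            # tree built with gcov instrumentation (placeholder)
OUT=./output

lcov --no-external -q -c -i -t Baseline -d "$SRC" -o "$OUT/cov_base.info"
make -C "$SRC" test   # placeholder for the actual autotest run
lcov --no-external -q -c    -t Tests    -d "$SRC" -o "$OUT/cov_test.info"
lcov -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"
genhtml "$OUT/cov_total.info" -o "$OUT/coverage"
```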
[... geninfo emitted the same pair of messages ('<header>.gcno:no functions found' / 'geninfo: WARNING: GCOV did not produce any data for <header>.gcno') for every remaining test/cpp_headers/*.gcno file, accel_module through vfio_user_pci; those repeats are elided here ...]
00:03:14.815 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:14.815 geninfo: WARNING: GCOV
did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:03:14.815 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:14.815 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:14.815 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:14.815 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:03:14.815 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:14.815 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:03:14.815 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:14.815 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:03:14.815 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:14.815 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:03:18.096 17:58:10 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:18.096 17:58:10 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:18.096 17:58:10 -- common/autotest_common.sh@10 -- # set +x 00:03:18.096 17:58:10 -- spdk/autotest.sh@91 -- # rm -f 00:03:18.096 17:58:10 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:19.995 0000:5f:00.0 (8086 0a54): Already using the nvme driver 00:03:19.995 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:03:20.345 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:20.345 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:03:20.345 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:03:20.345 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:03:20.345 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:20.345 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:03:20.345 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:03:20.345 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:03:20.345 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:03:20.345 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:03:20.345 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:03:20.345 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:03:20.345 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:03:20.345 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:03:20.345 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:03:20.602 17:58:13 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:20.602 17:58:13 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:20.602 17:58:13 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:20.602 17:58:13 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:20.602 17:58:13 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:20.602 17:58:13 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:20.602 17:58:13 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 
00:03:20.602 17:58:13 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:20.602 17:58:13 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:20.602 17:58:13 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:20.602 17:58:13 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:20.602 17:58:13 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:20.602 17:58:13 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:20.602 17:58:13 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:20.602 17:58:13 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:20.602 No valid GPT data, bailing 00:03:20.602 17:58:13 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:20.602 17:58:13 -- scripts/common.sh@391 -- # pt= 00:03:20.602 17:58:13 -- scripts/common.sh@392 -- # return 1 00:03:20.602 17:58:13 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:20.602 1+0 records in 00:03:20.602 1+0 records out 00:03:20.602 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00489359 s, 214 MB/s 00:03:20.602 17:58:13 -- spdk/autotest.sh@118 -- # sync 00:03:20.603 17:58:13 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:20.603 17:58:13 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:20.603 17:58:13 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:25.918 17:58:18 -- spdk/autotest.sh@124 -- # uname -s 00:03:25.918 17:58:18 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:25.918 17:58:18 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:25.919 17:58:18 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:25.919 17:58:18 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:25.919 17:58:18 -- common/autotest_common.sh@10 -- # set +x 00:03:25.919 ************************************ 00:03:25.919 START TEST setup.sh 00:03:25.919 ************************************ 00:03:25.919 17:58:18 setup.sh -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:25.919 * Looking for test storage... 00:03:25.919 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:25.919 17:58:18 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:25.919 17:58:18 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:25.919 17:58:18 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:25.919 17:58:18 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:25.919 17:58:18 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:25.919 17:58:18 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:25.919 ************************************ 00:03:25.919 START TEST acl 00:03:25.919 ************************************ 00:03:25.919 17:58:18 setup.sh.acl -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:25.919 * Looking for test storage... 
00:03:25.919 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:25.919 17:58:18 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:25.919 17:58:18 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:25.919 17:58:18 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:25.919 17:58:18 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:25.919 17:58:18 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:25.919 17:58:18 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:25.919 17:58:18 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:25.919 17:58:18 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:25.919 17:58:18 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:25.919 17:58:18 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:25.919 17:58:18 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:25.919 17:58:18 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:25.919 17:58:18 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:25.919 17:58:18 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:25.919 17:58:18 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:25.919 17:58:18 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:28.451 17:58:21 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:28.451 17:58:21 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:28.451 17:58:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:28.451 17:58:21 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:28.451 17:58:21 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:28.451 17:58:21 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:31.740 Hugepages 00:03:31.740 node hugesize free / total 00:03:31.740 17:58:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:31.740 17:58:24 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:31.740 17:58:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:31.740 17:58:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:31.740 17:58:24 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:31.740 17:58:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:31.740 17:58:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:31.740 17:58:24 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:31.740 17:58:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:31.740 00:03:31.740 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:31.740 17:58:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:31.740 17:58:24 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:31.740 17:58:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:31.740 17:58:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:31.740 17:58:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:31.740 17:58:24 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:31.740 17:58:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:31.740 17:58:24 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]] 17:58:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 17:58:24 setup.sh.acl -- setup/acl.sh@20 -- # continue 17:58:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:31.740
[... the same four-step xtrace (match BDF / [[ ioatdma == nvme ]] / continue / read) repeats identically for 0000:00:04.2 through 0000:00:04.7; elided ...]
17:58:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:5f:00.0 == *:*:*.* ]] 17:58:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 17:58:24 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\5\f\:\0\0\.\0* ]] 17:58:24 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 17:58:24 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 17:58:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:31.740
[... identical ioatdma iterations for 0000:80:04.0 and 0000:80:04.1 elided ...]
17:58:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 17:58:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme
]] 00:03:31.740 17:58:24 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:31.740 17:58:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:31.740 17:58:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:03:31.740 17:58:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:31.740 17:58:24 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:31.740 17:58:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:31.740 17:58:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:31.740 17:58:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:31.740 17:58:24 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:31.740 17:58:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:31.740 17:58:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:03:31.740 17:58:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:31.740 17:58:24 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:31.740 17:58:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:31.740 17:58:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:31.740 17:58:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:31.740 17:58:24 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:31.740 17:58:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:31.740 17:58:24 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:31.740 17:58:24 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:31.740 17:58:24 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:31.740 17:58:24 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:31.740 17:58:24 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:31.740 17:58:24 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:31.740 17:58:24 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:31.740 17:58:24 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:31.740 17:58:24 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:31.740 ************************************ 00:03:31.740 START TEST denied 00:03:31.740 ************************************ 00:03:31.740 17:58:24 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # denied 00:03:31.740 17:58:24 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:5f:00.0' 00:03:31.740 17:58:24 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:5f:00.0' 00:03:31.740 17:58:24 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:31.740 17:58:24 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:31.740 17:58:24 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:34.267 0000:5f:00.0 (8086 0a54): Skipping denied controller at 0000:5f:00.0 00:03:34.267 17:58:27 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:5f:00.0 00:03:34.267 17:58:27 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:34.267 17:58:27 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:34.267 17:58:27 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:5f:00.0 ]] 00:03:34.267 17:58:27 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:5f:00.0/driver 00:03:34.267 17:58:27 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:34.267 17:58:27 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:34.267 17:58:27 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:34.267 17:58:27 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:34.267 17:58:27 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:38.472 00:03:38.472 real 0m6.556s 00:03:38.472 user 0m2.115s 00:03:38.472 sys 0m3.749s 00:03:38.472 17:58:31 setup.sh.acl.denied -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:38.472 17:58:31 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:38.472 ************************************ 00:03:38.472 END TEST denied 00:03:38.472 ************************************ 00:03:38.472 17:58:31 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:38.472 17:58:31 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:38.472 17:58:31 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:38.472 17:58:31 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:38.472 ************************************ 00:03:38.472 START TEST allowed 00:03:38.472 ************************************ 00:03:38.472 17:58:31 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # allowed 00:03:38.472 17:58:31 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:5f:00.0 00:03:38.472 17:58:31 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:38.472 17:58:31 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:38.472 17:58:31 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:5f:00.0 .*: nvme -> .*' 00:03:38.472 17:58:31 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:42.662 0000:5f:00.0 (8086 0a54): nvme -> vfio-pci 00:03:42.662 17:58:35 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:42.662 17:58:35 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:42.662 17:58:35 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:42.662 17:58:35 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:42.662 17:58:35 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:45.194 00:03:45.194 real 0m7.151s 00:03:45.194 user 0m1.959s 00:03:45.194 sys 0m3.697s 00:03:45.194 17:58:38 setup.sh.acl.allowed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:45.194 17:58:38 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:45.194 ************************************ 00:03:45.194 END TEST allowed 00:03:45.194 ************************************ 00:03:45.452 00:03:45.452 real 0m19.932s 00:03:45.452 user 0m6.509s 00:03:45.452 sys 0m11.449s 00:03:45.452 17:58:38 setup.sh.acl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:45.452 17:58:38 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:45.452 ************************************ 00:03:45.452 END TEST acl 00:03:45.452 ************************************ 00:03:45.452 17:58:38 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:45.452 17:58:38 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:45.452 17:58:38 setup.sh -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:03:45.452 17:58:38 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:45.452 ************************************ 00:03:45.452 START TEST hugepages 00:03:45.452 ************************************ 00:03:45.452 17:58:38 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:45.452 * Looking for test storage... 00:03:45.452 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:45.452 17:58:38 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:45.452 17:58:38 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:45.452 17:58:38 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:45.452 17:58:38 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:45.452 17:58:38 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:45.452 17:58:38 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:45.452 17:58:38 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:45.452 17:58:38 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:45.452 17:58:38 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:45.452 17:58:38 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:45.452 17:58:38 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.452 17:58:38 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.452 17:58:38 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.452 17:58:38 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.452 17:58:38 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.452 17:58:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.452 17:58:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.452 17:58:38 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381160 kB' 'MemFree: 168201252 kB' 'MemAvailable: 171511284 kB' 'Buffers: 4132 kB' 'Cached: 14865028 kB' 'SwapCached: 0 kB' 'Active: 11695696 kB' 'Inactive: 3710384 kB' 'Active(anon): 11216688 kB' 'Inactive(anon): 0 kB' 'Active(file): 479008 kB' 'Inactive(file): 3710384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 540216 kB' 'Mapped: 207504 kB' 'Shmem: 10679768 kB' 'KReclaimable: 530180 kB' 'Slab: 1166492 kB' 'SReclaimable: 530180 kB' 'SUnreclaim: 636312 kB' 'KernelStack: 20464 kB' 'PageTables: 9008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 101982032 kB' 'Committed_AS: 12647184 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315952 kB' 'VmallocChunk: 0 kB' 'Percpu: 94080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3623892 kB' 'DirectMap2M: 45338624 kB' 'DirectMap1G: 153092096 kB' 00:03:45.452 17:58:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e 
]] 00:03:45.452 17:58:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.452 17:58:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.452 17:58:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
[... the same three-step xtrace ([[ <field> == \H\u\g\e\p\a\g\e\s\i\z\e ]] / continue / read) repeats for every subsequent /proc/meminfo field while the loop scans for Hugepagesize; the remaining iterations are elided here and the scan continues below ...]
00:03:45.452 17:58:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.452 17:58:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.452 17:58:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.452 17:58:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.452 17:58:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.452 17:58:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.452 17:58:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.452 17:58:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.452 17:58:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.452 17:58:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.452 17:58:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.452 17:58:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.452 17:58:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.452 17:58:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.452 17:58:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.452 17:58:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.452 17:58:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.452 17:58:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.452 17:58:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.452 17:58:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.452 17:58:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.452 17:58:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.452 17:58:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.452 17:58:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.452 17:58:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.452 17:58:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.452 17:58:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.452 17:58:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.452 17:58:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.452 17:58:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.452 17:58:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.452 17:58:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.452 17:58:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.452 17:58:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.452 17:58:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.452 17:58:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:45.452 17:58:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:45.452 17:58:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:45.453 17:58:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:45.453 17:58:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 
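A note on the trace format: the backslash-laden operands such as \H\u\g\e\p\a\g\e\s\i\z\e in the condensed loop above are not corruption. When the right-hand side of a bash [[ ... == ... ]] test is quoted, the comparison is literal rather than glob matching, and xtrace re-prints the word with every character escaped to show that. A minimal standalone reproduction (illustrative, not SPDK code):

    #!/usr/bin/env bash
    # Run under `bash -x`: the comparison is traced as
    #   + [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
    # exactly like this log, because the quoted RHS is escaped
    # to keep it a literal (non-glob) match.
    key=Hugepagesize
    [[ $key == "Hugepagesize" ]] && echo matched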
00:03:45.453 17:58:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:03:45.453 17:58:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:03:45.453 17:58:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:45.453 17:58:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:03:45.453 17:58:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:03:45.453 17:58:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:03:45.453 17:58:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:03:45.453 17:58:38 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048
00:03:45.453 17:58:38 setup.sh.hugepages -- setup/common.sh@33 -- # return 0
00:03:45.453 17:58:38 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:03:45.453 17:58:38 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:03:45.453 17:58:38 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:03:45.453 17:58:38 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:03:45.453 17:58:38 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:03:45.453 17:58:38 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:03:45.453 17:58:38 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:03:45.453 17:58:38 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes
00:03:45.453 17:58:38 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node
00:03:45.453 17:58:38 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:45.453 17:58:38 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:03:45.453 17:58:38 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:45.453 17:58:38 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:45.453 17:58:38 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:45.453 17:58:38 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:45.453 17:58:38 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp
00:03:45.453 17:58:38 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:03:45.453 17:58:38 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:03:45.453 17:58:38 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:45.453 17:58:38 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:03:45.453 17:58:38 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:45.453 17:58:38 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:03:45.453 17:58:38 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:03:45.453 17:58:38 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:45.453 17:58:38 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:03:45.453 17:58:38 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:45.453 17:58:38 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
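The loop that just returned 2048 is setup/common.sh's get_meminfo walking /proc/meminfo one "key: value" pair at a time, skipping every key that is not the one requested, then echoing the value (here the hugepage size in kB). A minimal sketch of the same technique, assuming only standard /proc/meminfo formatting (the function body is illustrative, not the exact SPDK source):

    # Usage: get_meminfo Hugepagesize   -> prints 2048 on a 2 MB hugepage system
    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # every miss is one 'continue' in the trace
            echo "$val"
            return 0
        done < /proc/meminfo
        return 1   # requested key not present
    }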
00:03:45.453 17:58:38 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:03:45.453 17:58:38 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:03:45.453 17:58:38 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:03:45.453 17:58:38 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:45.453 17:58:38 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:45.453 17:58:38 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:45.710 ************************************
00:03:45.710 START TEST default_setup
00:03:45.710 ************************************
00:03:45.710 17:58:38 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # default_setup
00:03:45.710 17:58:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:03:45.710 17:58:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152
00:03:45.710 17:58:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:45.710 17:58:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift
00:03:45.710 17:58:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:45.710 17:58:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids
00:03:45.710 17:58:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:45.710 17:58:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:45.710 17:58:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:45.710 17:58:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:45.710 17:58:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes
00:03:45.710 17:58:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:45.710 17:58:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:45.710 17:58:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:45.710 17:58:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:45.710 17:58:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:45.710 17:58:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:45.710 17:58:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:03:45.710 17:58:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0
00:03:45.710 17:58:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output
00:03:45.710 17:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]]
00:03:45.710 17:58:38 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
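get_test_nr_hugepages was called with a size of 2097152 (kB, i.e. 2 GiB); divided by the 2048 kB default hugepage size found above, that works out to the nr_hugepages=1024 seen in the trace, pinned to node 0. Together with the earlier clear_hp pass (the repeated 'echo 0' loop), the sysfs side of what scripts/setup.sh then applies looks roughly like the sketch below; the paths are the standard kernel hugetlb ABI, while NODE and NRHUGE are illustrative names, not SPDK's:

    NODE=0
    NRHUGE=1024   # 2097152 kB requested / 2048 kB per page = 1024 pages
    # clear_hp: zero every hugepage pool on every NUMA node first
    for hp in /sys/devices/system/node/node*/hugepages/hugepages-*; do
        echo 0 > "$hp/nr_hugepages"
    done
    # then reserve 1024 x 2048 kB pages on the requested node only
    echo "$NRHUGE" > "/sys/devices/system/node/node$NODE/hugepages/hugepages-2048kB/nr_hugepages"

The device rebinding output that follows is the other half of setup.sh's job.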
00:03:48.241 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci
00:03:48.241 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci
00:03:48.241 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci
00:03:48.241 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci
00:03:48.241 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci
00:03:48.241 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci
00:03:48.241 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci
00:03:48.241 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci
00:03:48.241 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:03:48.241 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:03:48.241 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:03:48.241 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:03:48.241 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:03:48.241 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:03:48.500 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:03:48.500 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:03:49.884 0000:5f:00.0 (8086 0a54): nvme -> vfio-pci
00:03:49.884 17:58:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:03:49.884 17:58:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node
00:03:49.884 17:58:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t
00:03:49.884 17:58:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s
00:03:49.884 17:58:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp
00:03:49.884 17:58:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv
00:03:49.884 17:58:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon
00:03:49.884 17:58:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:49.884 17:58:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:49.884 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:49.884 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:49.884 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:49.884 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:49.884 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:49.884 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:49.884 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:49.884 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:49.884 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:49.884 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:49.884 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:49.884 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381160 kB' 'MemFree: 170344648 kB' 'MemAvailable: 173654616 kB' 'Buffers: 4132 kB' 'Cached: 14865144 kB' 'SwapCached: 0 kB' 'Active: 11716316 kB' 'Inactive: 3710384 kB' 'Active(anon): 11237308 kB' 'Inactive(anon): 0 kB' 'Active(file): 479008 kB' 'Inactive(file): 3710384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 560256 kB' 'Mapped: 207620 kB' 'Shmem: 10679884 kB' 'KReclaimable: 530052 kB' 'Slab: 1165460 kB' 'SReclaimable: 530052 kB' 'SUnreclaim: 635408 kB' 'KernelStack: 20832 kB' 'PageTables: 10112 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 12670448 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316080 kB' 'VmallocChunk: 0 kB' 'Percpu: 94080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3623892 kB' 'DirectMap2M: 45338624 kB' 'DirectMap1G: 153092096 kB'
00:03:49.884 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # [trace condensed: every key from MemTotal through HardwareCorrupted was compared against AnonHugePages and hit continue]
00:03:49.885 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:49.885 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:49.885 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:49.885 17:58:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
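The 'ioatdma -> vfio-pci' and 'nvme -> vfio-pci' lines earlier in this block are scripts/setup.sh detaching each PCI device from its kernel driver so SPDK's userspace drivers can claim it. A generic sketch of the sysfs sequence such a rebind uses (this is the standard PCI driver_override ABI, not the actual setup.sh source, and it assumes the vfio-pci module is already loaded):

    BDF=0000:5f:00.0   # example device taken from the log
    # detach the current kernel driver (nvme here), if one is bound
    if [[ -e /sys/bus/pci/devices/$BDF/driver ]]; then
        echo "$BDF" > "/sys/bus/pci/devices/$BDF/driver/unbind"
    fi
    # pin the device to vfio-pci and re-run driver matching
    echo vfio-pci > "/sys/bus/pci/devices/$BDF/driver_override"
    echo "$BDF" > /sys/bus/pci/drivers_probe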
00:03:49.885 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:49.885 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:49.885 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:49.885 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:49.885 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:49.885 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:49.885 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:49.885 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:49.886 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381160 kB' 'MemFree: 170344416 kB' 'MemAvailable: 173654384 kB' 'Buffers: 4132 kB' 'Cached: 14865148 kB' 'SwapCached: 0 kB' 'Active: 11715760 kB' 'Inactive: 3710384 kB' 'Active(anon): 11236752 kB' 'Inactive(anon): 0 kB' 'Active(file): 479008 kB' 'Inactive(file): 3710384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 559664 kB' 'Mapped: 207564 kB' 'Shmem: 10679888 kB' 'KReclaimable: 530052 kB' 'Slab: 1165460 kB' 'SReclaimable: 530052 kB' 'SUnreclaim: 635408 kB' 'KernelStack: 20576 kB' 'PageTables: 8948 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 12668976 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316112 kB' 'VmallocChunk: 0 kB' 'Percpu: 94080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3623892 kB' 'DirectMap2M: 45338624 kB' 'DirectMap1G: 153092096 kB' 00:03:49.886 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.886 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:49.886 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:49.886 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:49.886 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.886 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:49.886 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:49.886 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:49.886 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.886 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:49.886 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:49.886 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:49.886 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.886 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:49.886 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:49.886 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:49.886 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.886 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:49.886 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:49.886 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:49.886 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.886 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:49.886 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:49.886 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:49.886 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.886 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:49.886 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:49.886 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:49.886 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.886 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:49.886 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:49.886 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:49.886 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.886 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:49.886 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:49.886 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:49.886 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.886 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:49.886 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:49.886 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:49.886 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.886 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:49.886 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:49.886 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:49.886 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.886 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:49.886 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:49.886 17:58:42 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:49.886 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.886 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:49.886 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:49.886 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:49.886 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.886 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:49.886 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:49.886 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:49.886 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.886 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:49.886 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:49.886 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:49.886 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.886 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:49.886 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:49.886 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:49.886 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.886 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:49.886 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:49.886 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:49.886 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.886 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:49.886 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:49.886 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:49.886 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.886 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:49.886 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:49.886 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:49.886 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.886 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:49.886 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:49.886 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:49.886 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.886 17:58:42 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:03:49.886 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:49.886 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:49.886 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.886 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:49.886 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:49.886 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:49.886 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.886 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:49.886 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:49.886 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:49.886 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.886 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:49.886 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:49.886 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:49.886 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.886 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:49.886 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:49.886 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:49.886 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.886 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:49.887 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:49.887 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:49.887 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.887 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:49.887 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:49.887 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:49.887 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.887 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:49.887 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:49.887 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:49.887 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.887 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:49.887 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:49.887 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:49.887 17:58:42 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.887 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:49.887 17:58:42
[ xtrace condensed: setup/common.sh@31-32 repeats IFS=': ' / read -r var val _ / [[ <field> == HugePages_Surp ]] / continue for every remaining /proc/meminfo field (NFS_Unstable through HugePages_Rsvd); none match ]
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.887 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:49.887 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:49.887 17:58:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:03:49.887
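For anyone decoding the wall of xtrace above: it is setup/common.sh's get_meminfo walking a meminfo file one field at a time under IFS=': ' until the requested key matches. A minimal, self-contained re-sketch of that loop, reconstructed from the traced statements rather than taken from the SPDK source (the sed prefix-strip stands in for the script's extglob expansion "${mem[@]#Node +([0-9]) }", and the trailing fallback echo is an assumption):

    #!/usr/bin/env bash
    # Sketch of a get_meminfo-style lookup: get_meminfo <field> [<numa-node>]
    get_meminfo() {
        local get=$1 node=${2:-} var val _ mem_f mem line
        mem_f=/proc/meminfo
        # Per-node stats live under sysfs; fall back to the global file otherwise.
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        # Per-node files prefix every line with "Node <N> "; strip that first.
        mapfile -t mem < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<<"$line"    # "MemTotal: 123 kB" -> var=MemTotal val=123
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        echo 0    # assumed fallback when the field is absent
    }
    get_meminfo HugePages_Surp    # prints 0 on this box, matching the trace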
17:58:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:49.887 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:49.887 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:49.887 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:49.887 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:49.887 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:49.887 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:49.887 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:49.887 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:49.887 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:49.887 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:49.888 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:49.888 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381160 kB' 'MemFree: 170341368 kB' 'MemAvailable: 173651336 kB' 'Buffers: 4132 kB' 'Cached: 14865164 kB' 'SwapCached: 0 kB' 'Active: 11714864 kB' 'Inactive: 3710384 kB' 'Active(anon): 11235856 kB' 'Inactive(anon): 0 kB' 'Active(file): 479008 kB' 'Inactive(file): 3710384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 559648 kB' 'Mapped: 207488 kB' 'Shmem: 10679904 kB' 'KReclaimable: 530052 kB' 'Slab: 1165476 kB' 'SReclaimable: 530052 kB' 'SUnreclaim: 635424 kB' 'KernelStack: 20688 kB' 'PageTables: 9564 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 12670488 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316112 kB' 'VmallocChunk: 0 kB' 'Percpu: 94080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3623892 kB' 'DirectMap2M: 45338624 kB' 'DirectMap1G: 153092096 kB' 00:03:49.888 17:58:42
[ xtrace condensed: setup/common.sh@31-32 scans every field of the dump above (MemTotal through Unaccepted, HugePages_Total, HugePages_Free) against HugePages_Rsvd; each mismatch hits continue ]
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.890 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:49.890 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:49.890 17:58:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:03:49.890 17:58:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:49.890 nr_hugepages=1024 17:58:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:49.890 resv_hugepages=0 17:58:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:49.890 surplus_hugepages=0 17:58:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:49.890 anon_hugepages=0 17:58:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:49.890 17:58:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:49.890
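Decoded, the lines above are the hugepages accounting step: the lookups returned surp=0 and resv=0, the script echoes the four counters, then asserts that the pool adds up. A sketch of that check, assuming (from context, not stated in the trace) that the literal 1024 on the left of the @107 test is the expanded HugePages_Total:

    # Values in comments are the ones visible in this run's trace.
    total=$(get_meminfo HugePages_Total)    # 1024
    surp=$(get_meminfo HugePages_Surp)      # 0
    resv=$(get_meminfo HugePages_Rsvd)      # 0
    nr_hugepages=1024                       # pages this test asked the kernel for
    # Sanity check in the style of hugepages.sh@107/@109: the pool the kernel
    # reports must equal the requested pages plus surplus and reserved ones.
    (( total == nr_hugepages + surp + resv )) && (( total == nr_hugepages ))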
17:58:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:49.890 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:49.890 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:49.890 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:49.890 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:49.890 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:49.890 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:49.890 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:49.890 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:49.890 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:49.890 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:49.890 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:49.890 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381160 kB' 'MemFree: 170340916 kB' 'MemAvailable: 173650884 kB' 'Buffers: 4132 kB' 'Cached: 14865188 kB' 'SwapCached: 0 kB' 'Active: 11714756 kB' 'Inactive: 3710384 kB' 'Active(anon): 11235748 kB' 'Inactive(anon): 0 kB' 'Active(file): 479008 kB' 'Inactive(file): 3710384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 559036 kB' 'Mapped: 207488 kB' 'Shmem: 10679928 kB' 'KReclaimable: 530052 kB' 'Slab: 1165152 kB' 'SReclaimable: 530052 kB' 'SUnreclaim: 635100 kB' 'KernelStack: 20624 kB' 'PageTables: 9500 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 12670512 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316128 kB' 'VmallocChunk: 0 kB' 'Percpu: 94080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3623892 kB' 'DirectMap2M: 45338624 kB' 'DirectMap1G: 153092096 kB' 00:03:49.890 17:58:42
[ xtrace condensed: setup/common.sh@31-32 scans the dump above field by field (MemTotal through Unaccepted) against HugePages_Total; each mismatch hits continue ]
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.892 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:03:49.892
17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:49.892 17:58:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:49.892 17:58:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:03:49.892 17:58:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:03:49.892 17:58:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:49.892 17:58:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:49.892 17:58:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:49.892 17:58:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:49.892 17:58:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:49.892 17:58:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:49.892 17:58:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:49.892 17:58:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:49.892 17:58:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:49.892 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:49.892 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:03:49.892 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:49.892 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:49.892 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:49.892 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:49.892 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:49.892 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:49.892 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:49.892 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:49.892 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 86684732 kB' 'MemUsed: 10977952 kB' 'SwapCached: 0 kB' 'Active: 7208000 kB' 'Inactive: 251300 kB' 'Active(anon): 7010948 kB' 'Inactive(anon): 0 kB' 'Active(file): 197052 kB' 'Inactive(file): 251300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7013884 kB' 'Mapped: 116528 kB' 'AnonPages: 448524 kB' 'Shmem: 6565532 kB' 'KernelStack: 11832 kB' 'PageTables: 6160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 368388 kB' 'Slab: 675820 kB' 'SReclaimable: 368388 kB' 'SUnreclaim: 307432 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:49.892 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:49.892 17:58:42 
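The return 0 above closes the HugePages_Total lookup; get_nodes then sizes the machine at no_nodes=2 (nodes_sys[0]=1024, nodes_sys[1]=0), and the per-node pass re-runs get_meminfo against /sys/devices/system/node/node0/meminfo, whose dump appears just above. Roughly, reusing the get_meminfo sketch given earlier (the loop body is illustrative, not the hugepages.sh source):

    # Enumerate NUMA nodes and report their hugepage counters per node.
    for node in /sys/devices/system/node/node[0-9]*; do
        idx=${node##*node}    # ".../node0" -> "0"
        echo "node$idx:" \
             "total=$(get_meminfo HugePages_Total "$idx")" \
             "free=$(get_meminfo HugePages_Free "$idx")" \
             "surp=$(get_meminfo HugePages_Surp "$idx")"
    done
    # This run: node0 carries the whole 1024-page pool (total=1024 free=1024
    # surp=0, matching the node0 dump above); node1 has none.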
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.892 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:49.892 17:58:42
[ xtrace condensed: the same field-by-field scan now runs over the node0 meminfo dump above (MemFree through FilePmdMapped, Unaccepted), looking for HugePages_Surp; every field so far hits continue ]
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.893 17:58:42
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:49.893 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:49.893 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:49.893 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.893 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:49.893 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:49.893 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:49.893 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.893 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:49.893 17:58:42 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:49.893 17:58:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:49.893 17:58:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:49.893 17:58:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:49.893 17:58:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:49.893 17:58:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:49.893 node0=1024 expecting 1024 00:03:49.893 17:58:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:49.893 00:03:49.893 real 0m4.401s 00:03:49.893 user 0m1.260s 00:03:49.893 sys 0m1.869s 00:03:49.893 17:58:42 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:49.893 17:58:42 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:49.893 ************************************ 00:03:49.893 END TEST default_setup 00:03:49.893 ************************************ 00:03:50.152 17:58:42 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:50.152 17:58:42 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:50.152 17:58:42 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:50.152 17:58:42 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:50.152 ************************************ 00:03:50.152 START TEST per_node_1G_alloc 00:03:50.152 ************************************ 00:03:50.152 17:58:43 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # per_node_1G_alloc 00:03:50.152 17:58:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:50.152 17:58:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:50.152 17:58:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:50.152 17:58:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:50.153 17:58:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:50.153 17:58:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:50.153 17:58:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 
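The scan traced above is the inner loop of a get_meminfo-style helper in setup/common.sh. The following is a minimal Bash reconstruction inferred from the xtrace, not a quote of the SPDK source — the function body and the node-handling branch are assumptions that merely match the trace's visible statements:

#!/usr/bin/env bash
shopt -s extglob   # needed for the +([0-9]) pattern below

# get_meminfo FIELD [NODE] -- echo the value of FIELD from /proc/meminfo,
# or from the NUMA-node-local meminfo file when NODE is given.
get_meminfo() {
    local get=$1 node=${2:-}
    local var val _
    local mem_f=/proc/meminfo mem
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem <"$mem_f"
    # Node files prefix every line with "Node <N> "; strip it so the same
    # parser handles both the global and per-node layouts.
    mem=("${mem[@]#Node +([0-9]) }")
    # This is the loop whose trace dominates the log: split each line on
    # ': ', skip fields that don't match, then print the matching value.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

get_meminfo HugePages_Surp   # prints 0 on this host, matching the trace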
00:03:50.152 17:58:42 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:03:50.152 17:58:42 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:50.152 17:58:42 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:50.152 17:58:42 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:50.152 ************************************
00:03:50.152 START TEST per_node_1G_alloc
00:03:50.152 ************************************
00:03:50.152 17:58:43 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # per_node_1G_alloc
00:03:50.152 17:58:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:03:50.152 17:58:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:03:50.152 17:58:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:03:50.152 17:58:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:03:50.152 17:58:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:03:50.152 17:58:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:03:50.152 17:58:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:03:50.153 17:58:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:50.153 17:58:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:50.153 17:58:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:03:50.153 17:58:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:03:50.153 17:58:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:50.153 17:58:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:50.153 17:58:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:50.153 17:58:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:50.153 17:58:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:50.153 17:58:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:03:50.153 17:58:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:50.153 17:58:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:50.153 17:58:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:50.153 17:58:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:50.153 17:58:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:03:50.153 17:58:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:03:50.153 17:58:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:03:50.153 17:58:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:03:50.153 17:58:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:50.153 17:58:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:52.713 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:52.713 0000:5f:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:52.713 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:52.713 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:52.713 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:52.713 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:52.713 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:52.713 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:52.713 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:52.713 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:52.713 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:52.713 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:52.713 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:52.713 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:52.713 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:52.713 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:52.713 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
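The get_test_nr_hugepages trace above amounts to the arithmetic below — a sketch under the assumption that nr_hugepages is the requested per-node size divided by the default hugepage size, which is consistent with the traced values (1048576 kB / 2048 kB = 512):

# Per-node hugepage sizing as reconstructed from the trace; variable names
# mirror the xtrace, the division itself is an inferred step.
size=1048576                                   # requested size per node, kB (1 GiB)
default_hugepages=2048                         # default hugepage size on this host, kB
nr_hugepages=$(( size / default_hugepages ))   # 512 pages of 2 MiB per node
user_nodes=(0 1)                               # node list from HUGENODE=0,1
nodes_test=()
for node in "${user_nodes[@]}"; do
    nodes_test[node]=$nr_hugepages             # 512 on node0 and 512 on node1
done
echo "NRHUGE=$nr_hugepages node0=${nodes_test[0]} node1=${nodes_test[1]}"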
00:03:52.713 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:03:52.713 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:03:52.714 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
00:03:52.714 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:52.714 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:52.714 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:52.714 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:52.714 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:52.714 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:52.714 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:52.714 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:52.714 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:52.714 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:52.714 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:52.714 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:52.714 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:52.714 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:52.714 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:52.714 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:52.714 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:52.714 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:52.714 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381160 kB' 'MemFree: 170346960 kB' 'MemAvailable: 173656928 kB' 'Buffers: 4132 kB' 'Cached: 14865284 kB' 'SwapCached: 0 kB' 'Active: 11716540 kB' 'Inactive: 3710384 kB' 'Active(anon): 11237532 kB' 'Inactive(anon): 0 kB' 'Active(file): 479008 kB' 'Inactive(file): 3710384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 560612 kB' 'Mapped: 207544 kB' 'Shmem: 10680024 kB' 'KReclaimable: 530052 kB' 'Slab: 1165036 kB' 'SReclaimable: 530052 kB' 'SUnreclaim: 634984 kB' 'KernelStack: 20896 kB' 'PageTables: 9976 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 12669644 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316208 kB' 'VmallocChunk: 0 kB' 'Percpu: 94080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3623892 kB' 'DirectMap2M: 45338624 kB' 'DirectMap1G: 153092096 kB'
00:03:52.714 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / setup/common.sh@32 -- # continue
00:03:52.714 [the pair above repeats for every field from MemTotal through HardwareCorrupted, with setup/common.sh@31 IFS=': ' and read -r var val _ between iterations]
00:03:52.714 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:52.715 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:52.715 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:52.715 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
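Before querying counters, verify_nr_hugepages gates on transparent hugepage state and normalizes per-node meminfo lines. A sketch of those two visible checks — the sysfs paths are the standard kernel locations, and everything beyond the two traced tests is an assumption:

shopt -s extglob
# 1. THP must not be forced off: the bracketed word in
#    /sys/kernel/mm/transparent_hugepage/enabled marks the active mode;
#    "always [madvise] never" passes because [never] is not selected.
thp=$(</sys/kernel/mm/transparent_hugepage/enabled)
[[ $thp != *"[never]"* ]] || echo "THP disabled on this host"
# 2. Per-node meminfo lines look like "Node 0 HugePages_Total: 512";
#    stripping the "Node <N> " prefix (extglob pattern) lets one parser
#    handle both the global and the per-node files.
mapfile -t mem </sys/devices/system/node/node0/meminfo
mem=("${mem[@]#Node +([0-9]) }")
printf '%s\n' "${mem[@]}" | head -3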
00:03:52.715 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:52.715 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:52.715 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:52.715 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:52.715 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:52.715 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:52.715 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:52.715 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:52.715 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:52.715 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:52.715 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:52.715 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:52.715 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381160 kB' 'MemFree: 170346592 kB' 'MemAvailable: 173656560 kB' 'Buffers: 4132 kB' 'Cached: 14865288 kB' 'SwapCached: 0 kB' 'Active: 11715784 kB' 'Inactive: 3710384 kB' 'Active(anon): 11236776 kB' 'Inactive(anon): 0 kB' 'Active(file): 479008 kB' 'Inactive(file): 3710384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 560012 kB' 'Mapped: 207504 kB' 'Shmem: 10680028 kB' 'KReclaimable: 530052 kB' 'Slab: 1165048 kB' 'SReclaimable: 530052 kB' 'SUnreclaim: 634996 kB' 'KernelStack: 20784 kB' 'PageTables: 10116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 12671140 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316128 kB' 'VmallocChunk: 0 kB' 'Percpu: 94080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3623892 kB' 'DirectMap2M: 45338624 kB' 'DirectMap1G: 153092096 kB'
00:03:52.715 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / setup/common.sh@32 -- # continue
00:03:52.716 [the pair above repeats for every field from MemTotal through HugePages_Rsvd, with setup/common.sh@31 IFS=': ' and read -r var val _ between iterations]
00:03:52.717 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:52.717 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:52.717 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:52.717 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
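Taken together, the three queries in this pass (anon, surp, and the resv query traced next) feed the per-node expectation check that closes each test with a "nodeN=X expecting X" line. A sketch of that bookkeeping, reusing the get_meminfo reconstruction above — the exact arithmetic inside setup/hugepages.sh is inferred from the trace, not quoted:

# Hedged sketch of verify_nr_hugepages' accounting; nodes_test comes from
# the sizing step earlier, and the += mirrors the traced
# "(( nodes_test[node] += 0 ))" where surplus happened to be 0.
anon=$(get_meminfo AnonHugePages)    # 0 in the trace above
surp=$(get_meminfo HugePages_Surp)   # 0 in the trace above
resv=$(get_meminfo HugePages_Rsvd)   # queried next in the log
for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += surp ))   # fold surplus pages into the expectation
    echo "node$node=${nodes_test[node]} expecting ${nodes_test[node]}"
done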
00:03:52.717 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:52.717 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:52.717 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:52.717 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:52.717 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:52.717 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:52.717 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:52.717 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:52.717 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:52.717 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:52.983 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:52.983 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:52.983 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381160 kB' 'MemFree: 170352624 kB' 'MemAvailable: 173662592 kB' 'Buffers: 4132 kB' 'Cached: 14865304 kB' 'SwapCached: 0 kB' 'Active: 11716448 kB' 'Inactive: 3710384 kB' 'Active(anon): 11237440 kB' 'Inactive(anon): 0 kB' 'Active(file): 479008 kB' 'Inactive(file): 3710384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 560640 kB' 'Mapped: 207504 kB' 'Shmem: 10680044 kB' 'KReclaimable: 530052 kB' 'Slab: 1164816 kB' 'SReclaimable: 530052 kB' 'SUnreclaim: 634764 kB' 'KernelStack: 20928 kB' 'PageTables: 10440 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 12671180 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316112 kB' 'VmallocChunk: 0 kB' 'Percpu: 94080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3623892 kB' 'DirectMap2M: 45338624 kB' 'DirectMap1G: 153092096 kB'
00:03:52.983 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ... == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / continue  [one compare-and-continue pair per field from MemTotal through HugePages_Free]
00:03:52.985 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:52.985 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:52.985 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:52.985 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:52.985 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:52.985 nr_hugepages=1024
00:03:52.985 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:52.985 resv_hugepages=0
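With surp=0 and resv=0 in hand, the next traced lines assert the hugepage accounting before the test proceeds. Restated as a sketch with the values observed in this run:

    nr_hugepages=1024 surp=0 resv=0
    (( 1024 == nr_hugepages + surp + resv ))   # configured == allocated + surplus + reserved
    (( 1024 == nr_hugepages ))                 # both hold here, so the test continues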
00:03:52.985 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:52.985 surplus_hugepages=0
00:03:52.985 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:52.985 anon_hugepages=0
00:03:52.985 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:52.985 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:52.985 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:52.985 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:52.985 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:52.985 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:52.985 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:52.985 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:52.985 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:52.985 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:52.985 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:52.985 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:52.985 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381160 kB' 'MemFree: 170351700 kB' 'MemAvailable: 173661668 kB' 'Buffers: 4132 kB' 'Cached: 14865328 kB' 'SwapCached: 0 kB' 'Active: 11717848 kB' 'Inactive: 3710384 kB' 'Active(anon): 11238840 kB' 'Inactive(anon): 0 kB' 'Active(file): 479008 kB' 'Inactive(file): 3710384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 562052 kB' 'Mapped: 207972 kB' 'Shmem: 10680068 kB' 'KReclaimable: 530052 kB' 'Slab: 1164976 kB' 'SReclaimable: 530052 kB' 'SUnreclaim: 634924 kB' 'KernelStack: 20464 kB' 'PageTables: 8952 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 12672184 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316000 kB' 'VmallocChunk: 0 kB' 'Percpu: 94080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3623892 kB' 'DirectMap2M: 45338624 kB' 'DirectMap1G: 153092096 kB'
00:03:52.986 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:52.986 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:52.986 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ... == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] / continue  [one compare-and-continue pair per field from MemTotal through Unaccepted]
00:03:52.987 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:52.987 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:03:52.987 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:52.987 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
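The get_nodes call traced next enumerates NUMA nodes by globbing sysfs and seeds the expected per-node page count (512 of the 1024 pages of 2048 kB on each of this box's two nodes). A minimal sketch, assuming extglob is enabled as it is in the traced shell:

    shopt -s extglob
    nodes_sys=()
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=512          # node0 -> 512, node1 -> 512
    done
    no_nodes=${#nodes_sys[@]}                  # 2 on this machine
    (( no_nodes > 0 ))                         # sanity check before the per-node pass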
00:03:52.987 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:52.987 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:03:52.987 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:52.987 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:52.987 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:52.987 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:52.987 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:52.987 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:52.987 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:52.987 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:52.987 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:52.987 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:52.987 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0
00:03:52.987 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:52.987 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:52.987 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:52.987 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:52.987 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:52.987 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:52.987 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:52.987 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:52.987 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:52.988 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 87753352 kB' 'MemUsed: 9909332 kB' 'SwapCached: 0 kB' 'Active: 7206272 kB' 'Inactive: 251300 kB' 'Active(anon): 7009220 kB' 'Inactive(anon): 0 kB' 'Active(file): 197052 kB' 'Inactive(file): 251300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7013884 kB' 'Mapped: 117436 kB' 'AnonPages: 446820 kB' 'Shmem: 6565532 kB' 'KernelStack: 11752 kB' 'PageTables: 6156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 368388 kB' 'Slab: 675700 kB' 'SReclaimable: 368388 kB' 'SUnreclaim: 307312 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:52.988 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ... == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue  [one compare-and-continue pair per node0 meminfo field from MemTotal through Unaccepted]
00:03:52.989 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:52.989 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
00:03:52.989 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.989 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.989 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.989 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:52.989 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.989 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.989 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.989 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:52.989 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:52.989 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:52.989 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:52.989 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:52.989 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:52.989 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:52.989 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:03:52.989 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:52.989 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:52.989 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.989 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:52.989 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:52.989 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.989 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.989 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.989 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.989 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718476 kB' 'MemFree: 82605804 kB' 'MemUsed: 11112672 kB' 'SwapCached: 0 kB' 'Active: 4506972 kB' 'Inactive: 3459084 kB' 'Active(anon): 4225016 kB' 'Inactive(anon): 0 kB' 'Active(file): 281956 kB' 'Inactive(file): 3459084 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7855620 kB' 'Mapped: 90472 kB' 'AnonPages: 110580 kB' 'Shmem: 4114580 kB' 'KernelStack: 8744 kB' 'PageTables: 2968 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 161664 kB' 'Slab: 489276 kB' 'SReclaimable: 161664 kB' 'SUnreclaim: 327612 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 
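Editor's note: the trace above is bash xtrace single-stepping a tiny meminfo parser. A condensed sketch of that pattern, with the names taken from the trace (setup/common.sh's get_meminfo); this is an illustrative reconstruction, not the verbatim SPDK source:

  shopt -s extglob                  # needed for the "Node N " prefix strip below
  get_meminfo() {                   # usage: get_meminfo <field> [node]
      local get=$1 node=$2 var val _
      local mem_f=/proc/meminfo mem
      # per-node counters come from sysfs when a node index is given
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")   # sysfs lines begin with "Node N "
      # walk field by field; each non-match is one "continue" in the trace
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done < <(printf '%s\n' "${mem[@]}")
      return 1
  }

So "get_meminfo HugePages_Surp 1" walks every node1 field until HugePages_Surp matches and prints 0, which is why the trace repeats the same read/test/continue triple once per field.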
[xtrace condensed: the identical field walk repeats over the node1 meminfo printed above, MemTotal through HugePages_Free, until HugePages_Surp matches]
00:03:52.991 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:52.991 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:52.991 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:52.991 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:52.991 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:52.991 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:52.991 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:52.991 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:52.991 node0=512 expecting 512
00:03:52.991 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:52.991 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:52.991 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:52.991 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:03:52.991 node1=512 expecting 512
00:03:52.991 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:52.991 real 0m2.901s
00:03:52.991 user 0m1.224s
00:03:52.991 sys 0m1.741s
00:03:52.991 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:03:52.991 17:58:45 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:52.991 ************************************
00:03:52.991 END TEST per_node_1G_alloc
00:03:52.991 ************************************
00:03:52.991 17:58:45 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:03:52.991 17:58:45 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
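Editor's note: the even_2G_alloc test starting below requests 2097152 kB of hugepages and spreads them evenly across both NUMA nodes. The arithmetic as a sketch (variable names illustrative; the numbers are the ones the trace reports, with Hugepagesize = 2048 kB from /proc/meminfo):

  size_kb=2097152                                # requested: 2 GiB of hugepages
  hugepagesize_kb=2048                           # Hugepagesize on this machine
  nr_hugepages=$(( size_kb / hugepagesize_kb ))  # 1024, matches nr_hugepages=1024
  no_nodes=2                                     # matches local _no_nodes=2
  per_node=$(( nr_hugepages / no_nodes ))        # 512, matches nodes_test[...]=512
  echo "NRHUGE=$nr_hugepages, $per_node pages per node"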
17:58:45 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
17:58:45 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:52.991 ************************************
00:03:52.991 START TEST even_2G_alloc
00:03:52.991 ************************************
00:03:52.991 17:58:45 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # even_2G_alloc
17:58:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
17:58:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
17:58:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
17:58:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
17:58:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
17:58:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
17:58:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
17:58:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
17:58:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
17:58:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
17:58:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
17:58:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
17:58:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
17:58:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
17:58:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
17:58:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
17:58:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512
17:58:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1
17:58:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
17:58:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
17:58:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
17:58:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
17:58:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
17:58:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
17:58:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
17:58:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
17:58:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
17:58:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:55.525 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:55.525 0000:5f:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:55.525 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:55.525 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:55.525 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:55.525 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:55.525 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:55.525 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:55.525 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:55.525 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:55.525 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:55.789 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:55.789 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:55.789 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:55.789 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:55.789 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:55.789 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:55.789 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381160 kB' 'MemFree: 170348536 kB' 'MemAvailable: 173658504 kB' 'Buffers: 4132 kB' 'Cached: 14865444 kB' 'SwapCached: 0 kB' 'Active: 11712748 kB' 'Inactive: 3710384 kB' 'Active(anon): 11233740 kB' 'Inactive(anon): 0 kB' 'Active(file): 479008 kB' 'Inactive(file): 3710384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 556280 kB' 'Mapped: 206676 kB' 'Shmem: 10680184 kB' 'KReclaimable: 530052 kB' 'Slab: 1164920 kB' 'SReclaimable: 530052 kB' 'SUnreclaim: 634868 kB' 'KernelStack: 20448 kB' 'PageTables: 8752 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 12655360 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315984 kB' 'VmallocChunk: 0 kB' 'Percpu: 94080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3623892 kB' 'DirectMap2M: 45338624 kB' 'DirectMap1G: 153092096 kB'
[xtrace condensed: the field walk repeats over the full /proc/meminfo output above, MemTotal through HardwareCorrupted, until AnonHugePages matches]
17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
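Editor's note: the verify step just traced first checks that transparent hugepages are not fully disabled before sampling AnonHugePages; the [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] line is that test evaluated against this host's THP setting. A minimal sketch of the logic, reusing the get_meminfo sketch above:

  anon=0
  thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)  # here: "always [madvise] never"
  if [[ $thp != *"[never]"* ]]; then
      # THP is not forced off, so record anonymous hugepage usage (0 kB here)
      anon=$(get_meminfo AnonHugePages)
  fi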
00:03:55.791 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381160 kB' 'MemFree: 170348868 kB' 'MemAvailable: 173658836 kB' 'Buffers: 4132 kB' 'Cached: 14865448 kB' 'SwapCached: 0 kB' 'Active: 11712952 kB' 'Inactive: 3710384 kB' 'Active(anon): 11233944 kB' 'Inactive(anon): 0 kB' 'Active(file): 479008 kB' 'Inactive(file): 3710384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 556572 kB' 'Mapped: 206612 kB' 'Shmem: 10680188 kB' 'KReclaimable: 530052 kB' 'Slab: 1164912 kB' 'SReclaimable: 530052 kB' 'SUnreclaim: 634860 kB' 'KernelStack: 20432 kB' 'PageTables: 8668 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 12655376 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315952 kB' 'VmallocChunk: 0 kB' 'Percpu: 94080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3623892 kB' 'DirectMap2M: 45338624 kB' 'DirectMap1G: 153092096 kB'
[xtrace condensed: the field walk for HugePages_Surp begins over the /proc/meminfo output above, MemTotal, MemFree, MemAvailable, ...; the captured log ends mid-scan at the Writeback field]
-- setup/common.sh@32 -- # continue 00:03:55.792 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.792 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.792 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.792 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.792 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.792 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.792 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.792 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.792 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.792 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.792 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.792 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.792 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.792 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.792 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.792 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.792 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.792 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.792 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.792 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.792 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.792 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.792 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.792 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.792 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.792 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.792 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.792 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.792 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.792 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.792 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.792 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.792 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.792 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.792 17:58:48 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.792 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.792 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.792 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.792 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.792 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.792 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.792 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.792 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.792 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.792 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.792 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.792 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.792 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.792 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.792 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.792 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.792 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.792 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.792 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.792 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.792 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.792 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.792 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.792 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.792 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.792 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.792 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.792 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.792 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.792 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.792 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.792 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.792 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.792 17:58:48 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.792 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.792 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.792 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.792 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.792 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.792 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.792 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.793 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.793 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.793 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.793 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.793 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.793 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.793 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.793 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.793 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.793 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.793 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.793 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.793 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.793 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.793 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.793 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.793 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.793 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.793 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.793 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.793 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.793 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.793 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.793 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.793 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.793 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.793 17:58:48 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.793 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.793 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.793 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.793 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.793 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.793 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.793 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.793 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.793 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.793 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.793 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.793 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.793 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.793 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.793 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.793 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.793 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.793 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.793 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.793 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.793 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:55.793 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.793 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.793 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.793 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:55.793 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:55.793 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:55.793 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:55.793 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:55.793 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:55.793 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:55.793 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:55.793 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.793 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e 
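For anyone following the trace: the lookup above is a plain key/value scan of /proc/meminfo under IFS=': '. A minimal standalone sketch of the same technique, with names mirroring the xtrace (this is an illustration, not the actual setup/common.sh, which also supports per-node lookups via /sys/devices/system/node as the @23 test hints):

get_meminfo() {
    # e.g. `get_meminfo HugePages_Surp` prints "0" on this node
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # split "HugePages_Surp:    0" into var=HugePages_Surp val=0;
        # for "MemTotal: 191381160 kB" the trailing unit lands in _
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < /proc/meminfo
    return 1 # hypothetical fallback; in the trace above the key is always found
}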
00:03:55.793 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:55.793 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:55.793 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:55.793 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:55.793 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:55.793 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:55.793 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:55.793 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:55.793 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:55.793 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:55.793 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:55.793 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:55.793 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381160 kB' 'MemFree: 170349444 kB' 'MemAvailable: 173659412 kB' 'Buffers: 4132 kB' 'Cached: 14865448 kB' 'SwapCached: 0 kB' 'Active: 11712516 kB' 'Inactive: 3710384 kB' 'Active(anon): 11233508 kB' 'Inactive(anon): 0 kB' 'Active(file): 479008 kB' 'Inactive(file): 3710384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 556664 kB' 'Mapped: 206536 kB' 'Shmem: 10680188 kB' 'KReclaimable: 530052 kB' 'Slab: 1164908 kB' 'SReclaimable: 530052 kB' 'SUnreclaim: 634856 kB' 'KernelStack: 20432 kB' 'PageTables: 8660 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 12655396 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315952 kB' 'VmallocChunk: 0 kB' 'Percpu: 94080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3623892 kB' 'DirectMap2M: 45338624 kB' 'DirectMap1G: 153092096 kB'
[... xtrace elided: the same setup/common.sh@31-32 loop walks every key in the snapshot above, this time against HugePages_Rsvd ...]
00:03:55.795 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:55.795 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:55.795 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:55.795 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:55.795 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:55.795 nr_hugepages=1024
17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:55.795 resv_hugepages=0
17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:55.795 surplus_hugepages=0
17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:55.795 anon_hugepages=0
17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:55.795 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
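The @107/@109 arithmetic above is the point of the even_2G_alloc step: with Hugepagesize 2048 kB and HugePages_Total 1024, the kernel is holding 1024 x 2048 kB = 2097152 kB (the Hugetlb figure), i.e. an even 2 GiB, and no pages are surplus or reserved. A minimal sketch of that consistency check, assuming the get_meminfo helper sketched earlier; the wrapper name verify_even_alloc is hypothetical, not from hugepages.sh:

verify_even_alloc() {
    local nr_hugepages=$1 surp resv total
    surp=$(get_meminfo HugePages_Surp)   # 0 in this trace
    resv=$(get_meminfo HugePages_Rsvd)   # 0 in this trace
    total=$(get_meminfo HugePages_Total) # 1024 in this trace
    echo "nr_hugepages=$nr_hugepages"
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"
    echo "anon_hugepages=$(get_meminfo AnonHugePages)"
    # the kernel's total must account exactly for the pages requested
    (( total == nr_hugepages + surp + resv ))
}

Called as `verify_even_alloc 1024`, it exits 0 when the accounting balances, matching the `(( 1024 == nr_hugepages + surp + resv ))` and `(( 1024 == nr_hugepages ))` checks in the trace.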
00:03:55.795 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:55.795 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:55.795 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:55.795 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:55.795 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:55.795 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:55.795 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:55.795 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:55.795 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:55.795 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:55.795 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:55.795 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:55.795 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381160 kB' 'MemFree: 170350728 kB' 'MemAvailable: 173660696 kB' 'Buffers: 4132 kB' 'Cached: 14865488 kB' 'SwapCached: 0 kB' 'Active: 11712532 kB' 'Inactive: 3710384 kB' 'Active(anon): 11233524 kB' 'Inactive(anon): 0 kB' 'Active(file): 479008 kB' 'Inactive(file): 3710384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 556540 kB' 'Mapped: 206536 kB' 'Shmem: 10680228 kB' 'KReclaimable: 530052 kB' 'Slab: 1164908 kB' 'SReclaimable: 530052 kB' 'SUnreclaim: 634856 kB' 'KernelStack: 20432 kB' 'PageTables: 8660 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 12655420 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315952 kB' 'VmallocChunk: 0 kB' 'Percpu: 94080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3623892 kB' 'DirectMap2M: 45338624 kB' 'DirectMap1G: 153092096 kB'
[... xtrace elided through 00:03:55.797: the setup/common.sh@31-32 loop walks the keys above against HugePages_Total; the trace continues below at the remaining keys ...]
17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.797 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.797 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.797 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:55.797 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:55.797 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:55.797 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:55.797 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:55.797 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:55.797 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:55.797 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:55.797 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:55.797 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:55.797 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:55.797 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:55.797 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:55.797 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:55.797 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:55.797 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:55.797 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:55.797 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:55.797 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.797 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:55.797 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:55.797 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.797 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.797 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.797 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.797 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 87760532 kB' 'MemUsed: 9902152 kB' 'SwapCached: 0 kB' 'Active: 7206916 kB' 'Inactive: 251300 kB' 'Active(anon): 7009864 kB' 'Inactive(anon): 0 kB' 'Active(file): 197052 kB' 'Inactive(file): 251300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7013916 kB' 'Mapped: 116056 kB' 'AnonPages: 447476 kB' 'Shmem: 6565564 kB' 'KernelStack: 11704 kB' 'PageTables: 
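The lookup that just returned 1024 is get_meminfo from setup/common.sh, traced key by key. A minimal sketch of the behavior the xtrace implies, reconstructed here for readability (not the verbatim SPDK helper):

  # Sketch reconstructed from the xtrace above; not the verbatim setup/common.sh.
  # get_meminfo KEY [NODE] prints the value of KEY from /proc/meminfo, or from
  # the per-node copy when a NUMA node number is given.
  shopt -s extglob
  get_meminfo() {
  	local get=$1 node=$2
  	local var val _
  	local mem_f mem
  	mem_f=/proc/meminfo
  	# with a node argument, read the per-node file instead
  	if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
  		mem_f=/sys/devices/system/node/node$node/meminfo
  	fi
  	mapfile -t mem < "$mem_f"
  	# per-node files prefix each line with "Node N "; strip it
  	mem=("${mem[@]#Node +([0-9]) }")
  	while IFS=': ' read -r var val _; do
  		[[ $var == "$get" ]] || continue   # skip non-matching keys
  		echo "$val"
  		return 0
  	done < <(printf '%s\n' "${mem[@]}")
  	return 1
  }

In this run, get_meminfo HugePages_Total returned 1024 system-wide, and the per-node HugePages_Surp lookups below each return 0; those values feed the accounting that follows.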
00:03:55.797 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:55.797 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:55.797 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node
00:03:55.797 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:55.797 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:55.797 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:55.797 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:55.797 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:55.797 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:55.797 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:55.797 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:55.797 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:55.797 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:55.797 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0
00:03:55.797 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:55.797 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:55.797 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:55.797 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:55.797 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:55.797 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:55.797 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:55.797 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:55.797 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:55.797 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 87760532 kB' 'MemUsed: 9902152 kB' 'SwapCached: 0 kB' 'Active: 7206916 kB' 'Inactive: 251300 kB' 'Active(anon): 7009864 kB' 'Inactive(anon): 0 kB' 'Active(file): 197052 kB' 'Inactive(file): 251300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7013916 kB' 'Mapped: 116056 kB' 'AnonPages: 447476 kB' 'Shmem: 6565564 kB' 'KernelStack: 11704 kB' 'PageTables: 5768 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 368388 kB' 'Slab: 675796 kB' 'SReclaimable: 368388 kB' 'SUnreclaim: 307408 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:55.798 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue [the scan repeats for every node0 meminfo key before HugePages_Surp: MemTotal through Unaccepted, HugePages_Total, HugePages_Free]
00:03:56.059 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:56.059 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:56.059 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:56.059 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:56.059 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:56.059 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:56.059 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:56.059 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:56.059 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1
00:03:56.059 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:56.059 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:56.059 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:56.059 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:56.059 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:56.059 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:56.059 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:56.059 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:56.059 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:56.059 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718476 kB' 'MemFree: 82589948 kB' 'MemUsed: 11128528 kB' 'SwapCached: 0 kB' 'Active: 4505632 kB' 'Inactive: 3459084 kB' 'Active(anon): 4223676 kB' 'Inactive(anon): 0 kB' 'Active(file): 281956 kB' 'Inactive(file): 3459084 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7855744 kB' 'Mapped: 90480 kB' 'AnonPages: 109068 kB' 'Shmem: 4114704 kB' 'KernelStack: 8728 kB' 'PageTables: 2892 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 161664 kB' 'Slab: 489112 kB' 'SReclaimable: 161664 kB' 'SUnreclaim: 327448 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:56.059 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue [the scan repeats for every node1 meminfo key before HugePages_Surp]
00:03:56.060 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:56.060 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:56.060 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:56.060 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:56.060 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:56.060 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:56.060 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:56.060 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:56.060 node0=512 expecting 512
00:03:56.060 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:56.060 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:56.060 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:56.060 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:03:56.060 node1=512 expecting 512
00:03:56.060 17:58:48 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:56.060
00:03:56.060 real	0m2.928s
00:03:56.060 user	0m1.183s
00:03:56.060 sys	0m1.797s
00:03:56.060 17:58:48 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:03:56.060 17:58:48 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:56.061 ************************************
00:03:56.061 END TEST even_2G_alloc
00:03:56.061 ************************************
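For readability, here is what the even_2G_alloc verification just traced boils down to. This is a reconstruction from the xtrace above (hugepages.sh@110 through @130), not the verbatim SPDK source; surp and resv are the surplus/reserved hugepage counts (both 0 in this run), and nodes_sys[] holds the per-node counts that get_nodes read back from sysfs:

  # Reconstructed sketch of the check traced above (hugepages.sh@110-@130).
  # Assumes get_meminfo from the earlier sketch; surp/resv were 0 here.
  nr_hugepages=1024
  nodes_test=(512 512)   # what the test asked each node for
  nodes_sys=(512 512)    # what get_nodes read back from sysfs
  surp=0 resv=0
  # the global count must add up first
  (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv )) || exit 1
  # fold per-node reserved/surplus pages into each node's expectation
  for node in "${!nodes_test[@]}"; do
  	(( nodes_test[node] += resv ))
  	(( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))
  	echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
  done

With both adjustments at 0, that reproduces the two 'node0=512 expecting 512' / 'node1=512 expecting 512' lines above, and the final [[ 512 == \5\1\2 ]] comparison passes.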
00:03:56.061 17:58:48 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:03:56.061 17:58:48 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:56.061 17:58:48 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:56.061 17:58:48 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:56.061 ************************************
00:03:56.061 START TEST odd_alloc
00:03:56.061 ************************************
00:03:56.061 17:58:48 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # odd_alloc
00:03:56.061 17:58:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:03:56.061 17:58:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
00:03:56.061 17:58:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:56.061 17:58:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:56.061 17:58:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:03:56.061 17:58:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:56.061 17:58:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:56.061 17:58:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:56.061 17:58:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:03:56.061 17:58:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:56.061 17:58:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:56.061 17:58:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:56.061 17:58:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:56.061 17:58:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:56.061 17:58:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:56.061 17:58:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:56.061 17:58:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513
00:03:56.061 17:58:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:56.061 17:58:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:56.061 17:58:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:03:56.061 17:58:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:56.061 17:58:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:56.061 17:58:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:56.061 17:58:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:03:56.061 17:58:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:03:56.061 17:58:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output
00:03:56.061 17:58:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:56.061 17:58:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:58.599 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:58.599 0000:5f:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:58.599 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:58.599 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:58.599 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:58.599 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:58.599 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:58.599 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:58.599 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:58.599 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:58.599 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:58.599 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:58.599 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:58.599 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:58.599 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:58.599 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:58.599 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
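The hugepages.sh@81-@84 loop traced above is where the odd page count gets spread over the NUMA nodes. The request was 2098176 kB, which at the 2048 kB default hugepage size works out to 1024.5 pages, evidently rounded up to the odd total of 1025 (HUGEMEM=2049 MB is then exported for setup.sh). A minimal sketch of the spread the xtrace implies; the bare ':' commands in the trace are the arithmetic side effects shown here (reconstructed, not the verbatim source):

  # Sketch of get_test_nr_hugepages_per_node's spread as implied by the
  # xtrace (hugepages.sh@81-@84): floor-divide what is left over the nodes
  # still unassigned, from the highest index down, so the remainder lands
  # on node 0.
  spread_hugepages() {
  	local _nr_hugepages=$1 _no_nodes=$2
  	local -a nodes_test=()
  	while (( _no_nodes > 0 )); do
  		nodes_test[_no_nodes - 1]=$(( _nr_hugepages / _no_nodes ))
  		: $(( _nr_hugepages -= nodes_test[_no_nodes - 1] ))   # traced as ': 513', then ': 0'
  		: $(( --_no_nodes ))                                  # traced as ': 1', then ': 0'
  	done
  	declare -p nodes_test
  }
  spread_hugepages 1025 2   # declare -a nodes_test=([0]="513" [1]="512"), matching the trace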
'VmallocChunk: 0 kB' 'Percpu: 94080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3623892 kB' 'DirectMap2M: 45338624 kB' 'DirectMap1G: 153092096 kB' 00:03:58.599 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.599 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.599 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.599 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.599 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.599 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.599 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.599 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.599 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.600 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.600 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.600 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.600 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.600 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.600 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.600 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.600 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.600 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.600 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.600 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.600 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.600 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.600 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.600 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.600 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.600 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.600 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.600 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.600 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:58.600 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.600 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.600 17:58:51 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _
[setup/common.sh@31-32 read loop: keys Active(anon) through HardwareCorrupted each fail the AnonHugePages match and log continue, IFS=': ', read -r var val _]
00:03:58.601 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:58.601 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:58.601 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:58.601 17:58:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
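The loop traced above is setup/common.sh's get_meminfo walking /proc/meminfo key by key until the requested field (here AnonHugePages) matches, then echoing its value and returning. A minimal standalone sketch of that lookup, using a hypothetical helper name (get_meminfo_value is illustrative, not SPDK's actual function):

    #!/usr/bin/env bash
    # Look up one /proc/meminfo field the same way the trace does:
    # IFS=': ' splits "AnonHugePages:       0 kB" into key, value, unit.
    get_meminfo_value() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"      # e.g. 0, matching "anon=0" above
                return 0
            fi
        done < /proc/meminfo
        return 1                 # key not present on this kernel
    }

    anon=$(get_meminfo_value AnonHugePages)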
00:03:58.601 17:58:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:58.601 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:58.601 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:58.601 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:58.601 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:58.601 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:58.601 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:58.601 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:58.601 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:58.601 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:58.601 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:58.601 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:58.601 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381160 kB' 'MemFree: 170372808 kB' 'MemAvailable: 173682776 kB' 'Buffers: 4132 kB' 'Cached: 14865596 kB' 'SwapCached: 0 kB' 'Active: 11713628 kB' 'Inactive: 3710384 kB' 'Active(anon): 11234620 kB' 'Inactive(anon): 0 kB' 'Active(file): 479008 kB' 'Inactive(file): 3710384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 557596 kB' 'Mapped: 206552 kB' 'Shmem: 10680336 kB' 'KReclaimable: 530052 kB' 'Slab: 1164912 kB' 'SReclaimable: 530052 kB' 'SUnreclaim: 634860 kB' 'KernelStack: 20432 kB' 'PageTables: 8644 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029584 kB' 'Committed_AS: 12655912 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315936 kB' 'VmallocChunk: 0 kB' 'Percpu: 94080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3623892 kB' 'DirectMap2M: 45338624 kB' 'DirectMap1G: 153092096 kB'
[setup/common.sh@31-32 read loop: keys MemTotal through HugePages_Rsvd each fail the HugePages_Surp match and log continue, IFS=': ', read -r var val _]
00:03:58.603 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:58.603 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:58.603 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:58.603 17:58:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
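Every get_meminfo call above replays the same prologue: with no NUMA node argument, "local node=" stays empty, the -e test on /sys/devices/system/node/node/meminfo fails, and the function keeps /proc/meminfo as its source before mapfile slurps it and the "Node N " prefix strip runs. A sketch of that source selection, inferred from the @18/@22/@23/@28/@29 trace lines (variable names follow the trace, but this is not the verbatim SPDK code):

    #!/usr/bin/env bash
    shopt -s extglob                  # required by the +([0-9]) pattern below
    node=${1-}                        # empty => system-wide totals
    mem_f=/proc/meminfo
    # With node=0 this would pick /sys/devices/system/node/node0/meminfo;
    # with node empty the path does not exist, so /proc/meminfo is kept.
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node meminfo lines carry a "Node N " prefix; strip it so the same
    # "Key: value" parsing works for both sources.
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]}"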
00:03:58.603 17:58:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:58.603 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:58.603 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:58.603 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:58.603 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:58.603 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:58.603 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:58.603 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:58.603 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:58.603 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:58.603 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:58.603 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:58.603 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381160 kB' 'MemFree: 170373836 kB' 'MemAvailable: 173683804 kB' 'Buffers: 4132 kB' 'Cached: 14865612 kB' 'SwapCached: 0 kB' 'Active: 11713636 kB' 'Inactive: 3710384 kB' 'Active(anon): 11234628 kB' 'Inactive(anon): 0 kB' 'Active(file): 479008 kB' 'Inactive(file): 3710384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 557572 kB' 'Mapped: 206552 kB' 'Shmem: 10680352 kB' 'KReclaimable: 530052 kB' 'Slab: 1164924 kB' 'SReclaimable: 530052 kB' 'SUnreclaim: 634872 kB' 'KernelStack: 20432 kB' 'PageTables: 8656 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029584 kB' 'Committed_AS: 12655932 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315936 kB' 'VmallocChunk: 0 kB' 'Percpu: 94080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3623892 kB' 'DirectMap2M: 45338624 kB' 'DirectMap1G: 153092096 kB'
[setup/common.sh@31-32 read loop: keys MemTotal through HugePages_Free each fail the HugePages_Rsvd match and log continue, IFS=': ', read -r var val _]
00:03:58.605 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:58.605 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:58.605 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:58.605 17:58:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:58.605 17:58:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:03:58.605 nr_hugepages=1025
00:03:58.605 17:58:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:58.605 resv_hugepages=0
00:03:58.605 17:58:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:58.605 surplus_hugepages=0
00:03:58.605 17:58:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:58.605 anon_hugepages=0
00:03:58.605 17:58:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:58.605 17:58:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
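The hugepages.sh@102-@109 lines are the core of the odd_alloc check: the test requested an odd page count (1025) and passes only if HugePages_Total reports exactly that, with no surplus or reserved pages making up the difference. Restated as a sketch (reusing the hypothetical get_meminfo_value helper from above; 1025 stands in for the requested count):

    surp=$(get_meminfo_value HugePages_Surp)       # 0 in this run
    resv=$(get_meminfo_value HugePages_Rsvd)       # 0 in this run
    nr_hugepages=$(get_meminfo_value HugePages_Total)
    # Both conditions must hold: the odd count is fully backed by real pages,
    # and none of it comes from surplus or reserved accounting.
    (( 1025 == nr_hugepages + surp + resv ))
    (( 1025 == nr_hugepages ))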
00:03:58.605 17:58:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:58.605 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:58.605 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:58.605 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:58.605 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:58.605 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:58.605 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:58.605 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:58.605 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:58.605 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:58.605 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:58.605 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:58.605 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381160 kB' 'MemFree: 170373836 kB' 'MemAvailable: 173683804 kB' 'Buffers: 4132 kB' 'Cached: 14865612 kB' 'SwapCached: 0 kB' 'Active: 11713672 kB' 'Inactive: 3710384 kB' 'Active(anon): 11234664 kB' 'Inactive(anon): 0 kB' 'Active(file): 479008 kB' 'Inactive(file): 3710384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 557608 kB' 'Mapped: 206552 kB' 'Shmem: 10680352 kB' 'KReclaimable: 530052 kB' 'Slab: 1164924 kB' 'SReclaimable: 530052 kB' 'SUnreclaim: 634872 kB' 'KernelStack: 20448 kB' 'PageTables: 8708 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029584 kB' 'Committed_AS: 12655952 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315952 kB' 'VmallocChunk: 0 kB' 'Percpu: 94080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3623892 kB' 'DirectMap2M: 45338624 kB' 'DirectMap1G: 153092096 kB'
[setup/common.sh@31-32 read loop: keys MemTotal through SecPageTables each fail the HugePages_Total match and log continue, IFS=': ', read -r var val _]
00:03:58.606 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable ==
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.606 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.606 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.606 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.606 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.606 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.606 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.606 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.606 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.606 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.606 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.606 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.606 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.606 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.606 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.606 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.606 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.606 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.606 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.606 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.606 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.606 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.606 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.606 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.606 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.606 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.606 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.606 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.606 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.606 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.606 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.606 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.606 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.606 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.606 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.606 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.606 17:58:51 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.606 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.606 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.606 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.607 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.607 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.607 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.607 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.607 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.607 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.607 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.607 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.607 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.607 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.607 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.607 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.607 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.607 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.607 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.607 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.607 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.607 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.607 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.607 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.607 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.607 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.607 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.607 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.607 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.607 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.607 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.607 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.607 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.607 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:58.607 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.607 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:58.607 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:58.607 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:58.607 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:58.607 17:58:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:58.607 17:58:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:58.607 17:58:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:58.607 17:58:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:58.607 17:58:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:58.607 17:58:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:58.607 17:58:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:58.607 17:58:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:58.607 17:58:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:58.607 17:58:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:58.607 17:58:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:58.607 17:58:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:58.607 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:58.607 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:58.607 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:58.607 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:58.607 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:58.607 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:58.607 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:58.607 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:58.607 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:58.607 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:58.607 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:58.607 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 87776464 kB' 'MemUsed: 9886220 kB' 'SwapCached: 0 kB' 'Active: 7208100 kB' 'Inactive: 251300 kB' 'Active(anon): 7011048 kB' 'Inactive(anon): 0 kB' 'Active(file): 197052 kB' 'Inactive(file): 251300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7013940 kB' 'Mapped: 116064 kB' 'AnonPages: 448624 kB' 'Shmem: 6565588 kB' 'KernelStack: 11720 kB' 'PageTables: 5808 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 368388 kB' 'Slab: 675460 kB' 'SReclaimable: 368388 kB' 'SUnreclaim: 307072 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
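The run above is one complete get_meminfo call. Pieced together from the traced commands (the setup/common.sh@NN markers), the helper plausibly reads like the sketch below; this is a reconstruction for readability, not the verbatim test/setup/common.sh source, and the read-loop plumbing around @16/@31 is inferred.

#!/usr/bin/env bash
shopt -s extglob   # @29 relies on the +([0-9]) extended glob

# get_meminfo FIELD [NODE]: print FIELD's value from /proc/meminfo, or from
# the per-node copy under sysfs when NODE is given.
get_meminfo() {
	local get=$1 node=$2                  # @17-@18
	local var val
	local mem_f mem
	mem_f=/proc/meminfo                   # @22
	if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo   # @24
	fi
	mapfile -t mem < "$mem_f"             # @28
	mem=("${mem[@]#Node +([0-9]) }")      # @29: drop the "Node N " prefix on sysfs lines
	while IFS=': ' read -r var val _; do  # @31
		[[ $var == "$get" ]] || continue  # @32: the long scan runs condensed above
		echo "$val"                       # @33
		return 0
	done < <(printf '%s\n' "${mem[@]}")   # @16
}

Called as "get_meminfo HugePages_Total" it prints 1025 on this rig; "get_meminfo HugePages_Surp 0" reads node0's sysfs copy instead, which is what each field-by-field scan in this log corresponds to.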
00:03:58.607 17:58:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:58.607 17:58:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:58.607 17:58:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node
00:03:58.607 17:58:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:58.607 17:58:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:58.607 17:58:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:58.607 17:58:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513
00:03:58.607 17:58:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:58.607 17:58:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:58.607 17:58:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:58.607 17:58:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:58.607 17:58:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:58.607 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:58.607 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0
00:03:58.607 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:58.607 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:58.607 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:58.607 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:58.607 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:58.607 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:58.607 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:58.607 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:58.607 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:58.607 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 87776464 kB' 'MemUsed: 9886220 kB' 'SwapCached: 0 kB' 'Active: 7208100 kB' 'Inactive: 251300 kB' 'Active(anon): 7011048 kB' 'Inactive(anon): 0 kB' 'Active(file): 197052 kB' 'Inactive(file): 251300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7013940 kB' 'Mapped: 116064 kB' 'AnonPages: 448624 kB' 'Shmem: 6565588 kB' 'KernelStack: 11720 kB' 'PageTables: 5808 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 368388 kB' 'Slab: 675460 kB' 'SReclaimable: 368388 kB' 'SUnreclaim: 307072 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace condensed: setup/common.sh@31-@32 steps field by field with IFS=': ' / read -r var val _, hitting "continue" for every key that is not HugePages_Surp]
00:03:58.869 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:58.869 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:58.869 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:58.869 17:58:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:58.869 17:58:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:58.869 17:58:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:58.869 17:58:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:58.869 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:58.869 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1
00:03:58.869 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:58.869 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:58.869 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:58.869 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:58.869 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:58.869 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:58.869 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:58.869 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:58.869 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:58.869 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718476 kB' 'MemFree: 82597648 kB' 'MemUsed: 11120828 kB' 'SwapCached: 0 kB' 'Active: 4505948 kB' 'Inactive: 3459084 kB' 'Active(anon): 4223992 kB' 'Inactive(anon): 0 kB' 'Active(file): 281956 kB' 'Inactive(file): 3459084 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7855864 kB' 'Mapped: 90488 kB' 'AnonPages: 109288 kB' 'Shmem: 4114824 kB' 'KernelStack: 8728 kB' 'PageTables: 2900 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 161664 kB' 'Slab: 489464 kB' 'SReclaimable: 161664 kB' 'SUnreclaim: 327800 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0'
[xtrace condensed: setup/common.sh@31-@32 steps field by field with IFS=': ' / read -r var val _, hitting "continue" for every key that is not HugePages_Surp]
00:03:58.870 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:58.870 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:58.870 17:58:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:58.870 17:58:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
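For reference, the setup/hugepages.sh@110-@117 entries bracketing those two lookups amount to bookkeeping of roughly the following shape. This is inferred from the trace, not copied from the source: get_meminfo is the helper sketched earlier, nr_hugepages/surp/resv/nodes_test/nodes_sys are the names the trace itself shows, and the sysfs nr_hugepages path inside get_nodes is an assumption since the trace only shows the resolved values.

shopt -s extglob                 # for the node+([0-9]) glob below
nr_hugepages=1025 surp=0 resv=0  # assumed pre-state matching this log
nodes_test=([0]=513 [1]=512)     # expected per-node split, filled in earlier

(($(get_meminfo HugePages_Total) == nr_hugepages + surp + resv))   # @110: 1025 == 1025

get_nodes() {   # @27-@33: record each node's current hugepage count
	local node
	for node in /sys/devices/system/node/node+([0-9]); do          # @29
		# Assumed source of the values; the trace only shows 512 and 513.
		nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")   # @30
	done
	no_nodes=${#nodes_sys[@]}   # @32: 2 on this rig
	((no_nodes > 0))            # @33
}
get_nodes                       # @112: nodes_sys[0]=512, nodes_sys[1]=513

for node in "${!nodes_test[@]}"; do                                # @115
	((nodes_test[node] += resv))                                   # @116
	((nodes_test[node] += $(get_meminfo HugePages_Surp "$node")))  # @117: += 0 on both nodes
done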
setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:58.870 17:58:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:58.870 17:58:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:58.870 17:58:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513'
00:03:58.870 node0=512 expecting 513
00:03:58.870 17:58:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:58.870 17:58:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:58.870 17:58:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:58.870 17:58:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512'
00:03:58.870 node1=513 expecting 512
00:03:58.870 17:58:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]]
00:03:58.870 
00:03:58.870 real 0m2.741s
00:03:58.870 user 0m1.047s
00:03:58.870 sys 0m1.716s
00:03:58.870 17:58:51 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:03:58.870 17:58:51 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:58.870 ************************************
00:03:58.870 END TEST odd_alloc
00:03:58.870 ************************************
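What END TEST odd_alloc just confirmed: 1025 hugepages (a deliberately odd total) were spread over the two nodes, and the kernel's actual split (512 on node0, 513 on node1) matches the expected split as an unordered pair, so no page was lost and it does not matter which node received the extra one. The sorted_t/sorted_s assignments at setup/hugepages.sh@127 plausibly implement that order-insensitive comparison by using the counts as integer array indices, which bash enumerates in ascending order; a reconstruction, not the verbatim source:

# Sketch of the comparison traced at setup/hugepages.sh@126-@130. Using a page
# count as the *index* of a plain indexed array makes "${!array[*]}" expand in
# ascending numeric order, i.e. a sorted set of the per-node counts.
nodes_test=([0]=513 [1]=512)   # expected split of 1025 (from this log)
nodes_sys=([0]=512 [1]=513)    # split the kernel actually produced
sorted_t=() sorted_s=()
for node in "${!nodes_test[@]}"; do          # @126
	sorted_t[nodes_test[node]]=1             # @127
	sorted_s[nodes_sys[node]]=1              # @127
	echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"   # @128
done
[[ ${!sorted_s[*]} == "${!sorted_t[*]}" ]]   # @130: "512 513" == "512 513"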
00:03:58.870 17:58:51 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:03:58.870 17:58:51 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:03:58.870 17:58:51 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:03:58.870 17:58:51 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:58.870 ************************************
00:03:58.870 START TEST custom_alloc
00:03:58.870 ************************************
00:03:58.870 17:58:51 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # custom_alloc
00:03:58.870 17:58:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=,
00:03:58.870 17:58:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node
00:03:58.870 17:58:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=()
00:03:58.870 17:58:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp
00:03:58.870 17:58:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:03:58.870 17:58:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:03:58.870 17:58:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:03:58.870 17:58:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:58.870 17:58:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:58.870 17:58:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:58.870 17:58:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:58.870 17:58:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:58.870 17:58:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:58.870 17:58:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:58.870 17:58:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:58.871 17:58:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:58.871 17:58:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:58.871 17:58:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:58.871 17:58:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:58.871 17:58:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:58.871 17:58:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:03:58.871 17:58:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256
00:03:58.871 17:58:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:58.871 17:58:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:58.871 17:58:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:03:58.871 17:58:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:58.871 17:58:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:58.871 17:58:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:58.871 17:58:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:03:58.871 17:58:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 ))
00:03:58.871 17:58:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152
00:03:58.871 17:58:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:58.871 17:58:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:58.871 17:58:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:58.871 17:58:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:58.871 17:58:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:58.871 17:58:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:58.871 17:58:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:58.871 17:58:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:58.871 17:58:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:58.871 17:58:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:58.871 17:58:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:58.871 17:58:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:58.871 17:58:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:03:58.871 17:58:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:58.871 17:58:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:03:58.871 17:58:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:03:58.871 17:58:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024
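Both get_test_nr_hugepages calls above end in get_test_nr_hugepages_per_node, whose three branches are visible at @69, @74 and @81. A sketch of that logic as inferred from the trace: the arithmetic behind the ": 256 / : 0" and ": 1 / : 0" no-ops at @83-@84 is deduced from those printed values, and the @69 branch body is an assumption since that branch is never taken in this log.

# Inferred shape of get_test_nr_hugepages_per_node (setup/hugepages.sh@62-@84).
get_test_nr_hugepages_per_node() {
	local user_nodes=("$@")             # @62: explicit per-node requests, none here
	local _nr_hugepages=$nr_hugepages   # @64: 512, then 1024
	local _no_nodes=$no_nodes           # @65: 2 NUMA nodes
	local -g nodes_test
	nodes_test=()                       # @67
	local node
	if ((${#user_nodes[@]} > 0)); then          # @69: honor an explicit node list
		for node in "${user_nodes[@]}"; do      # assumed body, branch not exercised
			nodes_test[node]=$_nr_hugepages
		done
	elif ((${#nodes_hp[@]} > 0)); then          # @74: reuse prepared per-node targets
		for _no_nodes in "${!nodes_hp[@]}"; do  # @75
			nodes_test[_no_nodes]=${nodes_hp[_no_nodes]}   # @76
		done
	else                                        # @81-@84: spread evenly, last node first
		while ((_no_nodes > 0)); do             # @81
			nodes_test[_no_nodes - 1]=$((_nr_hugepages / _no_nodes))   # @82: 256, 256
			: $((_nr_hugepages -= nodes_test[_no_nodes - 1]))          # @83: 256, then 0 left
			: $((--_no_nodes))                                         # @84: 1, then 0 left
		done
	fi
	return 0                                    # @78
}

Dividing the remaining pages by the remaining nodes is also what produced odd_alloc's split earlier: 1025/2 gives 512 for node1, then 513/1 gives 513 for node0.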
node in "${!nodes_hp[@]}" 00:03:58.871 17:58:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:58.871 17:58:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:58.871 17:58:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:58.871 17:58:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:58.871 17:58:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:58.871 17:58:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:58.871 17:58:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:58.871 17:58:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:58.871 17:58:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:58.871 17:58:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:58.871 17:58:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:58.871 17:58:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:58.871 17:58:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:58.871 17:58:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:58.871 17:58:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:58.871 17:58:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:58.871 17:58:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:58.871 17:58:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:58.871 17:58:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:58.871 17:58:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:58.871 17:58:51 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:58.871 17:58:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:58.871 17:58:51 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:01.412 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:01.412 0000:5f:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:01.412 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:01.412 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:01.412 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:01.412 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:01.412 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:01.412 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:01.412 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:01.412 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:01.412 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:01.412 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:01.412 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 
00:04:01.412 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:01.412 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:01.412 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:01.412 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:01.412 17:58:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:04:01.412 17:58:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:01.412 17:58:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:01.412 17:58:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:01.412 17:58:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:01.412 17:58:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:01.412 17:58:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:01.412 17:58:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:01.412 17:58:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:01.412 17:58:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:01.412 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:01.412 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:01.412 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:01.412 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:01.412 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.412 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:01.412 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:01.412 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.412 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.412 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.412 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.412 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381160 kB' 'MemFree: 169324896 kB' 'MemAvailable: 172634864 kB' 'Buffers: 4132 kB' 'Cached: 14865736 kB' 'SwapCached: 0 kB' 'Active: 11714968 kB' 'Inactive: 3710384 kB' 'Active(anon): 11235960 kB' 'Inactive(anon): 0 kB' 'Active(file): 479008 kB' 'Inactive(file): 3710384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 558304 kB' 'Mapped: 206632 kB' 'Shmem: 10680476 kB' 'KReclaimable: 530052 kB' 'Slab: 1165804 kB' 'SReclaimable: 530052 kB' 'SUnreclaim: 635752 kB' 'KernelStack: 20464 kB' 'PageTables: 8760 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506320 kB' 'Committed_AS: 12655724 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316032 kB' 'VmallocChunk: 0 kB' 'Percpu: 94080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 
kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3623892 kB' 'DirectMap2M: 45338624 kB' 'DirectMap1G: 153092096 kB' 00:04:01.412 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.412 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.412 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.412 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.412 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.412 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.413 17:58:54 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.413 17:58:54 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.413 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.414 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.414 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.414 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.414 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.414 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.414 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.414 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.414 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.414 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.414 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.414 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.414 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.414 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.414 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s 
]] 00:04:01.414 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.414 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.414 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.414 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.414 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.414 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.414 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.414 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.414 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.414 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.414 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.414 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.414 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.414 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.414 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.414 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.414 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.414 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.414 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.414 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.414 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.414 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.414 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.414 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:01.414 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:01.414 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:01.414 17:58:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:01.414 17:58:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:01.414 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:01.414 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:01.414 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:01.414 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:01.414 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.414 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:01.414 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 
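Each of the long runs of [[ key == \H\u\g\e\P\a\g\e\s... ]] / continue entries in this section is a single get_meminfo pass over /proc/meminfo: setup/common.sh slurps the file with mapfile, splits every line on ': ', and skips entries until the requested key matches (bash xtrace prints the right-hand pattern with every character backslash-escaped, which is why the key names look mangled). A simplified reconstruction of that lookup, assuming only what the trace shows (mem_f, mapfile -t mem, the "Node N " prefix strip, and the IFS=': ' read loop); the function body is an illustration, not the script verbatim:

    #!/usr/bin/env bash
    shopt -s extglob   # required by the +([0-9]) pattern below

    # Print the value of one field from /proc/meminfo, or from a per-node
    # meminfo file when a node number is given.
    get_meminfo() {
        local get=$1 node=${2:-} mem_f=/proc/meminfo line var val _
        # Per-node files prefix every line with "Node N ".
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local -a mem
        mapfile -t mem <"$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")        # drop the "Node N " prefix, if any
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<<"$line"
            [[ $var == "$get" ]] || continue    # literal match, like the escaped pattern
            echo "$val"                          # e.g. "1536" for HugePages_Total
            return 0
        done
        return 1
    }

    get_meminfo HugePages_Surp    # prints 0 on the host traced above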
00:04:01.414 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.414 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.414 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.414 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.414 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381160 kB' 'MemFree: 169326632 kB' 'MemAvailable: 172636600 kB' 'Buffers: 4132 kB' 'Cached: 14865752 kB' 'SwapCached: 0 kB' 'Active: 11714052 kB' 'Inactive: 3710384 kB' 'Active(anon): 11235044 kB' 'Inactive(anon): 0 kB' 'Active(file): 479008 kB' 'Inactive(file): 3710384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 557804 kB' 'Mapped: 206564 kB' 'Shmem: 10680492 kB' 'KReclaimable: 530052 kB' 'Slab: 1165856 kB' 'SReclaimable: 530052 kB' 'SUnreclaim: 635804 kB' 'KernelStack: 20432 kB' 'PageTables: 8688 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506320 kB' 'Committed_AS: 12656244 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315984 kB' 'VmallocChunk: 0 kB' 'Percpu: 94080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3623892 kB' 'DirectMap2M: 45338624 kB' 'DirectMap1G: 153092096 kB' 00:04:01.414 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.414 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.414 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.414 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.414 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.414 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.414 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.414 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.414 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.414 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.414 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.414 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.414 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.414 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.414 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.414 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.414 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.414 17:58:54 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.414 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.414 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.414 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.414 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.414 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.414 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.414 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.414 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.414 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.414 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.414 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.414 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.414 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.414 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.414 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.414 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.414 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.414 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.414 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.414 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.414 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.414 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.414 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.414 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.414 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.414 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.414 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.414 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.414 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.414 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.414 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.414 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.414 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.415 
17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # continue 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.415 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
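The meminfo snapshots dumped above are self-consistent with the requested layout: HugePages_Total is 1536 (the 512 + 1024 per-node split), HugePages_Free is still 1536 because nothing has mapped the pages yet, and Hugetlb = 1536 x 2048 kB = 3145728 kB. On a host using a single hugepage size (an assumption; the Hugetlb field sums all sizes), that identity can be spot-checked directly:

    # Illustrative check: does HugePages_Total x Hugepagesize equal Hugetlb?
    awk '/^HugePages_Total:/ {t=$2}
         /^Hugepagesize:/    {sz=$2}
         /^Hugetlb:/         {h=$2}
         END {printf "%d*%dkB=%d, Hugetlb=%d, match=%s\n",
                     t, sz, t*sz, h, (t*sz==h ? "yes" : "no")}' /proc/meminfo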
00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381160 kB' 'MemFree: 169327808 kB' 'MemAvailable: 172637776 kB' 'Buffers: 4132 kB' 'Cached: 14865764 kB' 'SwapCached: 0 kB' 'Active: 11714056 kB' 'Inactive: 3710384 kB' 'Active(anon): 11235048 kB' 'Inactive(anon): 0 kB' 'Active(file): 479008 kB' 'Inactive(file): 3710384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 557788 kB' 'Mapped: 206564 kB' 'Shmem: 10680504 kB' 'KReclaimable: 530052 kB' 'Slab: 1165804 kB' 'SReclaimable: 530052 kB' 'SUnreclaim: 635752 kB' 'KernelStack: 20416 kB' 'PageTables: 8608 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 
kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506320 kB' 'Committed_AS: 12656264 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315984 kB' 'VmallocChunk: 0 kB' 'Percpu: 94080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3623892 kB' 'DirectMap2M: 45338624 kB' 'DirectMap1G: 153092096 kB' 00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.416 
17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.416 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.417 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.417 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.417 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.417 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.417 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.417 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.417 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.417 17:58:54 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:01.417 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.417 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.417 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.417 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.417 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.417 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.417 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.417 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.417 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.417 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.417 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.417 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.417 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.417 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.417 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.417 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.417 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.417 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.417 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.417 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.417 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.417 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.417 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.417 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.417 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.417 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.417 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.417 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.417 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.417 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.417 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:01.417 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:01.417 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:01.417 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:01.417 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
00:04:01.417 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:01.417 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[... the same "@31 read -r var val _" / "@32 [[ <key> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]" / "@32 continue" xtrace repeats for every remaining non-matching meminfo key through HugePages_Free ...]
00:04:01.418 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:01.418 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:01.418 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:01.418 17:58:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:01.418 17:58:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
nr_hugepages=1536
00:04:01.418 17:58:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:04:01.418 17:58:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:04:01.418 17:58:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:04:01.418 17:58:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:04:01.418 17:58:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
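The xtrace above is setup/common.sh's get_meminfo() scanning a meminfo snapshot key by key until the requested field (here HugePages_Rsvd) matches, then echoing its value. A minimal bash sketch reconstructed from the traced statements (@17-@33) follows; this is an approximation inferred from the log, not the verbatim SPDK helper:

    #!/usr/bin/env bash
    # Sketch of the lookup traced above; reconstructed from the xtrace,
    # not the verbatim setup/common.sh.
    shopt -s extglob # must precede parsing of the +([0-9]) pattern below

    get_meminfo() {
        local get=$1 node=$2
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        # With a node argument, read that node's own meminfo instead (@23/@24).
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node <N> "; strip it (@29).
        mem=("${mem[@]#Node +([0-9]) }")
        # Scan key by key; on a match, print the value and stop (@31-@33).
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo HugePages_Rsvd # -> 0 in the run above

The same helper is reused below for HugePages_Total system-wide and HugePages_Surp per node, which is why the identical scan pattern recurs in the trace.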
00:04:01.418 17:58:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:01.418 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:01.418 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:01.418 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:01.418 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:01.418 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:01.418 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:01.418 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:01.418 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:01.418 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:01.418 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:01.418 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:01.418 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381160 kB' 'MemFree: 169327808 kB' 'MemAvailable: 172637776 kB' 'Buffers: 4132 kB' 'Cached: 14865788 kB' 'SwapCached: 0 kB' 'Active: 11713804 kB' 'Inactive: 3710384 kB' 'Active(anon): 11234796 kB' 'Inactive(anon): 0 kB' 'Active(file): 479008 kB' 'Inactive(file): 3710384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 557512 kB' 'Mapped: 206564 kB' 'Shmem: 10680528 kB' 'KReclaimable: 530052 kB' 'Slab: 1165804 kB' 'SReclaimable: 530052 kB' 'SUnreclaim: 635752 kB' 'KernelStack: 20416 kB' 'PageTables: 8608 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506320 kB' 'Committed_AS: 12656284 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315984 kB' 'VmallocChunk: 0 kB' 'Percpu: 94080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3623892 kB' 'DirectMap2M: 45338624 kB' 'DirectMap1G: 153092096 kB'
00:04:01.418 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:01.418 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[... the "@31 read" / "@32 [[ <key> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]" / "@32 continue" xtrace repeats for every non-matching meminfo key ...]
00:04:01.420 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:01.420 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536
00:04:01.420 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:01.420 17:58:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
00:04:01.420 17:58:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:01.420 17:58:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:04:01.420 17:58:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:01.420 17:58:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:01.420 17:58:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:01.420 17:58:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:01.420 17:58:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:01.420 17:58:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
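get_nodes (hugepages.sh@27-@33 above) walks /sys/devices/system/node/node+([0-9]) and records each node's current hugepage count, here 512 on node0 and 1024 on node1. A rough bash equivalent is sketched below; the hugepages-2048kB nr_hugepages sysfs path is an assumed standard kernel location chosen to match the 2048 kB Hugepagesize in the snapshot above, since the trace only shows the resulting assignments:

    # Rough sketch of the node enumeration; the sysfs nr_hugepages path is
    # an assumption, the trace only shows nodes_sys[0]=512 / nodes_sys[1]=1024.
    shopt -s extglob nullglob

    get_nodes() {
        local node
        nodes_sys=()
        for node in /sys/devices/system/node/node+([0-9]); do
            # "node1" -> index 1; value: that node's current 2MB hugepage count
            nodes_sys[${node##*node}]=$(<"$node/hugepages/hugepages-2048kB/nr_hugepages")
        done
        no_nodes=${#nodes_sys[@]}
        ((no_nodes > 0)) # fail if no NUMA nodes were found
    }

With the totals confirmed (1536 == 512 + 1024 plus zero surplus and reserved), the script next verifies each node individually, as traced below.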
00:04:01.420 17:58:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:01.420 17:58:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:01.420 17:58:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:01.420 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:01.420 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:04:01.420 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:01.420 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:01.420 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:01.420 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:01.420 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:01.420 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:01.420 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:01.420 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:01.420 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:01.420 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 87793504 kB' 'MemUsed: 9869180 kB' 'SwapCached: 0 kB' 'Active: 7207168 kB' 'Inactive: 251300 kB' 'Active(anon): 7010116 kB' 'Inactive(anon): 0 kB' 'Active(file): 197052 kB' 'Inactive(file): 251300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7013940 kB' 'Mapped: 116568 kB' 'AnonPages: 447608 kB' 'Shmem: 6565588 kB' 'KernelStack: 11704 kB' 'PageTables: 5776 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 368388 kB' 'Slab: 675704 kB' 'SReclaimable: 368388 kB' 'SUnreclaim: 307316 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:04:01.420 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:01.420 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[... the "@31 read" / "@32 [[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]" / "@32 continue" xtrace repeats for every non-matching node0 meminfo key ...]
00:04:01.421 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:01.421 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:01.421 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:01.421 17:58:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
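The identical HugePages_Surp lookup repeats for node1 below, after which (hugepages.sh@126-@127) the script folds the per-node results together. Taken as a whole, the @115-@127 lines trace a loop of roughly the following shape; values are from this run (resv=0, zero surplus on both nodes), get_meminfo is the sketch from earlier, variable names follow the trace, and the loop shape is inferred rather than copied:

    declare -a nodes_test=([0]=512 [1]=1024) # expected per-node split under test
    declare -a nodes_sys=([0]=512 [1]=1024)  # counts get_nodes read from sysfs
    declare -A sorted_t sorted_s
    resv=0
    for node in "${!nodes_test[@]}"; do
        ((nodes_test[node] += resv))                                  # @116
        ((nodes_test[node] += $(get_meminfo HugePages_Surp "$node"))) # @117
    done
    for node in "${!nodes_test[@]}"; do
        sorted_t[${nodes_test[node]}]=1 # @127: array keys dedupe the test counts
        sorted_s[${nodes_sys[node]}]=1  # @127: and the counts read from sysfs
    done

Using array subscripts as a set means the later comparison only has to look at the distinct counts rather than every node.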
00:04:01.421 17:58:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:01.421 17:58:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:01.421 17:58:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:01.422 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:01.422 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1
00:04:01.422 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:01.422 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:01.422 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:01.422 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:01.422 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:01.422 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:01.682 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:01.682 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:01.682 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718476 kB' 'MemFree: 81534128 kB' 'MemUsed: 12184348 kB' 'SwapCached: 0 kB' 'Active: 4512340 kB' 'Inactive: 3459084 kB' 'Active(anon): 4230384 kB' 'Inactive(anon): 0 kB' 'Active(file): 281956 kB' 'Inactive(file): 3459084 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7856024 kB' 'Mapped: 90500 kB' 'AnonPages: 116024 kB' 'Shmem: 4114984 kB' 'KernelStack: 8744 kB' 'PageTables: 2900 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 161664 kB' 'Slab: 490068 kB' 'SReclaimable: 161664 kB' 'SUnreclaim: 328404 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:01.682 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:01.682 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:01.682 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[... the "@31 read" / "@32 [[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]" / "@32 continue" xtrace repeats for every non-matching node1 meminfo key ...]
00:04:01.683 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:01.683 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:01.683 17:58:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:01.683 17:58:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:01.683 17:58:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:01.683 17:58:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:01.683 17:58:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:01.683 17:58:54 setup.sh.hugepages.custom_alloc
-- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:01.683 node0=512 expecting 512 00:04:01.683 17:58:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:01.683 17:58:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:01.683 17:58:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:01.683 17:58:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:04:01.683 node1=1024 expecting 1024 00:04:01.683 17:58:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:04:01.683 00:04:01.683 real 0m2.724s 00:04:01.683 user 0m1.057s 00:04:01.683 sys 0m1.685s 00:04:01.683 17:58:54 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:01.683 17:58:54 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:01.683 ************************************ 00:04:01.683 END TEST custom_alloc 00:04:01.683 ************************************ 00:04:01.683 17:58:54 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:01.683 17:58:54 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:01.683 17:58:54 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:01.683 17:58:54 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:01.683 ************************************ 00:04:01.683 START TEST no_shrink_alloc 00:04:01.683 ************************************ 00:04:01.683 17:58:54 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # no_shrink_alloc 00:04:01.683 17:58:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:01.683 17:58:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:01.683 17:58:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:01.683 17:58:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:01.683 17:58:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:01.683 17:58:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:01.683 17:58:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:01.683 17:58:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:01.684 17:58:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:01.684 17:58:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:01.684 17:58:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:01.684 17:58:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:01.684 17:58:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:01.684 17:58:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:01.684 17:58:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:01.684 17:58:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:01.684 17:58:54 setup.sh.hugepages.no_shrink_alloc -- 
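The scan that just completed above is the harness's get_meminfo helper: it walks /proc/meminfo with IFS=': ' and "read -r var val _", skipping every field until the requested one (here HugePages_Surp) matches, then echoes its value. A minimal standalone sketch of that pattern (illustrative only; the real helper lives in setup/common.sh and also handles per-node files):

# get_meminfo sketch: print the value of one /proc/meminfo field.
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do   # "HugePages_Surp: 0" -> var=HugePages_Surp, val=0
        if [[ $var == "$get" ]]; then
            echo "$val"                    # e.g. 0, matching the "echo 0" in the trace above
            return 0
        fi
    done < /proc/meminfo
    return 1
}

get_meminfo HugePages_Surp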
00:04:01.684 17:58:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:01.684 17:58:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:01.684 17:58:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
00:04:01.684 17:58:54 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:04:01.684 17:58:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:01.684 17:58:54 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:04.216 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:04:04.216 0000:5f:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:04.216 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:04:04.216 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:04:04.216 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:04:04.216 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:04:04.216 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:04:04.216 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:04:04.216 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:04:04.216 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:04:04.216 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:04:04.216 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:04:04.481 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:04:04.481 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:04:04.481 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:04:04.481 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:04:04.481 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:04:04.481 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:04:04.481 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:04:04.481 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:04.481 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:04.481 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:04.481 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:04.481 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:04.481 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:04.481 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:04.481 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:04.481 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:04.481 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:04.481 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:04.481 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:04.481 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:04.481 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
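The setup/hugepages.sh@96 check a few lines up ("[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]") is the harness testing transparent hugepage state: the bracketed word in /sys/kernel/mm/transparent_hugepage/enabled (standard kernel sysfs) is the active mode, and AnonHugePages is only counted when that mode is not [never]. A standalone sketch of the same gate (variable name is illustrative):

# THP gate sketch: read the mode string and match against "[never]".
thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
if [[ $thp != *"[never]"* ]]; then
    echo "THP active ($thp): AnonHugePages will be counted"
fi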
00:04:04.481 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:04.481 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:04.481 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:04.481 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:04.481 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381160 kB' 'MemFree: 170350504 kB' 'MemAvailable: 173660472 kB' 'Buffers: 4132 kB' 'Cached: 14865888 kB' 'SwapCached: 0 kB' 'Active: 11715412 kB' 'Inactive: 3710384 kB' 'Active(anon): 11236404 kB' 'Inactive(anon): 0 kB' 'Active(file): 479008 kB' 'Inactive(file): 3710384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 559116 kB' 'Mapped: 206652 kB' 'Shmem: 10680628 kB' 'KReclaimable: 530052 kB' 'Slab: 1165752 kB' 'SReclaimable: 530052 kB' 'SUnreclaim: 635700 kB' 'KernelStack: 20496 kB' 'PageTables: 8848 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 12656784 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316016 kB' 'VmallocChunk: 0 kB' 'Percpu: 94080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3623892 kB' 'DirectMap2M: 45338624 kB' 'DirectMap1G: 153092096 kB'
[xtrace condensed: setup/common.sh@32 checks each snapshot field (MemTotal through HardwareCorrupted) against AnonHugePages and issues "continue" for every non-match]
00:04:04.482 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:04.482 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:04.482 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:04.482 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:04.482 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:04.482 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:04.482 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:04.482 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:04.482 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:04.482 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:04.482 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:04.482 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:04.482 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:04.482 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:04.482 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:04.482 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:04.482 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381160 kB' 'MemFree: 170359324 kB' 'MemAvailable: 173669292 kB' 'Buffers: 4132 kB' 'Cached: 14865888 kB' 'SwapCached: 0 kB' 'Active: 11715056 kB' 'Inactive: 3710384 kB' 'Active(anon): 11236048 kB' 'Inactive(anon): 0 kB' 'Active(file): 479008 kB' 'Inactive(file): 3710384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 558736 kB' 'Mapped: 206588 kB' 'Shmem: 10680628 kB' 'KReclaimable: 530052 kB' 'Slab: 1165744 kB' 'SReclaimable: 530052 kB' 'SUnreclaim: 635692 kB' 'KernelStack: 20464 kB' 'PageTables: 8732 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 12656800 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315984 kB' 'VmallocChunk: 0 kB' 'Percpu: 94080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3623892 kB' 'DirectMap2M: 45338624 kB' 'DirectMap1G: 153092096 kB'
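The mapfile/strip pair traced twice above ("mapfile -t mem" followed by mem=("${mem[@]#Node +([0-9]) }")) exists because per-node meminfo files prefix every line with "Node <N> "; in this run node= is empty, the /sys/devices/system/node/node/meminfo test fails, and plain /proc/meminfo is used. A sketch of the per-node path, assuming a node 0 exists on the machine:

# Per-node meminfo sketch; +([0-9]) is an extglob pattern, so enable extglob first.
shopt -s extglob
node=0                                             # empty in the trace => /proc/meminfo
mem_f=/proc/meminfo
[[ -e /sys/devices/system/node/node$node/meminfo ]] &&
    mem_f=/sys/devices/system/node/node$node/meminfo
mapfile -t mem < "$mem_f"
mem=("${mem[@]#Node +([0-9]) }")                   # "Node 0 MemTotal: ..." -> "MemTotal: ..."
printf '%s\n' "${mem[@]:0:3}"                      # first few normalized entries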
[xtrace condensed: setup/common.sh@32 checks each snapshot field (MemTotal through HugePages_Rsvd) against HugePages_Surp and issues "continue" for every non-match]
00:04:04.484 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:04.484 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:04.484 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:04.484 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:04.484 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:04.484 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:04.484 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:04.484 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:04.484 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:04.484 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:04.484 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:04.484 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:04.484 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:04.484 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:04.484 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:04.484 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:04.484 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381160 kB' 'MemFree: 170358768 kB' 'MemAvailable: 173668736 kB' 'Buffers: 4132 kB' 'Cached: 14865888 kB' 'SwapCached: 0 kB' 'Active: 11715000 kB' 'Inactive: 3710384 kB' 'Active(anon): 11235992 kB' 'Inactive(anon): 0 kB' 'Active(file): 479008 kB' 'Inactive(file): 3710384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 558676 kB' 'Mapped: 206588 kB' 'Shmem: 10680628 kB' 'KReclaimable: 530052 kB' 'Slab: 1165820 kB' 'SReclaimable: 530052 kB' 'SUnreclaim: 635768 kB' 'KernelStack: 20464 kB' 'PageTables: 8764 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 12659824 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315984 kB' 'VmallocChunk: 0 kB' 'Percpu: 94080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3623892 kB' 'DirectMap2M: 45338624 kB' 'DirectMap1G: 153092096 kB'
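At this point verify_nr_hugepages has anon=0 and surp=0 and is fetching HugePages_Rsvd from the snapshot above; the per-node totals it ultimately compares (the "node0=512 expecting 512" / "node1=1024 expecting 1024" lines earlier) are exposed under standard sysfs paths. A sketch of gathering the same numbers, reusing the get_meminfo sketch from earlier (the loop variable name is illustrative):

# Collect the counters verify_nr_hugepages works from.
anon=$(get_meminfo AnonHugePages)    # 0 in the snapshots above (kB of THP)
surp=$(get_meminfo HugePages_Surp)   # 0
resv=$(get_meminfo HugePages_Rsvd)   # 0
echo "anon=$anon surp=$surp resv=$resv"
for f in /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages; do
    echo "$f: $(<"$f")"              # per-node 2 MiB hugepage totals
done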
kB' 00:04:04.484 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.484 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.484 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.484 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.484 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.484 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.484 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.484 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.484 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.485 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.485 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.485 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.485 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.485 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.485 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.485 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.485 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.485 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.485 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.485 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.485 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.485 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.485 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.485 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.485 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.485 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.485 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.485 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.485 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.485 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.485 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.485 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.485 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.485 17:58:57 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:04.485 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.485 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.485 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.485 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.485 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.485 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.485 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.485 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.485 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.485 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.485 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.485 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.485 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.485 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.485 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.485 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.485 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.485 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.485 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.485 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.485 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.485 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.485 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.485 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.485 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.485 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.485 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.485 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.485 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.485 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.485 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.485 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.485 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.485 17:58:57 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:04.485 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.485 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.485 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.485 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.485 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.485 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.485 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.485 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.485 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.485 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.485 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.485 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.485 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.485 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.485 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.485 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.485 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.485 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.485 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.485 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.485 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.485 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.485 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.485 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.485 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.485 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.485 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.485 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.485 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.485 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.485 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.485 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.485 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.485 17:58:57 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.485 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.485 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.485 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.486 
17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:04.486 17:58:57 
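The repeated read, compare, continue churn above is get_meminfo() from the test's setup/common.sh scanning one meminfo snapshot for a single key. Below is a minimal sketch of that scan, reconstructed only from the xtrace entries visible in this log; the real helper may differ in detail (for one, the trace shows it replaying the snapshot through printf '%s\n' before reading it back):

    shopt -s extglob    # needed for the +([0-9]) patterns the trace shows

    get_meminfo() {     # usage: get_meminfo <field> [<numa-node>]
        local get=$1 node=$2 var val _ mem_f mem line
        mem_f=/proc/meminfo
        # With a node argument, read that node's own meminfo instead
        # (the common.sh@23/@24 checks above).
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem <"$mem_f"
        # Per-node meminfo prefixes every line with "Node <n> "; strip it,
        # as common.sh@29 does.
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<<"$line"
            [[ $var == "$get" ]] || continue   # one logged 'continue' per mismatch
            echo "$val"                        # the 'echo 0' / 'echo 1024' lines
            return 0
        done
        return 1
    }

In this run the helper resolves HugePages_Surp and HugePages_Rsvd to 0 and HugePages_Total to 1024, which is why nearly every field of each snapshot appears in the trace only as a skipped comparison.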
00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:04.486 nr_hugepages=1024
00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:04.486 resv_hugepages=0
00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:04.486 surplus_hugepages=0
00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:04.486 anon_hugepages=0
00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:04.486 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:04.487 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381160 kB' 'MemFree: 170359456 kB' 'MemAvailable: 173669424 kB' 'Buffers: 4132 kB' 'Cached: 14865892 kB' 'SwapCached: 0 kB' 'Active: 11714940 kB' 'Inactive: 3710384 kB' 'Active(anon): 11235932 kB' 'Inactive(anon): 0 kB' 'Active(file): 479008 kB' 'Inactive(file): 3710384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 558616 kB' 'Mapped: 206588 kB' 'Shmem: 10680632 kB' 'KReclaimable: 530052 kB' 'Slab: 1165808 kB' 'SReclaimable: 530052 kB' 'SUnreclaim: 635756 kB' 'KernelStack: 20400 kB' 'PageTables: 8572 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 12656844 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315936 kB' 'VmallocChunk: 0 kB' 'Percpu: 94080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3623892 kB' 'DirectMap2M: 45338624 kB' 'DirectMap1G: 153092096 kB'
[xtrace elided: the read, compare, continue cycle scans the snapshot above for \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l, skipping every field from MemTotal through Unaccepted]
00:04:04.488 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:04.488 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:04:04.488 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:04.488 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:04.488 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:04.488 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:04:04.488 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:04.488 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
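With surp=0 and resv=0 established, hugepages.sh@102 through @117 above prints the summary (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0, anon_hugepages=0), asserts the global totals still add up, then sizes up the NUMA nodes for a per-node pass. A hedged paraphrase of that bookkeeping follows; the control flow is approximated from the trace, not copied from hugepages.sh, and nodes_test (the expected per-node split) is assumed to be populated elsewhere in the test:

    # get_meminfo is the helper sketched earlier in this log.
    shopt -s extglob                      # for the node+([0-9]) glob in the trace
    nr_hugepages=1024
    surp=$(get_meminfo HugePages_Surp)    # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)    # 0 in this run
    # Global assertion (hugepages.sh@107/@109/@110): the kernel still reports
    # the full allocation, with no surplus or reserved pages left over.
    (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv )) || exit 1
    # Per-node bookkeeping (hugepages.sh@27-@33): one slot per NUMA node.
    declare -a nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=$(get_meminfo HugePages_Total "${node##*node}")
    done
    echo "no_nodes=${#nodes_sys[@]}"      # 2 on this machine, split 1024/0

The trace that resumes below is the first iteration of that per-node pass: get_meminfo HugePages_Surp 0 switches mem_f to /sys/devices/system/node/node0/meminfo, strips the 'Node 0 ' prefixes, and scans the node-local snapshot the same way.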
00:04:04.488 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:04.488 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:04:04.488 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:04.488 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:04.488 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:04.488 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:04.488 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:04.488 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:04.488 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:04:04.488 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:04.488 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:04.488 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:04.488 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:04.488 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:04.488 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:04.488 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:04.488 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:04.488 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:04.489 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 86759472 kB' 'MemUsed: 10903212 kB' 'SwapCached: 0 kB' 'Active: 7206896 kB' 'Inactive: 251300 kB' 'Active(anon): 7009844 kB' 'Inactive(anon): 0 kB' 'Active(file): 197052 kB' 'Inactive(file): 251300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7013940 kB' 'Mapped: 116068 kB' 'AnonPages: 447440 kB' 'Shmem: 6565588 kB' 'KernelStack: 11688 kB' 'PageTables: 5768 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 368388 kB' 'Slab: 675692 kB' 'SReclaimable: 368388 kB' 'SUnreclaim: 307304 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[xtrace elided: the read, compare, continue cycle scans the node0 snapshot above for \H\u\g\e\P\a\g\e\s\_\S\u\r\p, skipping MemTotal through HugePages_Total; the scan is still in progress at this point in the log]
00:04:04.490 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.490 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:04.490 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:04.490 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:04.490 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:04.490 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:04.490 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:04.490 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:04.490 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:04.490 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:04.490 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:04.490 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:04.490 node0=1024 expecting 1024 00:04:04.490 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:04.490 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:04.490 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:04.490 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:04.490 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:04.490 17:58:57 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:07.020 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:07.021 0000:5f:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:07.021 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:07.021 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:07.021 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:07.021 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:07.021 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:07.021 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:07.021 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:07.021 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:07.021 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:07.021 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:07.021 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:07.021 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:07.021 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:07.021 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:07.021 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:07.021 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:07.286 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:07.286 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:07.286 17:59:00 setup.sh.hugepages.no_shrink_alloc -- 
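
The INFO line above is scripts/setup.sh declining to touch the pool: node0 already holds 1024 hugepages, more than the NRHUGE=512 this test requested. A minimal Bash sketch of that decision (the sysfs path and the 2048 kB page-size directory are assumptions for illustration, not the actual setup.sh code):

    NRHUGE=512
    node_path=/sys/devices/system/node/node0/hugepages/hugepages-2048kB
    allocated=$(< "$node_path/nr_hugepages")   # 1024 on this host
    if (( allocated >= NRHUGE )); then
        # Enough pages already reserved on this node; leave the pool as-is.
        echo "INFO: Requested $NRHUGE hugepages but $allocated already allocated on node0"
    else
        echo "$NRHUGE" > "$node_path/nr_hugepages"   # would need root
    fi

Because the pool is left untouched, the verify_nr_hugepages pass that follows still expects to see 1024 pages.
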
00:04:07.286 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:04:07.286 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:04:07.286 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:07.286 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:07.286 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:07.286 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:07.286 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:07.286 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:07.286 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:07.286 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:07.286 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:07.286 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:07.286 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:07.286 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:07.286 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:07.286 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:07.286 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:07.286 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:07.286 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:07.286 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:07.286 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381160 kB' 'MemFree: 170386280 kB' 'MemAvailable: 173696248 kB' 'Buffers: 4132 kB' 'Cached: 14866024 kB' 'SwapCached: 0 kB' 'Active: 11716916 kB' 'Inactive: 3710384 kB' 'Active(anon): 11237908 kB' 'Inactive(anon): 0 kB' 'Active(file): 479008 kB' 'Inactive(file): 3710384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 560236 kB' 'Mapped: 206796 kB' 'Shmem: 10680764 kB' 'KReclaimable: 530052 kB' 'Slab: 1166232 kB' 'SReclaimable: 530052 kB' 'SUnreclaim: 636180 kB' 'KernelStack: 20448 kB' 'PageTables: 9012 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 12657980 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315952 kB' 'VmallocChunk: 0 kB' 'Percpu: 94080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3623892 kB' 'DirectMap2M: 45338624 kB' 'DirectMap1G: 153092096 kB'
[set -x trace elided: the IFS=': ' read loop continues past every field from MemTotal through HardwareCorrupted, none matching AnonHugePages]
00:04:07.287 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:07.287 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:07.287 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:07.287 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
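
The get_meminfo call traced above amounts to a small field lookup over /proc/meminfo: read the file into an array, strip any per-node "Node <n> " prefix, then split each line on ': ' and return the value of the first matching key. A standalone approximation of what the trace shows (a simplified sketch; the real helper is setup/common.sh's get_meminfo, which this only mirrors):

    shopt -s extglob                        # needed for the +([0-9]) pattern below
    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _ line
        local mem_f=/proc/meminfo mem
        # Per-node meminfo lines carry a "Node <n> " prefix; use that file when
        # a node is given, and strip the prefix before parsing.
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var != "$get" ]] && continue   # the continues seen in the trace
            echo "${val:-0}"
            return 0
        done
        return 1
    }
    # e.g. get_meminfo AnonHugePages  -> prints 0 on this host, hence anon=0 above
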
00:04:07.287 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:07.287 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:07.287 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:07.287 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:07.287 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:07.287 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:07.287 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:07.287 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:07.287 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:07.287 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:07.287 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:07.287 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:07.288 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381160 kB' 'MemFree: 170386836 kB' 'MemAvailable: 173696804 kB' 'Buffers: 4132 kB' 'Cached: 14866028 kB' 'SwapCached: 0 kB' 'Active: 11716744 kB' 'Inactive: 3710384 kB' 'Active(anon): 11237736 kB' 'Inactive(anon): 0 kB' 'Active(file): 479008 kB' 'Inactive(file): 3710384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 560380 kB' 'Mapped: 206720 kB' 'Shmem: 10680768 kB' 'KReclaimable: 530052 kB' 'Slab: 1166192 kB' 'SReclaimable: 530052 kB' 'SUnreclaim: 636140 kB' 'KernelStack: 20464 kB' 'PageTables: 9060 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 12657996 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315952 kB' 'VmallocChunk: 0 kB' 'Percpu: 94080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3623892 kB' 'DirectMap2M: 45338624 kB' 'DirectMap1G: 153092096 kB'
[set -x trace elided: the read loop continues past every field from MemTotal through HugePages_Rsvd, none matching HugePages_Surp]
00:04:07.289 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:07.289 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:07.289 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:07.289 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
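
The snapshot above is internally consistent on the hugepage side: HugePages_Total (1024) times Hugepagesize (2048 kB) is 2097152 kB, exactly the reported Hugetlb value (2 GiB), and HugePages_Free equals the total, so the pool is fully reserved but unused. A quick check, reusing the get_meminfo sketch shown earlier:

    total=$(get_meminfo HugePages_Total)    # 1024
    size_kb=$(get_meminfo Hugepagesize)     # 2048
    echo $(( total * size_kb ))             # 2097152, matches Hugetlb (kB)
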
00:04:07.289 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:07.289 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:07.289 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:07.289 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:07.289 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:07.289 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:07.289 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:07.289 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:07.289 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:07.289 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:07.289 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:07.290 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:07.290 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381160 kB' 'MemFree: 170386676 kB' 'MemAvailable: 173696644 kB' 'Buffers: 4132 kB' 'Cached: 14866048 kB' 'SwapCached: 0 kB' 'Active: 11716576 kB' 'Inactive: 3710384 kB' 'Active(anon): 11237568 kB' 'Inactive(anon): 0 kB' 'Active(file): 479008 kB' 'Inactive(file): 3710384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 560180 kB' 'Mapped: 206720 kB' 'Shmem: 10680788 kB' 'KReclaimable: 530052 kB' 'Slab: 1166176 kB' 'SReclaimable: 530052 kB' 'SUnreclaim: 636124 kB' 'KernelStack: 20464 kB' 'PageTables: 9060 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 12658020 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315952 kB' 'VmallocChunk: 0 kB' 'Percpu: 94080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3623892 kB' 'DirectMap2M: 45338624 kB' 'DirectMap1G: 153092096 kB'
[set -x trace elided: the read loop again continues field by field (MemTotal through Mapped), none matching HugePages_Rsvd; the trace continues past this point]
00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 --
# read -r var val _ 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.291 17:59:00 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.291 
17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:07.291 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:07.292 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:07.292 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:07.292 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:07.292 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:07.292 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:07.292 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:07.292 nr_hugepages=1024 00:04:07.292 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:07.292 resv_hugepages=0 00:04:07.292 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:07.292 surplus_hugepages=0 00:04:07.292 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:07.292 anon_hugepages=0 00:04:07.292 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:07.292 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:07.292 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:07.292 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:07.292 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:07.292 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:07.292 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:07.292 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:07.292 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:07.292 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:07.292 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:07.292 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:07.292 17:59:00 setup.sh.hugepages.no_shrink_alloc -- 
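The scan traced above is the whole of get_meminfo: setup/common.sh snapshots the meminfo source with mapfile, then walks it with IFS=': ' and 'read -r', skipping every key via 'continue' until the requested one matches, and echoes its value. A minimal standalone sketch of that idiom, reconstructed from the xtrace rather than copied from the SPDK source (the function name and the direct file redirect are illustrative):

    # Sketch of the get_meminfo idiom traced above; reconstructed from the
    # xtrace, not taken verbatim from setup/common.sh.
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip keys until the match
            echo "$val"                        # the 'kB' unit, if any, landed in $_
            return 0
        done < /proc/meminfo
        return 1
    }
    # Usage: get_meminfo_sketch HugePages_Total   -> prints 1024 on this host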
00:04:07.292 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381160 kB' 'MemFree: 170387404 kB' 'MemAvailable: 173697372 kB' 'Buffers: 4132 kB' 'Cached: 14866088 kB' 'SwapCached: 0 kB' 'Active: 11716040 kB' 'Inactive: 3710384 kB' 'Active(anon): 11237032 kB' 'Inactive(anon): 0 kB' 'Active(file): 479008 kB' 'Inactive(file): 3710384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 559564 kB' 'Mapped: 206720 kB' 'Shmem: 10680828 kB' 'KReclaimable: 530052 kB' 'Slab: 1166176 kB' 'SReclaimable: 530052 kB' 'SUnreclaim: 636124 kB' 'KernelStack: 20448 kB' 'PageTables: 9008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030608 kB' 'Committed_AS: 12658040 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315952 kB' 'VmallocChunk: 0 kB' 'Percpu: 94080 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3623892 kB' 'DirectMap2M: 45338624 kB' 'DirectMap1G: 153092096 kB'
[... repeated xtrace entries elided: the same setup/common.sh@31-32 key scan, this time skipping every key that is not HugePages_Total ...]
00:04:07.293 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:07.293 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:04:07.293 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:07.293 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:07.293 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:07.293 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:04:07.294 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:07.294 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:07.294 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:07.294 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:04:07.294 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:07.294 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:07.294 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:07.294 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:07.294 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:07.294 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:07.294 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:04:07.294 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:07.294 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:07.294 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:07.294 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:07.294 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:07.294 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:07.294 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:07.294 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:07.294 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
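When get_meminfo is called with a node argument (HugePages_Surp 0 above), common.sh@23-24 swaps the source file for /sys/devices/system/node/node0/meminfo. Lines in the per-node file carry a 'Node 0 ' prefix that the plain /proc/meminfo parser would trip over, which is what the mem=("${mem[@]#Node +([0-9]) }") expansion strips; +([0-9]) is an extglob pattern, so the script relies on extglob being enabled. A sketch of that source selection, reconstructed from the trace (the variable names match the xtrace, the rest is illustrative):

    # Per-node meminfo source selection, as traced at setup/common.sh@22-29.
    shopt -s extglob                       # +([0-9]) below needs extglob
    node=0
    mem_f=/proc/meminfo
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Node files read 'Node 0 HugePages_Total: 1024'; drop the 'Node <n> '
    # prefix so both sources parse identically.
    mem=("${mem[@]#Node +([0-9]) }")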
00:04:07.294 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 86742520 kB' 'MemUsed: 10920164 kB' 'SwapCached: 0 kB' 'Active: 7207324 kB' 'Inactive: 251300 kB' 'Active(anon): 7010272 kB' 'Inactive(anon): 0 kB' 'Active(file): 197052 kB' 'Inactive(file): 251300 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7013940 kB' 'Mapped: 116068 kB' 'AnonPages: 447932 kB' 'Shmem: 6565588 kB' 'KernelStack: 11704 kB' 'PageTables: 5868 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 368388 kB' 'Slab: 676204 kB' 'SReclaimable: 368388 kB' 'SUnreclaim: 307816 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[... repeated xtrace entries elided: the setup/common.sh@31-32 key scan runs over node0's meminfo until HugePages_Surp matches ...]
00:04:07.295 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:07.295 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:07.295 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:07.295 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:07.295 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:07.295 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:07.295 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:07.295 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:07.295 node0=1024 expecting 1024
00:04:07.295 17:59:00 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:07.295 
00:04:07.295 real 0m5.741s
00:04:07.295 user 0m2.233s
00:04:07.295 sys 0m3.607s
00:04:07.295 17:59:00 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:07.295 17:59:00 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:07.295 ************************************
00:04:07.295 END TEST no_shrink_alloc
00:04:07.295 ************************************
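The pass condition no_shrink_alloc just logged reduces to simple bookkeeping: the global pool (HugePages_Total: 1024, with Rsvd and Surp both 0) has to equal the sum of the per-node pools, and node0 must still hold everything the test assigned it, hence 'node0=1024 expecting 1024'. Restated with this run's numbers (the names below are illustrative, not the hugepages.sh internals):

    # This run's accounting: 1024 == nr_hugepages + surp + resv, all on node0.
    nr_hugepages=1024 resv=0 surp=0
    declare -A nodes_test=([0]=1024 [1]=0)   # per-node pools read back above
    (( 1024 == nr_hugepages + surp + resv )) || echo 'global pool mismatch'
    (( nodes_test[0] == 1024 )) && echo 'node0=1024 expecting 1024'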
00:04:07.295 17:59:00 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp
00:04:07.295 17:59:00 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:04:07.295 17:59:00 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:04:07.295 17:59:00 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:07.295 17:59:00 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:04:07.295 17:59:00 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:07.295 17:59:00 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:04:07.295 17:59:00 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:04:07.295 17:59:00 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:07.295 17:59:00 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:04:07.295 17:59:00 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:07.295 17:59:00 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:04:07.295 17:59:00 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:04:07.295 17:59:00 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:04:07.295
00:04:07.295 real 0m21.998s
00:04:07.295 user 0m8.247s
00:04:07.295 sys 0m12.771s
00:04:07.552 17:59:00 setup.sh.hugepages -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:07.552 17:59:00 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:07.552 ************************************
00:04:07.552 END TEST hugepages
00:04:07.552 ************************************
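The clear_hp teardown traced just before this banner zeroes every hugepage pool so the following suites start clean. A hedged sketch of that loop (the nr_hugepages redirect target is implied by the sysfs layout, not shown in the xtrace):

for node in /sys/devices/system/node/node[0-9]*; do
  for hp in "$node"/hugepages/hugepages-*; do
    echo 0 > "$hp/nr_hugepages"   # one write per page size per node, as traced
  done
done
export CLEAR_HUGE=yes             # tells follow-on scripts the pools were drained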
00:04:07.552 17:59:00 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh
00:04:07.552 17:59:00 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:07.552 17:59:00 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:07.552 17:59:00 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:04:07.552 ************************************
00:04:07.552 START TEST driver
00:04:07.552 ************************************
00:04:07.552 17:59:00 setup.sh.driver -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh
00:04:07.552 * Looking for test storage...
00:04:07.552 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:04:07.552 17:59:00 setup.sh.driver -- setup/driver.sh@68 -- # setup reset
00:04:07.552 17:59:00 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:07.552 17:59:00 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:04:11.737 17:59:04 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver
00:04:11.737 17:59:04 setup.sh.driver -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:11.737 17:59:04 setup.sh.driver -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:11.737 17:59:04 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x
00:04:11.737 ************************************
00:04:11.737 START TEST guess_driver
00:04:11.737 ************************************
00:04:11.737 17:59:04 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # guess_driver
00:04:11.737 17:59:04 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker
00:04:11.737 17:59:04 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0
00:04:11.737 17:59:04 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver
00:04:11.737 17:59:04 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio
00:04:11.737 17:59:04 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups
00:04:11.737 17:59:04 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio
00:04:11.737 17:59:04 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]
00:04:11.737 17:59:04 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N
00:04:11.737 17:59:04 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*)
00:04:11.737 17:59:04 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 174 > 0 ))
00:04:11.737 17:59:04 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci
00:04:11.737 17:59:04 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci
00:04:11.737 17:59:04 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci
00:04:11.737 17:59:04 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci
00:04:11.737 17:59:04 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz
00:04:11.737 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz
00:04:11.737 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz
00:04:11.737 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz
00:04:11.737 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz
00:04:11.737 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz
00:04:11.737 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz
00:04:11.737 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]]
00:04:11.737 17:59:04 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0
00:04:11.737 17:59:04 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci
00:04:11.737 17:59:04 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci
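pick_driver settled on vfio-pci here because the host exposes 174 IOMMU groups and modprobe resolves the vfio_pci module chain. A rough sketch of that decision, under the assumption that the fallback string matches the one tested at driver.sh@51:

pick_driver() {
  shopt -s nullglob
  local groups=(/sys/kernel/iommu_groups/*) unsafe=N
  [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
    unsafe=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
  if ((${#groups[@]} > 0)) || [[ $unsafe == Y ]]; then
    # is_driver: the dependency chain must resolve to real .ko objects
    modprobe --show-depends vfio_pci | grep -q '\.ko' && { echo vfio-pci; return; }
  fi
  echo 'No valid driver found'
}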
00:04:11.737 17:59:04 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]]
00:04:11.737 17:59:04 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci'
00:04:11.737 Looking for driver=vfio-pci
00:04:11.737 17:59:04 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:04:11.737 17:59:04 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config
00:04:11.737 17:59:04 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]]
00:04:11.737 17:59:04 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:04:14.267 [... setup/driver.sh@57-61 trace repeats at 17:59:06 for each device line printed by setup.sh config: marker "->" matched, setup_driver vfio-pci confirmed ...]
00:04:15.204 17:59:08 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:04:15.204 17:59:08 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:04:15.205 17:59:08 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:04:15.462 17:59:08 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 ))
00:04:15.462 17:59:08 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset
00:04:15.462 17:59:08 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:15.462 17:59:08 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:04:19.676
00:04:19.676 real 0m8.083s
00:04:19.676 user 0m2.135s
00:04:19.676 sys 0m3.899s
00:04:19.676 17:59:12 setup.sh.driver.guess_driver -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:19.676 17:59:12 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x
00:04:19.676 ************************************
00:04:19.676 END TEST guess_driver
00:04:19.676 ************************************
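The @57/@58/@61 triplets collapsed above are guess_driver confirming, device by device, that setup.sh bound everything to the chosen driver. A hedged sketch of that check loop; the field layout of the config output is assumed from the "read -r _ _ _ _ marker setup_driver" trace:

fail=0 driver=vfio-pci
while read -r _ _ _ _ marker setup_driver; do
  [[ $marker == '->' ]] || continue          # only binding lines carry the arrow
  [[ $setup_driver == "$driver" ]] || fail=1 # any other driver fails the test
done < <("$rootdir/scripts/setup.sh" config)  # $rootdir assumed to point at the spdk checkout
(( fail == 0 )) && echo "all devices on $driver"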
00:04:19.676
00:04:19.676 real 0m12.024s
00:04:19.676 user 0m3.297s
00:04:19.676 sys 0m5.922s
00:04:19.676 17:59:12 setup.sh.driver -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:19.676 17:59:12 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x
00:04:19.676 ************************************
00:04:19.676 END TEST driver
00:04:19.676 ************************************
00:04:19.676 17:59:12 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh
00:04:19.676 17:59:12 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:19.676 17:59:12 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:19.676 17:59:12 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:04:19.676 ************************************
00:04:19.676 START TEST devices
00:04:19.676 ************************************
00:04:19.676 17:59:12 setup.sh.devices -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh
00:04:19.676 * Looking for test storage...
00:04:19.676 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:04:19.676 17:59:12 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT
00:04:19.676 17:59:12 setup.sh.devices -- setup/devices.sh@192 -- # setup reset
00:04:19.676 17:59:12 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:19.676 17:59:12 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:04:22.957 17:59:15 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs
00:04:22.957 17:59:15 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=()
00:04:22.957 17:59:15 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs
00:04:22.957 17:59:15 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf
00:04:22.957 17:59:15 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme*
00:04:22.957 17:59:15 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1
00:04:22.957 17:59:15 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1
00:04:22.957 17:59:15 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:04:22.957 17:59:15 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]]
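get_zoned_devs, traced above, filters out zoned namespaces before the mount tests pick a disk. A minimal sketch of the sysfs check it performs:

declare -A zoned_devs=()
for nvme in /sys/block/nvme*; do
  [[ -e $nvme/queue/zoned ]] || continue
  # "none" means a regular (non-zoned) namespace; anything else is excluded
  [[ $(< "$nvme/queue/zoned") != none ]] && zoned_devs[${nvme##*/}]=1
done
echo "zoned devices: ${!zoned_devs[*]:-(none)}"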
00:04:22.957 17:59:15 setup.sh.devices -- setup/devices.sh@196 -- # blocks=()
00:04:22.957 17:59:15 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks
00:04:22.957 17:59:15 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=()
00:04:22.957 17:59:15 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci
00:04:22.957 17:59:15 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472
00:04:22.957 17:59:15 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*)
00:04:22.957 17:59:15 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1
00:04:22.957 17:59:15 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0
00:04:22.957 17:59:15 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:5f:00.0
00:04:22.957 17:59:15 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\5\f\:\0\0\.\0* ]]
00:04:22.957 17:59:15 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1
00:04:22.957 17:59:15 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt
00:04:22.957 17:59:15 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
00:04:22.957 No valid GPT data, bailing
00:04:22.957 17:59:15 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:04:22.957 17:59:15 setup.sh.devices -- scripts/common.sh@391 -- # pt=
00:04:22.957 17:59:15 setup.sh.devices -- scripts/common.sh@392 -- # return 1
00:04:22.957 17:59:15 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1
00:04:22.957 17:59:15 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1
00:04:22.957 17:59:15 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]]
00:04:22.957 17:59:15 setup.sh.devices -- setup/common.sh@80 -- # echo 1600321314816
00:04:22.957 17:59:15 setup.sh.devices -- setup/devices.sh@204 -- # (( 1600321314816 >= min_disk_size ))
00:04:22.957 17:59:15 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}")
00:04:22.957 17:59:15 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:5f:00.0
00:04:22.957 17:59:15 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 ))
00:04:22.957 17:59:15 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1
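The disk gate traced above admits nvme0n1 because it carries no partition-table signature and its 1600321314816 bytes (sysfs sector count times 512) clear the 3 GiB floor. A hedged sketch of both checks:

block=nvme0n1
min_disk_size=$((3 * 1024 * 1024 * 1024))          # 3221225472, as traced
pt=$(blkid -s PTTYPE -o value "/dev/$block" || true)  # empty when no GPT/MBR exists
size=$(( $(< "/sys/block/$block/size") * 512 ))       # sysfs counts 512-byte sectors
if [[ -z $pt ]] && (( size >= min_disk_size )); then
  echo "using /dev/$block ($size bytes)"
fi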
00:04:22.957 17:59:15 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount
00:04:22.957 17:59:15 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:22.957 17:59:15 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:22.957 17:59:15 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x
00:04:22.957 ************************************
00:04:22.957 START TEST nvme_mount
00:04:22.957 ************************************
00:04:22.957 17:59:15 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # nvme_mount
00:04:22.957 17:59:15 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1
00:04:22.957 17:59:15 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1
00:04:22.957 17:59:15 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:04:22.957 17:59:15 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:04:22.957 17:59:15 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1
00:04:22.957 17:59:15 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1
00:04:22.957 17:59:15 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1
00:04:22.957 17:59:15 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824
00:04:22.957 17:59:15 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0
00:04:22.957 17:59:15 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=()
00:04:22.957 17:59:15 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts
00:04:22.957 17:59:15 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 ))
00:04:22.957 17:59:15 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:04:22.957 17:59:15 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part")
00:04:22.957 17:59:15 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ ))
00:04:22.957 17:59:15 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:04:22.957 17:59:15 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 ))
00:04:22.957 17:59:15 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1
00:04:22.957 17:59:15 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all
00:04:23.890 Creating new GPT entries in memory.
00:04:23.890 GPT data structures destroyed! You may now partition the disk using fdisk or
00:04:23.890 other utilities.
00:04:23.890 17:59:16 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 ))
00:04:23.890 17:59:16 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:04:23.890 17:59:16 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:04:23.890 17:59:16 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:04:23.890 17:59:16 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199
00:04:24.823 Creating new GPT entries in memory.
00:04:24.823 The operation has completed successfully.
00:04:24.823 17:59:17 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ ))
00:04:24.823 17:59:17 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:04:24.823 17:59:17 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 3211573
00:04:24.823 17:59:17 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:04:24.823 17:59:17 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=
00:04:24.823 17:59:17 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:04:24.823 17:59:17 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]]
00:04:24.823 17:59:17 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1
00:04:24.823 17:59:17 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
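The sequence just traced is the core of nvme_mount: wipe the GPT, lay down one 1 GiB partition (sectors 2048..2099199 = 2097152 sectors of 512 bytes), format and mount it. Condensed from the traced commands:

disk=/dev/nvme0n1
mnt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
sgdisk "$disk" --zap-all                           # destroy any existing GPT
flock "$disk" sgdisk "$disk" --new=1:2048:2099199  # partition 1, exactly 1 GiB
mkdir -p "$mnt"
mkfs.ext4 -qF "${disk}p1"
mount "${disk}p1" "$mnt"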
00:04:24.823 17:59:17 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:5f:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:04:24.823 17:59:17 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5f:00.0
00:04:24.823 17:59:17 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1
00:04:24.823 17:59:17 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:04:24.823 17:59:17 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:04:24.823 17:59:17 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0
00:04:24.823 17:59:17 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:04:24.823 17:59:17 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # :
00:04:24.823 17:59:17 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status
00:04:24.823 17:59:17 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:24.823 17:59:17 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5f:00.0
00:04:24.823 17:59:17 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config
00:04:24.823 17:59:17 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:04:24.823 17:59:17 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:04:27.346 17:59:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5f:00.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]]
00:04:27.346 17:59:20 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]]
00:04:27.346 17:59:20 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1
00:04:27.346 17:59:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:27.604 [... setup/devices.sh@60/@62 trace repeats for 0000:00:04.7-0000:00:04.0 and 0000:80:04.7-0000:80:04.0, none matching 0000:5f:00.0 ...]
00:04:27.604 17:59:20 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:04:27.604 17:59:20 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]]
00:04:27.604 17:59:20 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:04:27.604 17:59:20 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:04:27.604 17:59:20 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:04:27.604 17:59:20 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme
00:04:27.604 17:59:20 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:04:27.604 17:59:20 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:04:27.604 17:59:20 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:04:27.604 17:59:20 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1
00:04:27.604 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:04:27.604 17:59:20 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:04:27.604 17:59:20 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:04:27.862 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
00:04:27.862 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54
00:04:27.862 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:04:27.862 /dev/nvme0n1: calling ioctl to re-read partition table: Success
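cleanup_nvme, traced above, is the inverse of the mount step: unmount, then scrub signatures bottom-up. The "53 ef" wipefs reports are the little-endian ext4 superblock magic 0xEF53 at offset 0x438; the gpt erasures are the primary and backup GPT headers plus the protective MBR. Condensed:

mnt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
mountpoint -q "$mnt" && umount "$mnt"
[[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1  # drop the ext4 signature
[[ -b /dev/nvme0n1 ]] && wipefs --all /dev/nvme0n1      # drop GPT, backup GPT and PMBR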
00:04:27.862 17:59:20 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M
00:04:27.862 17:59:20 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M
00:04:27.862 17:59:20 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:04:27.862 17:59:20 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]]
00:04:27.862 17:59:20 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M
00:04:28.121 17:59:20 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
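This second pass formats the raw namespace with no partition table at all; mke2fs accepts an optional size argument, so only the first 1024 MiB becomes a filesystem. Condensed from the trace:

disk=/dev/nvme0n1
mnt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
mkdir -p "$mnt"
mkfs.ext4 -qF "$disk" 1024M   # size-limited filesystem directly on the disk
mount "$disk" "$mnt"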
00:04:28.121 17:59:20 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:5f:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:04:28.121 17:59:20 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5f:00.0
00:04:28.121 17:59:20 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1
00:04:28.121 17:59:20 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:04:28.121 17:59:20 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:04:28.121 17:59:20 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0
00:04:28.121 17:59:20 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:04:28.121 17:59:20 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # :
00:04:28.121 17:59:20 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status
00:04:28.121 17:59:20 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:28.121 17:59:20 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5f:00.0
00:04:28.121 17:59:20 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config
00:04:28.121 17:59:20 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:04:28.121 17:59:20 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:04:30.651 17:59:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5f:00.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]]
00:04:30.651 17:59:23 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]]
00:04:30.651 17:59:23 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1
00:04:30.651 17:59:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:30.651 [... setup/devices.sh@60/@62 trace repeats for 0000:00:04.7-0000:00:04.0 and 0000:80:04.7-0000:80:04.0, none matching 0000:5f:00.0 ...]
00:04:30.909 17:59:23 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:04:30.909 17:59:23 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]]
00:04:30.909 17:59:23 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:04:30.909 17:59:23 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:04:30.909 17:59:23 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:04:30.909 17:59:23 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:04:30.909 17:59:23 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:5f:00.0 data@nvme0n1 '' ''
00:04:30.909 17:59:23 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5f:00.0
00:04:30.909 17:59:23 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1
00:04:30.909 17:59:23 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=
00:04:30.909 17:59:23 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=
00:04:30.909 17:59:23 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0
00:04:30.909 17:59:23 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]]
00:04:30.909 17:59:23 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status
00:04:30.909 17:59:23 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:30.909 17:59:23 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5f:00.0
00:04:30.909 17:59:23 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config
00:04:30.909 17:59:23 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:04:30.909 17:59:23 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:04:33.442 17:59:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5f:00.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]]
00:04:33.442 17:59:26 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]]
00:04:33.442 17:59:26 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1
00:04:33.442 17:59:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:33.442 [... setup/devices.sh@60/@62 trace repeats for 0000:00:04.7-0000:00:04.0 and 0000:80:04.7-0000:80:04.0, none matching 0000:5f:00.0 ...]
00:04:33.702 17:59:26 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:04:33.702 17:59:26 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]]
00:04:33.702 17:59:26 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0
00:04:33.702 17:59:26 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme
00:04:33.702 17:59:26 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:04:33.702 17:59:26 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:04:33.702 17:59:26 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:04:33.702 17:59:26 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:04:33.702 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:04:33.702
00:04:33.702 real 0m10.814s
00:04:33.702 user 0m3.203s
00:04:33.702 sys 0m5.444s
00:04:33.702 17:59:26 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:33.702 17:59:26 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x
00:04:33.702 ************************************
00:04:33.702 END TEST nvme_mount
00:04:33.702 ************************************
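Every verify call in this suite works the same way: run setup.sh config with PCI_ALLOWED pinned to the test disk and scan its per-device lines for the expected "Active devices: ..." tags. A hedged sketch (field layout assumed from the "read -r pci _ _ status" trace):

verify() { # verify <pci-addr> <expected-tags>
  local dev=$1 mounts=$2 pci status found=0
  while read -r pci _ _ status; do
    [[ $pci == "$dev" ]] || continue
    [[ $status == *"Active devices: "*"$mounts"* ]] && found=1
  done < <(PCI_ALLOWED="$dev" "$rootdir/scripts/setup.sh" config)  # $rootdir assumed
  (( found == 1 ))
}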
00:04:33.702 17:59:26 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount
00:04:33.702 17:59:26 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:33.702 17:59:26 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:33.702 17:59:26 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x
00:04:33.702 ************************************
00:04:33.702 START TEST dm_mount
00:04:33.702 ************************************
00:04:33.702 17:59:26 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # dm_mount
00:04:33.702 17:59:26 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1
00:04:33.702 17:59:26 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1
00:04:33.702 17:59:26 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2
00:04:33.702 17:59:26 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1
00:04:33.702 17:59:26 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1
00:04:33.702 17:59:26 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2
00:04:33.702 17:59:26 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824
00:04:33.702 17:59:26 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0
00:04:33.702 17:59:26 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=()
00:04:33.702 17:59:26 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts
00:04:33.702 17:59:26 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 ))
00:04:33.702 17:59:26 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:04:33.702 17:59:26 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part")
00:04:33.702 17:59:26 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ ))
00:04:33.702 17:59:26 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:04:33.702 17:59:26 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part")
00:04:33.702 17:59:26 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ ))
00:04:33.702 17:59:26 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:04:33.702 17:59:26 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 ))
00:04:33.702 17:59:26 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all
00:04:33.702 17:59:26 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2
00:04:34.636 Creating new GPT entries in memory.
00:04:34.636 GPT data structures destroyed! You may now partition the disk using fdisk or
00:04:34.636 other utilities.
00:04:34.636 17:59:27 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 ))
00:04:34.636 17:59:27 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:04:34.636 17:59:27 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:04:34.636 17:59:27 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:04:34.636 17:59:27 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199
00:04:36.010 Creating new GPT entries in memory.
00:04:36.010 The operation has completed successfully.
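dm_mount repeats the partitioning with part_no=2; the arithmetic traced at common.sh@58-59 packs each 1 GiB partition directly behind the previous one, which is exactly where the ranges 2048:2099199 and 2099200:4196351 come from:

disk=/dev/nvme0n1 part_no=2
size=$((1073741824 / 512))      # 2097152 sectors per partition
part_start=0 part_end=0
for ((part = 1; part <= part_no; part++)); do
  (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
  (( part_end = part_start + size - 1 ))
  flock "$disk" sgdisk "$disk" "--new=$part:$part_start:$part_end"
done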
00:04:36.010 17:59:28 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ ))
00:04:36.010 17:59:28 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:04:36.010 17:59:28 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:04:36.010 17:59:28 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:04:36.010 17:59:28 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351
00:04:36.944 The operation has completed successfully.
00:04:36.944 17:59:29 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ ))
00:04:36.944 17:59:29 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:04:36.944 17:59:29 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 3215665
00:04:36.944 17:59:29 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test
00:04:36.944 17:59:29 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:04:36.944 17:59:29 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:04:36.944 17:59:29 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test
00:04:36.944 17:59:29 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5}
00:04:36.944 17:59:29 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:04:36.944 17:59:29 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break
00:04:36.944 17:59:29 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:04:36.944 17:59:29 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test
00:04:36.944 17:59:29 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-2
00:04:36.944 17:59:29 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-2
00:04:36.944 17:59:29 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-2 ]]
00:04:36.944 17:59:29 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-2 ]]
00:04:36.944 17:59:29 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:04:36.944 17:59:29 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size=
00:04:36.944 17:59:29 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:04:36.944 17:59:29 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:04:36.944 17:59:29 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test
00:04:36.944 17:59:29 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
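The dmsetup table itself is not echoed into the log; a plausible reconstruction is a linear target concatenating the two 1 GiB partitions. The table below is an assumption, only the create/readlink/holders steps are actually traced:

dmsetup create nvme_dm_test <<'TABLE'
0 2097152 linear /dev/nvme0n1p1 0
2097152 2097152 linear /dev/nvme0n1p2 0
TABLE
dm=$(readlink -f /dev/mapper/nvme_dm_test)  # resolves to /dev/dm-2 on this host
dm=${dm##*/}
[[ -e /sys/class/block/nvme0n1p1/holders/$dm ]]  # both partitions now list dm-2
[[ -e /sys/class/block/nvme0n1p2/holders/$dm ]]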
00:04:36.944 17:59:29 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:5f:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:04:36.944 17:59:29 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5f:00.0
00:04:36.944 17:59:29 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test
00:04:36.944 17:59:29 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:04:36.944 17:59:29 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:04:36.944 17:59:29 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0
00:04:36.944 17:59:29 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]]
00:04:36.944 17:59:29 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # :
00:04:36.944 17:59:29 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status
00:04:36.944 17:59:29 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:36.944 17:59:29 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5f:00.0
00:04:36.944 17:59:29 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config
00:04:36.944 17:59:29 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:04:36.944 17:59:29 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:04:39.479 17:59:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5f:00.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]]
00:04:39.479 17:59:32 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]]
00:04:39.479 17:59:32 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1
00:04:39.479 17:59:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:39.479 [... setup/devices.sh@60/@62 trace repeats for 0000:00:04.7-0000:00:04.0 and 0000:80:04.7-0000:80:04.0, none matching 0000:5f:00.0 ...]
00:04:39.739 17:59:32 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:04:39.739 17:59:32 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]]
00:04:39.739 17:59:32 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:04:39.739 17:59:32 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]]
00:04:39.739 17:59:32 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:04:39.739 17:59:32 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:04:39.739 17:59:32 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:5f:00.0 holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 '' ''
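After the umount the device-mapper node still holds both partitions, which is what this final verify asserts through the holder@...:dm-2 tags. A short sketch of the sysfs relationship being checked (the dmsetup remove is an assumed teardown step that runs after this excerpt):

dm=$(readlink -f /dev/mapper/nvme_dm_test); dm=${dm##*/}
for part in nvme0n1p1 nvme0n1p2; do
  # while the mapping exists, each backing partition exposes the dm node here
  [[ -e /sys/class/block/$part/holders/$dm ]] && echo "$part held by $dm"
done
dmsetup remove nvme_dm_test   # assumed cleanup, not shown in this excerpt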
setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:39.739 17:59:32 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:39.739 17:59:32 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:39.739 17:59:32 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:39.739 17:59:32 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:39.739 17:59:32 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5f:00.0 00:04:39.739 17:59:32 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:39.739 17:59:32 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:39.739 17:59:32 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:42.280 17:59:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5f:00.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:42.280 17:59:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\2\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\2* ]] 00:04:42.280 17:59:35 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:42.280 17:59:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.280 17:59:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:42.280 17:59:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.280 17:59:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:42.280 17:59:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.280 17:59:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:42.280 17:59:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.280 17:59:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:42.280 17:59:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.280 17:59:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:42.280 17:59:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.280 17:59:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:42.280 17:59:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.280 17:59:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:42.280 17:59:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.280 17:59:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:42.280 17:59:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.280 17:59:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:42.281 17:59:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.281 17:59:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # 
[[ 0000:80:04.6 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:42.281 17:59:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.281 17:59:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:42.281 17:59:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.281 17:59:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:42.281 17:59:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.281 17:59:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:42.281 17:59:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.281 17:59:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:42.281 17:59:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.281 17:59:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:42.281 17:59:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.281 17:59:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\f\:\0\0\.\0 ]] 00:04:42.281 17:59:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:42.540 17:59:35 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:42.540 17:59:35 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:42.540 17:59:35 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:42.540 17:59:35 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:42.540 17:59:35 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:42.540 17:59:35 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:42.540 17:59:35 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:42.540 17:59:35 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:42.540 17:59:35 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:42.540 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:42.540 17:59:35 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:42.540 17:59:35 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:42.540 00:04:42.540 real 0m8.915s 00:04:42.540 user 0m2.204s 00:04:42.540 sys 0m3.767s 00:04:42.540 17:59:35 setup.sh.devices.dm_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:42.540 17:59:35 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:42.540 ************************************ 00:04:42.540 END TEST dm_mount 00:04:42.540 ************************************ 00:04:42.540 17:59:35 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:42.540 17:59:35 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:42.540 17:59:35 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:42.540 17:59:35 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:42.540 17:59:35 
setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:42.540 17:59:35 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:42.540 17:59:35 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:42.798 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:42.798 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 00:04:42.798 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:42.798 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:42.798 17:59:35 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:42.798 17:59:35 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:42.798 17:59:35 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:42.798 17:59:35 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:42.798 17:59:35 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:42.798 17:59:35 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:42.798 17:59:35 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:42.798 00:04:42.798 real 0m23.334s 00:04:42.798 user 0m6.680s 00:04:42.798 sys 0m11.420s 00:04:42.798 17:59:35 setup.sh.devices -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:42.798 17:59:35 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:42.798 ************************************ 00:04:42.798 END TEST devices 00:04:42.798 ************************************ 00:04:43.056 00:04:43.056 real 1m17.637s 00:04:43.056 user 0m24.871s 00:04:43.056 sys 0m41.799s 00:04:43.056 17:59:35 setup.sh -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:43.056 17:59:35 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:43.056 ************************************ 00:04:43.056 END TEST setup.sh 00:04:43.056 ************************************ 00:04:43.056 17:59:35 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:45.011 Hugepages 00:04:45.011 node hugesize free / total 00:04:45.011 node0 1048576kB 0 / 0 00:04:45.011 node0 2048kB 2048 / 2048 00:04:45.011 node1 1048576kB 0 / 0 00:04:45.011 node1 2048kB 0 / 0 00:04:45.011 00:04:45.011 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:45.011 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:45.011 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:45.270 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:45.270 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:45.270 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:45.270 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:45.270 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:45.270 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:45.270 NVMe 0000:5f:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:04:45.270 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:45.270 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:45.270 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:45.270 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:45.270 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:45.270 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:45.270 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:45.270 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:45.270 17:59:38 -- spdk/autotest.sh@130 -- # uname -s 00:04:45.270 17:59:38 -- 
spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:45.270 17:59:38 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:45.270 17:59:38 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:47.858 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:47.858 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:47.858 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:47.858 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:47.858 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:47.858 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:47.858 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:47.858 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:48.116 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:48.116 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:48.116 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:48.116 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:48.116 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:48.116 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:48.116 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:48.116 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:49.492 0000:5f:00.0 (8086 0a54): nvme -> vfio-pci 00:04:49.492 17:59:42 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:50.868 17:59:43 -- common/autotest_common.sh@1533 -- # bdfs=() 00:04:50.868 17:59:43 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:50.868 17:59:43 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:50.868 17:59:43 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:50.868 17:59:43 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:50.868 17:59:43 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:50.868 17:59:43 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:50.868 17:59:43 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:50.868 17:59:43 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:50.868 17:59:43 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:50.868 17:59:43 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5f:00.0 00:04:50.868 17:59:43 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:53.399 Waiting for block devices as requested 00:04:53.399 0000:5f:00.0 (8086 0a54): vfio-pci -> nvme 00:04:53.399 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:53.399 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:53.658 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:53.658 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:53.658 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:53.658 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:53.916 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:53.916 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:53.916 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:53.916 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:54.175 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:54.175 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:54.175 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:54.434 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:54.434 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:54.434 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:54.692 17:59:47 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 
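The get_nvme_bdfs helper traced above builds its device list by piping scripts/gen_nvme.sh into jq and reading each params.traddr field as a PCI address. A minimal standalone sketch of the same enumeration, assuming $rootdir points at an SPDK checkout (the workspace path is the one used in this run):

  # Enumerate NVMe PCI addresses the way get_nvme_bdfs does
  rootdir=${rootdir:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  ((${#bdfs[@]} > 0)) || { echo "no NVMe controllers found" >&2; exit 1; }
  printf '%s\n' "${bdfs[@]}"    # prints 0000:5f:00.0 on this node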
00:04:54.692 17:59:47 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:5f:00.0 00:04:54.692 17:59:47 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:04:54.692 17:59:47 -- common/autotest_common.sh@1502 -- # grep 0000:5f:00.0/nvme/nvme 00:04:54.692 17:59:47 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:03.0/0000:5f:00.0/nvme/nvme0 00:04:54.692 17:59:47 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:03.0/0000:5f:00.0/nvme/nvme0 ]] 00:04:54.692 17:59:47 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:5d/0000:5d:03.0/0000:5f:00.0/nvme/nvme0 00:04:54.692 17:59:47 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:04:54.692 17:59:47 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:04:54.692 17:59:47 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:04:54.692 17:59:47 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:04:54.692 17:59:47 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:54.692 17:59:47 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:54.692 17:59:47 -- common/autotest_common.sh@1545 -- # oacs=' 0xe' 00:04:54.692 17:59:47 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:54.692 17:59:47 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:54.692 17:59:47 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:04:54.692 17:59:47 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:54.692 17:59:47 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:54.692 17:59:47 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:54.693 17:59:47 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:54.693 17:59:47 -- common/autotest_common.sh@1557 -- # continue 00:04:54.693 17:59:47 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:54.693 17:59:47 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:54.693 17:59:47 -- common/autotest_common.sh@10 -- # set +x 00:04:54.693 17:59:47 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:54.693 17:59:47 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:54.693 17:59:47 -- common/autotest_common.sh@10 -- # set +x 00:04:54.693 17:59:47 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:57.219 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:57.219 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:57.219 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:57.219 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:57.219 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:57.219 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:57.219 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:57.219 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:57.219 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:57.219 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:57.219 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:57.219 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:57.219 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:57.219 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:57.219 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:57.219 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:58.609 0000:5f:00.0 (8086 0a54): nvme -> vfio-pci 00:04:58.609 17:59:51 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:58.609 17:59:51 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:58.609 17:59:51 -- 
common/autotest_common.sh@10 -- # set +x 00:04:58.867 17:59:51 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:58.867 17:59:51 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:04:58.867 17:59:51 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:04:58.867 17:59:51 -- common/autotest_common.sh@1577 -- # bdfs=() 00:04:58.867 17:59:51 -- common/autotest_common.sh@1577 -- # local bdfs 00:04:58.867 17:59:51 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:04:58.867 17:59:51 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:58.867 17:59:51 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:58.867 17:59:51 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:58.867 17:59:51 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:58.867 17:59:51 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:58.867 17:59:51 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:58.867 17:59:51 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5f:00.0 00:04:58.867 17:59:51 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:58.867 17:59:51 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:5f:00.0/device 00:04:58.867 17:59:51 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:04:58.867 17:59:51 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:58.867 17:59:51 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:04:58.867 17:59:51 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:5f:00.0 00:04:58.867 17:59:51 -- common/autotest_common.sh@1592 -- # [[ -z 0000:5f:00.0 ]] 00:04:58.867 17:59:51 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=3224571 00:04:58.867 17:59:51 -- common/autotest_common.sh@1598 -- # waitforlisten 3224571 00:04:58.867 17:59:51 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:58.867 17:59:51 -- common/autotest_common.sh@831 -- # '[' -z 3224571 ']' 00:04:58.867 17:59:51 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:58.867 17:59:51 -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:58.867 17:59:51 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:58.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:58.867 17:59:51 -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:58.867 17:59:51 -- common/autotest_common.sh@10 -- # set +x 00:04:58.867 [2024-07-24 17:59:51.855892] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 
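get_nvme_bdfs_by_id, entered just above, then filters that list by PCI device ID: the trace reads /sys/bus/pci/devices/0000:5f:00.0/device, sees 0x0a54, and keeps the BDF. A hedged standalone equivalent, reusing the bdfs array from the previous sketch:

  # Keep only controllers whose PCI device ID matches the target
  target=0x0a54                  # the device ID matched in this run
  matched=()
  for bdf in "${bdfs[@]}"; do
      [[ $(cat "/sys/bus/pci/devices/$bdf/device") == "$target" ]] && matched+=("$bdf")
  done
  printf '%s\n' "${matched[@]}"  # 0000:5f:00.0 here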
00:04:58.867 [2024-07-24 17:59:51.855938] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3224571 ] 00:04:58.867 EAL: No free 2048 kB hugepages reported on node 1 00:04:58.867 [2024-07-24 17:59:51.910610] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.124 [2024-07-24 17:59:51.990580] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.689 17:59:52 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:59.689 17:59:52 -- common/autotest_common.sh@864 -- # return 0 00:04:59.689 17:59:52 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:04:59.689 17:59:52 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:04:59.689 17:59:52 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5f:00.0 00:05:02.970 nvme0n1 00:05:02.970 17:59:55 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:02.970 [2024-07-24 17:59:55.784689] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:05:02.970 request: 00:05:02.970 { 00:05:02.970 "nvme_ctrlr_name": "nvme0", 00:05:02.970 "password": "test", 00:05:02.970 "method": "bdev_nvme_opal_revert", 00:05:02.970 "req_id": 1 00:05:02.970 } 00:05:02.970 Got JSON-RPC error response 00:05:02.970 response: 00:05:02.970 { 00:05:02.970 "code": -32602, 00:05:02.970 "message": "Invalid parameters" 00:05:02.970 } 00:05:02.970 17:59:55 -- common/autotest_common.sh@1604 -- # true 00:05:02.970 17:59:55 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:05:02.970 17:59:55 -- common/autotest_common.sh@1608 -- # killprocess 3224571 00:05:02.970 17:59:55 -- common/autotest_common.sh@950 -- # '[' -z 3224571 ']' 00:05:02.970 17:59:55 -- common/autotest_common.sh@954 -- # kill -0 3224571 00:05:02.970 17:59:55 -- common/autotest_common.sh@955 -- # uname 00:05:02.970 17:59:55 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:02.970 17:59:55 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3224571 00:05:02.970 17:59:55 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:02.970 17:59:55 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:02.970 17:59:55 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3224571' 00:05:02.970 killing process with pid 3224571 00:05:02.970 17:59:55 -- common/autotest_common.sh@969 -- # kill 3224571 00:05:02.970 17:59:55 -- common/autotest_common.sh@974 -- # wait 3224571 00:05:05.501 17:59:58 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:05.501 17:59:58 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:05.501 17:59:58 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:05.501 17:59:58 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:05.501 17:59:58 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:05.501 17:59:58 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:05.501 17:59:58 -- common/autotest_common.sh@10 -- # set +x 00:05:05.501 17:59:58 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:05.501 17:59:58 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:05.501 17:59:58 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 
']' 00:05:05.501 17:59:58 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:05.501 17:59:58 -- common/autotest_common.sh@10 -- # set +x 00:05:05.501 ************************************ 00:05:05.501 START TEST env 00:05:05.501 ************************************ 00:05:05.501 17:59:58 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:05.501 * Looking for test storage... 00:05:05.501 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:05.501 17:59:58 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:05.501 17:59:58 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:05.501 17:59:58 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:05.501 17:59:58 env -- common/autotest_common.sh@10 -- # set +x 00:05:05.501 ************************************ 00:05:05.501 START TEST env_memory 00:05:05.501 ************************************ 00:05:05.501 17:59:58 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:05.501 00:05:05.501 00:05:05.501 CUnit - A unit testing framework for C - Version 2.1-3 00:05:05.501 http://cunit.sourceforge.net/ 00:05:05.501 00:05:05.501 00:05:05.501 Suite: memory 00:05:05.501 Test: alloc and free memory map ...[2024-07-24 17:59:58.203197] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:05.501 passed 00:05:05.501 Test: mem map translation ...[2024-07-24 17:59:58.221918] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:05.501 [2024-07-24 17:59:58.221932] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:05.501 [2024-07-24 17:59:58.221983] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:05.501 [2024-07-24 17:59:58.221990] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:05.501 passed 00:05:05.501 Test: mem map registration ...[2024-07-24 17:59:58.259237] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:05.501 [2024-07-24 17:59:58.259251] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:05.501 passed 00:05:05.501 Test: mem map adjacent registrations ...passed 00:05:05.501 00:05:05.501 Run Summary: Type Total Ran Passed Failed Inactive 00:05:05.501 suites 1 1 n/a 0 0 00:05:05.501 tests 4 4 4 0 0 00:05:05.501 asserts 152 152 152 0 n/a 00:05:05.501 00:05:05.501 Elapsed time = 0.133 seconds 00:05:05.501 00:05:05.501 real 0m0.145s 00:05:05.501 user 0m0.139s 00:05:05.501 sys 0m0.006s 00:05:05.501 17:59:58 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:05.501 17:59:58 env.env_memory -- common/autotest_common.sh@10 -- # 
set +x 00:05:05.501 ************************************ 00:05:05.501 END TEST env_memory 00:05:05.501 ************************************ 00:05:05.501 17:59:58 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:05.501 17:59:58 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:05.501 17:59:58 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:05.501 17:59:58 env -- common/autotest_common.sh@10 -- # set +x 00:05:05.501 ************************************ 00:05:05.501 START TEST env_vtophys 00:05:05.501 ************************************ 00:05:05.501 17:59:58 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:05.501 EAL: lib.eal log level changed from notice to debug 00:05:05.501 EAL: Detected lcore 0 as core 0 on socket 0 00:05:05.501 EAL: Detected lcore 1 as core 1 on socket 0 00:05:05.501 EAL: Detected lcore 2 as core 2 on socket 0 00:05:05.501 EAL: Detected lcore 3 as core 3 on socket 0 00:05:05.501 EAL: Detected lcore 4 as core 4 on socket 0 00:05:05.501 EAL: Detected lcore 5 as core 5 on socket 0 00:05:05.501 EAL: Detected lcore 6 as core 6 on socket 0 00:05:05.501 EAL: Detected lcore 7 as core 9 on socket 0 00:05:05.501 EAL: Detected lcore 8 as core 10 on socket 0 00:05:05.501 EAL: Detected lcore 9 as core 11 on socket 0 00:05:05.501 EAL: Detected lcore 10 as core 12 on socket 0 00:05:05.501 EAL: Detected lcore 11 as core 13 on socket 0 00:05:05.501 EAL: Detected lcore 12 as core 16 on socket 0 00:05:05.501 EAL: Detected lcore 13 as core 17 on socket 0 00:05:05.501 EAL: Detected lcore 14 as core 18 on socket 0 00:05:05.501 EAL: Detected lcore 15 as core 19 on socket 0 00:05:05.501 EAL: Detected lcore 16 as core 20 on socket 0 00:05:05.501 EAL: Detected lcore 17 as core 21 on socket 0 00:05:05.501 EAL: Detected lcore 18 as core 24 on socket 0 00:05:05.501 EAL: Detected lcore 19 as core 25 on socket 0 00:05:05.501 EAL: Detected lcore 20 as core 26 on socket 0 00:05:05.501 EAL: Detected lcore 21 as core 27 on socket 0 00:05:05.501 EAL: Detected lcore 22 as core 28 on socket 0 00:05:05.501 EAL: Detected lcore 23 as core 29 on socket 0 00:05:05.501 EAL: Detected lcore 24 as core 0 on socket 1 00:05:05.501 EAL: Detected lcore 25 as core 1 on socket 1 00:05:05.501 EAL: Detected lcore 26 as core 2 on socket 1 00:05:05.501 EAL: Detected lcore 27 as core 3 on socket 1 00:05:05.501 EAL: Detected lcore 28 as core 4 on socket 1 00:05:05.501 EAL: Detected lcore 29 as core 5 on socket 1 00:05:05.501 EAL: Detected lcore 30 as core 6 on socket 1 00:05:05.501 EAL: Detected lcore 31 as core 8 on socket 1 00:05:05.501 EAL: Detected lcore 32 as core 9 on socket 1 00:05:05.501 EAL: Detected lcore 33 as core 10 on socket 1 00:05:05.501 EAL: Detected lcore 34 as core 11 on socket 1 00:05:05.501 EAL: Detected lcore 35 as core 12 on socket 1 00:05:05.501 EAL: Detected lcore 36 as core 13 on socket 1 00:05:05.501 EAL: Detected lcore 37 as core 16 on socket 1 00:05:05.501 EAL: Detected lcore 38 as core 17 on socket 1 00:05:05.501 EAL: Detected lcore 39 as core 18 on socket 1 00:05:05.501 EAL: Detected lcore 40 as core 19 on socket 1 00:05:05.501 EAL: Detected lcore 41 as core 20 on socket 1 00:05:05.501 EAL: Detected lcore 42 as core 21 on socket 1 00:05:05.502 EAL: Detected lcore 43 as core 25 on socket 1 00:05:05.502 EAL: Detected lcore 44 as core 26 on socket 1 00:05:05.502 EAL: Detected lcore 45 as core 27 on socket 1 
00:05:05.502 EAL: Detected lcore 46 as core 28 on socket 1 00:05:05.502 EAL: Detected lcore 47 as core 29 on socket 1 00:05:05.502 EAL: Detected lcore 48 as core 0 on socket 0 00:05:05.502 EAL: Detected lcore 49 as core 1 on socket 0 00:05:05.502 EAL: Detected lcore 50 as core 2 on socket 0 00:05:05.502 EAL: Detected lcore 51 as core 3 on socket 0 00:05:05.502 EAL: Detected lcore 52 as core 4 on socket 0 00:05:05.502 EAL: Detected lcore 53 as core 5 on socket 0 00:05:05.502 EAL: Detected lcore 54 as core 6 on socket 0 00:05:05.502 EAL: Detected lcore 55 as core 9 on socket 0 00:05:05.502 EAL: Detected lcore 56 as core 10 on socket 0 00:05:05.502 EAL: Detected lcore 57 as core 11 on socket 0 00:05:05.502 EAL: Detected lcore 58 as core 12 on socket 0 00:05:05.502 EAL: Detected lcore 59 as core 13 on socket 0 00:05:05.502 EAL: Detected lcore 60 as core 16 on socket 0 00:05:05.502 EAL: Detected lcore 61 as core 17 on socket 0 00:05:05.502 EAL: Detected lcore 62 as core 18 on socket 0 00:05:05.502 EAL: Detected lcore 63 as core 19 on socket 0 00:05:05.502 EAL: Detected lcore 64 as core 20 on socket 0 00:05:05.502 EAL: Detected lcore 65 as core 21 on socket 0 00:05:05.502 EAL: Detected lcore 66 as core 24 on socket 0 00:05:05.502 EAL: Detected lcore 67 as core 25 on socket 0 00:05:05.502 EAL: Detected lcore 68 as core 26 on socket 0 00:05:05.502 EAL: Detected lcore 69 as core 27 on socket 0 00:05:05.502 EAL: Detected lcore 70 as core 28 on socket 0 00:05:05.502 EAL: Detected lcore 71 as core 29 on socket 0 00:05:05.502 EAL: Detected lcore 72 as core 0 on socket 1 00:05:05.502 EAL: Detected lcore 73 as core 1 on socket 1 00:05:05.502 EAL: Detected lcore 74 as core 2 on socket 1 00:05:05.502 EAL: Detected lcore 75 as core 3 on socket 1 00:05:05.502 EAL: Detected lcore 76 as core 4 on socket 1 00:05:05.502 EAL: Detected lcore 77 as core 5 on socket 1 00:05:05.502 EAL: Detected lcore 78 as core 6 on socket 1 00:05:05.502 EAL: Detected lcore 79 as core 8 on socket 1 00:05:05.502 EAL: Detected lcore 80 as core 9 on socket 1 00:05:05.502 EAL: Detected lcore 81 as core 10 on socket 1 00:05:05.502 EAL: Detected lcore 82 as core 11 on socket 1 00:05:05.502 EAL: Detected lcore 83 as core 12 on socket 1 00:05:05.502 EAL: Detected lcore 84 as core 13 on socket 1 00:05:05.502 EAL: Detected lcore 85 as core 16 on socket 1 00:05:05.502 EAL: Detected lcore 86 as core 17 on socket 1 00:05:05.502 EAL: Detected lcore 87 as core 18 on socket 1 00:05:05.502 EAL: Detected lcore 88 as core 19 on socket 1 00:05:05.502 EAL: Detected lcore 89 as core 20 on socket 1 00:05:05.502 EAL: Detected lcore 90 as core 21 on socket 1 00:05:05.502 EAL: Detected lcore 91 as core 25 on socket 1 00:05:05.502 EAL: Detected lcore 92 as core 26 on socket 1 00:05:05.502 EAL: Detected lcore 93 as core 27 on socket 1 00:05:05.502 EAL: Detected lcore 94 as core 28 on socket 1 00:05:05.502 EAL: Detected lcore 95 as core 29 on socket 1 00:05:05.502 EAL: Maximum logical cores by configuration: 128 00:05:05.502 EAL: Detected CPU lcores: 96 00:05:05.502 EAL: Detected NUMA nodes: 2 00:05:05.502 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:05.502 EAL: Detected shared linkage of DPDK 00:05:05.502 EAL: No shared files mode enabled, IPC will be disabled 00:05:05.502 EAL: Bus pci wants IOVA as 'DC' 00:05:05.502 EAL: Buses did not request a specific IOVA mode. 00:05:05.502 EAL: IOMMU is available, selecting IOVA as VA mode. 
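The EAL startup above enumerates all 96 lcores with their core/socket placement and selects IOVA mode 'VA' because an IOMMU is present. Both facts can be checked from the shell; a small sketch (output ordering may differ from EAL's):

  # Reproduce the lcore -> core/socket mapping EAL just logged
  lscpu -p=CPU,CORE,SOCKET | grep -v '^#' |
  while IFS=, read -r cpu core socket; do
      echo "Detected lcore $cpu as core $core on socket $socket"
  done
  # IOVA mode 'VA' requires a working IOMMU; confirm it is active
  if compgen -G '/sys/kernel/iommu_groups/*' > /dev/null; then
      echo "IOMMU enabled ($(ls /sys/kernel/iommu_groups | wc -l) groups)"
  else
      echo "no IOMMU groups; EAL would fall back to IOVA mode 'PA'"
  fi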
00:05:05.502 EAL: Selected IOVA mode 'VA' 00:05:05.502 EAL: No free 2048 kB hugepages reported on node 1 00:05:05.502 EAL: Probing VFIO support... 00:05:05.502 EAL: IOMMU type 1 (Type 1) is supported 00:05:05.502 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:05.502 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:05.502 EAL: VFIO support initialized 00:05:05.502 EAL: Ask a virtual area of 0x2e000 bytes 00:05:05.502 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:05.502 EAL: Setting up physically contiguous memory... 00:05:05.502 EAL: Setting maximum number of open files to 524288 00:05:05.502 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:05.502 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:05.502 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:05.502 EAL: Ask a virtual area of 0x61000 bytes 00:05:05.502 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:05.502 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:05.502 EAL: Ask a virtual area of 0x400000000 bytes 00:05:05.502 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:05.502 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:05.502 EAL: Ask a virtual area of 0x61000 bytes 00:05:05.502 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:05.502 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:05.502 EAL: Ask a virtual area of 0x400000000 bytes 00:05:05.502 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:05.502 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:05.502 EAL: Ask a virtual area of 0x61000 bytes 00:05:05.502 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:05.502 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:05.502 EAL: Ask a virtual area of 0x400000000 bytes 00:05:05.502 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:05.502 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:05.502 EAL: Ask a virtual area of 0x61000 bytes 00:05:05.502 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:05.502 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:05.502 EAL: Ask a virtual area of 0x400000000 bytes 00:05:05.502 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:05.502 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:05.502 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:05.502 EAL: Ask a virtual area of 0x61000 bytes 00:05:05.502 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:05.502 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:05.502 EAL: Ask a virtual area of 0x400000000 bytes 00:05:05.502 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:05.502 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:05.502 EAL: Ask a virtual area of 0x61000 bytes 00:05:05.502 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:05.502 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:05.502 EAL: Ask a virtual area of 0x400000000 bytes 00:05:05.502 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:05.502 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:05.502 EAL: Ask a virtual area of 0x61000 bytes 00:05:05.502 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:05.502 EAL: Memseg list 
allocated at socket 1, page size 0x800kB 00:05:05.502 EAL: Ask a virtual area of 0x400000000 bytes 00:05:05.502 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:05.502 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:05.502 EAL: Ask a virtual area of 0x61000 bytes 00:05:05.502 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:05.502 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:05.502 EAL: Ask a virtual area of 0x400000000 bytes 00:05:05.502 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:05:05.502 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:05.502 EAL: Hugepages will be freed exactly as allocated. 00:05:05.502 EAL: No shared files mode enabled, IPC is disabled 00:05:05.502 EAL: No shared files mode enabled, IPC is disabled 00:05:05.502 EAL: TSC frequency is ~2100000 KHz 00:05:05.502 EAL: Main lcore 0 is ready (tid=7f742ec9fa00;cpuset=[0]) 00:05:05.502 EAL: Trying to obtain current memory policy. 00:05:05.502 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:05.502 EAL: Restoring previous memory policy: 0 00:05:05.502 EAL: request: mp_malloc_sync 00:05:05.502 EAL: No shared files mode enabled, IPC is disabled 00:05:05.502 EAL: Heap on socket 0 was expanded by 2MB 00:05:05.502 EAL: No shared files mode enabled, IPC is disabled 00:05:05.502 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:05.502 EAL: Mem event callback 'spdk:(nil)' registered 00:05:05.502 00:05:05.502 00:05:05.502 CUnit - A unit testing framework for C - Version 2.1-3 00:05:05.502 http://cunit.sourceforge.net/ 00:05:05.502 00:05:05.502 00:05:05.502 Suite: components_suite 00:05:05.502 Test: vtophys_malloc_test ...passed 00:05:05.502 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:05.502 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:05.502 EAL: Restoring previous memory policy: 4 00:05:05.502 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.502 EAL: request: mp_malloc_sync 00:05:05.502 EAL: No shared files mode enabled, IPC is disabled 00:05:05.502 EAL: Heap on socket 0 was expanded by 4MB 00:05:05.502 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.502 EAL: request: mp_malloc_sync 00:05:05.502 EAL: No shared files mode enabled, IPC is disabled 00:05:05.502 EAL: Heap on socket 0 was shrunk by 4MB 00:05:05.502 EAL: Trying to obtain current memory policy. 00:05:05.502 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:05.503 EAL: Restoring previous memory policy: 4 00:05:05.503 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.503 EAL: request: mp_malloc_sync 00:05:05.503 EAL: No shared files mode enabled, IPC is disabled 00:05:05.503 EAL: Heap on socket 0 was expanded by 6MB 00:05:05.503 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.503 EAL: request: mp_malloc_sync 00:05:05.503 EAL: No shared files mode enabled, IPC is disabled 00:05:05.503 EAL: Heap on socket 0 was shrunk by 6MB 00:05:05.503 EAL: Trying to obtain current memory policy. 
00:05:05.503 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:05.503 EAL: Restoring previous memory policy: 4 00:05:05.503 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.503 EAL: request: mp_malloc_sync 00:05:05.503 EAL: No shared files mode enabled, IPC is disabled 00:05:05.503 EAL: Heap on socket 0 was expanded by 10MB 00:05:05.503 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.503 EAL: request: mp_malloc_sync 00:05:05.503 EAL: No shared files mode enabled, IPC is disabled 00:05:05.503 EAL: Heap on socket 0 was shrunk by 10MB 00:05:05.503 EAL: Trying to obtain current memory policy. 00:05:05.503 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:05.503 EAL: Restoring previous memory policy: 4 00:05:05.503 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.503 EAL: request: mp_malloc_sync 00:05:05.503 EAL: No shared files mode enabled, IPC is disabled 00:05:05.503 EAL: Heap on socket 0 was expanded by 18MB 00:05:05.503 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.503 EAL: request: mp_malloc_sync 00:05:05.503 EAL: No shared files mode enabled, IPC is disabled 00:05:05.503 EAL: Heap on socket 0 was shrunk by 18MB 00:05:05.503 EAL: Trying to obtain current memory policy. 00:05:05.503 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:05.503 EAL: Restoring previous memory policy: 4 00:05:05.503 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.503 EAL: request: mp_malloc_sync 00:05:05.503 EAL: No shared files mode enabled, IPC is disabled 00:05:05.503 EAL: Heap on socket 0 was expanded by 34MB 00:05:05.503 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.503 EAL: request: mp_malloc_sync 00:05:05.503 EAL: No shared files mode enabled, IPC is disabled 00:05:05.503 EAL: Heap on socket 0 was shrunk by 34MB 00:05:05.503 EAL: Trying to obtain current memory policy. 00:05:05.503 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:05.503 EAL: Restoring previous memory policy: 4 00:05:05.503 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.503 EAL: request: mp_malloc_sync 00:05:05.503 EAL: No shared files mode enabled, IPC is disabled 00:05:05.503 EAL: Heap on socket 0 was expanded by 66MB 00:05:05.503 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.503 EAL: request: mp_malloc_sync 00:05:05.503 EAL: No shared files mode enabled, IPC is disabled 00:05:05.503 EAL: Heap on socket 0 was shrunk by 66MB 00:05:05.503 EAL: Trying to obtain current memory policy. 00:05:05.503 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:05.503 EAL: Restoring previous memory policy: 4 00:05:05.503 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.503 EAL: request: mp_malloc_sync 00:05:05.503 EAL: No shared files mode enabled, IPC is disabled 00:05:05.503 EAL: Heap on socket 0 was expanded by 130MB 00:05:05.503 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.503 EAL: request: mp_malloc_sync 00:05:05.503 EAL: No shared files mode enabled, IPC is disabled 00:05:05.503 EAL: Heap on socket 0 was shrunk by 130MB 00:05:05.503 EAL: Trying to obtain current memory policy. 
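The expansions logged so far (4, 6, 10, 18, 34, 66, 130 MB) each equal a power of two plus 2 MB, which appears consistent with the test doubling its buffer size every round on top of one 2 MB hugepage of allocator overhead, and every expansion is mirrored by an equal shrink when the buffer is freed. A hedged check of that pairing against a saved copy of this output (the file name is illustrative):

  log=vtophys.log
  diff <(grep -o 'expanded by [0-9]*MB' "$log" | sort) \
       <(grep -o 'shrunk by [0-9]*MB' "$log" | sed 's/shrunk/expanded/' | sort) \
    && echo "every expansion had a matching shrink"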
00:05:05.503 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:05.761 EAL: Restoring previous memory policy: 4 00:05:05.761 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.761 EAL: request: mp_malloc_sync 00:05:05.761 EAL: No shared files mode enabled, IPC is disabled 00:05:05.761 EAL: Heap on socket 0 was expanded by 258MB 00:05:05.761 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.761 EAL: request: mp_malloc_sync 00:05:05.761 EAL: No shared files mode enabled, IPC is disabled 00:05:05.761 EAL: Heap on socket 0 was shrunk by 258MB 00:05:05.761 EAL: Trying to obtain current memory policy. 00:05:05.761 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:05.761 EAL: Restoring previous memory policy: 4 00:05:05.761 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.761 EAL: request: mp_malloc_sync 00:05:05.761 EAL: No shared files mode enabled, IPC is disabled 00:05:05.761 EAL: Heap on socket 0 was expanded by 514MB 00:05:06.019 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.019 EAL: request: mp_malloc_sync 00:05:06.019 EAL: No shared files mode enabled, IPC is disabled 00:05:06.019 EAL: Heap on socket 0 was shrunk by 514MB 00:05:06.019 EAL: Trying to obtain current memory policy. 00:05:06.019 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.277 EAL: Restoring previous memory policy: 4 00:05:06.277 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.277 EAL: request: mp_malloc_sync 00:05:06.277 EAL: No shared files mode enabled, IPC is disabled 00:05:06.277 EAL: Heap on socket 0 was expanded by 1026MB 00:05:06.277 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.536 EAL: request: mp_malloc_sync 00:05:06.536 EAL: No shared files mode enabled, IPC is disabled 00:05:06.536 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:06.536 passed 00:05:06.536 00:05:06.536 Run Summary: Type Total Ran Passed Failed Inactive 00:05:06.536 suites 1 1 n/a 0 0 00:05:06.536 tests 2 2 2 0 0 00:05:06.536 asserts 497 497 497 0 n/a 00:05:06.536 00:05:06.536 Elapsed time = 0.963 seconds 00:05:06.536 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.536 EAL: request: mp_malloc_sync 00:05:06.536 EAL: No shared files mode enabled, IPC is disabled 00:05:06.536 EAL: Heap on socket 0 was shrunk by 2MB 00:05:06.536 EAL: No shared files mode enabled, IPC is disabled 00:05:06.536 EAL: No shared files mode enabled, IPC is disabled 00:05:06.536 EAL: No shared files mode enabled, IPC is disabled 00:05:06.536 00:05:06.536 real 0m1.069s 00:05:06.536 user 0m0.634s 00:05:06.536 sys 0m0.414s 00:05:06.536 17:59:59 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:06.536 17:59:59 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:06.536 ************************************ 00:05:06.536 END TEST env_vtophys 00:05:06.536 ************************************ 00:05:06.536 17:59:59 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:06.536 17:59:59 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:06.536 17:59:59 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:06.536 17:59:59 env -- common/autotest_common.sh@10 -- # set +x 00:05:06.536 ************************************ 00:05:06.536 START TEST env_pci 00:05:06.536 ************************************ 00:05:06.536 17:59:59 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:06.536 00:05:06.536 00:05:06.536 CUnit - A unit testing 
framework for C - Version 2.1-3 00:05:06.536 http://cunit.sourceforge.net/ 00:05:06.536 00:05:06.536 00:05:06.536 Suite: pci 00:05:06.536 Test: pci_hook ...[2024-07-24 17:59:59.532725] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3225891 has claimed it 00:05:06.536 EAL: Cannot find device (10000:00:01.0) 00:05:06.536 EAL: Failed to attach device on primary process 00:05:06.536 passed 00:05:06.536 00:05:06.536 Run Summary: Type Total Ran Passed Failed Inactive 00:05:06.536 suites 1 1 n/a 0 0 00:05:06.536 tests 1 1 1 0 0 00:05:06.536 asserts 25 25 25 0 n/a 00:05:06.536 00:05:06.536 Elapsed time = 0.025 seconds 00:05:06.536 00:05:06.536 real 0m0.045s 00:05:06.536 user 0m0.013s 00:05:06.536 sys 0m0.031s 00:05:06.536 17:59:59 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:06.536 17:59:59 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:06.536 ************************************ 00:05:06.536 END TEST env_pci 00:05:06.536 ************************************ 00:05:06.536 17:59:59 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:06.536 17:59:59 env -- env/env.sh@15 -- # uname 00:05:06.536 17:59:59 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:06.536 17:59:59 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:06.536 17:59:59 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:06.536 17:59:59 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:05:06.537 17:59:59 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:06.537 17:59:59 env -- common/autotest_common.sh@10 -- # set +x 00:05:06.795 ************************************ 00:05:06.795 START TEST env_dpdk_post_init 00:05:06.795 ************************************ 00:05:06.795 17:59:59 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:06.795 EAL: Detected CPU lcores: 96 00:05:06.795 EAL: Detected NUMA nodes: 2 00:05:06.795 EAL: Detected shared linkage of DPDK 00:05:06.795 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:06.795 EAL: Selected IOVA mode 'VA' 00:05:06.795 EAL: No free 2048 kB hugepages reported on node 1 00:05:06.795 EAL: VFIO support initialized 00:05:06.795 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:06.796 EAL: Using IOMMU type 1 (Type 1) 00:05:06.796 EAL: Ignore mapping IO port bar(1) 00:05:06.796 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:05:06.796 EAL: Ignore mapping IO port bar(1) 00:05:06.796 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:05:06.796 EAL: Ignore mapping IO port bar(1) 00:05:06.796 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:05:06.796 EAL: Ignore mapping IO port bar(1) 00:05:06.796 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:05:06.796 EAL: Ignore mapping IO port bar(1) 00:05:06.796 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:05:06.796 EAL: Ignore mapping IO port bar(1) 00:05:06.796 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:05:06.796 EAL: Ignore mapping IO 
port bar(1) 00:05:06.796 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:05:06.796 EAL: Ignore mapping IO port bar(1) 00:05:06.796 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:05:07.731 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5f:00.0 (socket 0) 00:05:07.731 EAL: Ignore mapping IO port bar(1) 00:05:07.731 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:05:07.731 EAL: Ignore mapping IO port bar(1) 00:05:07.731 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:05:07.731 EAL: Ignore mapping IO port bar(1) 00:05:07.731 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:05:07.731 EAL: Ignore mapping IO port bar(1) 00:05:07.731 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:05:07.731 EAL: Ignore mapping IO port bar(1) 00:05:07.731 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:05:07.731 EAL: Ignore mapping IO port bar(1) 00:05:07.731 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:05:07.731 EAL: Ignore mapping IO port bar(1) 00:05:07.731 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:05:07.731 EAL: Ignore mapping IO port bar(1) 00:05:07.731 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:05:11.096 EAL: Releasing PCI mapped resource for 0000:5f:00.0 00:05:11.096 EAL: Calling pci_unmap_resource for 0000:5f:00.0 at 0x202001020000 00:05:11.662 Starting DPDK initialization... 00:05:11.662 Starting SPDK post initialization... 00:05:11.662 SPDK NVMe probe 00:05:11.662 Attaching to 0000:5f:00.0 00:05:11.662 Attached to 0000:5f:00.0 00:05:11.662 Cleaning up... 
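The probe lines above show EAL claiming the I/OAT channels through spdk_ioat and the NVMe controller at 0000:5f:00.0 through spdk_nvme, all sitting on vfio-pci after the earlier rebind. The kernel-side binding can be confirmed through sysfs; a small sketch using BDFs from this run:

  for bdf in 0000:5f:00.0 0000:00:04.0 0000:80:04.0; do
      if [[ -e /sys/bus/pci/devices/$bdf/driver ]]; then
          echo "$bdf -> $(basename "$(readlink -f /sys/bus/pci/devices/$bdf/driver)")"
      else
          echo "$bdf -> (unbound)"
      fi
  done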
00:05:11.662 00:05:11.662 real 0m4.897s 00:05:11.663 user 0m3.831s 00:05:11.663 sys 0m0.137s 00:05:11.663 18:00:04 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:11.663 18:00:04 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:11.663 ************************************ 00:05:11.663 END TEST env_dpdk_post_init 00:05:11.663 ************************************ 00:05:11.663 18:00:04 env -- env/env.sh@26 -- # uname 00:05:11.663 18:00:04 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:11.663 18:00:04 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:11.663 18:00:04 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:11.663 18:00:04 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:11.663 18:00:04 env -- common/autotest_common.sh@10 -- # set +x 00:05:11.663 ************************************ 00:05:11.663 START TEST env_mem_callbacks 00:05:11.663 ************************************ 00:05:11.663 18:00:04 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:11.663 EAL: Detected CPU lcores: 96 00:05:11.663 EAL: Detected NUMA nodes: 2 00:05:11.663 EAL: Detected shared linkage of DPDK 00:05:11.663 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:11.663 EAL: Selected IOVA mode 'VA' 00:05:11.663 EAL: No free 2048 kB hugepages reported on node 1 00:05:11.663 EAL: VFIO support initialized 00:05:11.663 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:11.663 00:05:11.663 00:05:11.663 CUnit - A unit testing framework for C - Version 2.1-3 00:05:11.663 http://cunit.sourceforge.net/ 00:05:11.663 00:05:11.663 00:05:11.663 Suite: memory 00:05:11.663 Test: test ... 
00:05:11.663 register 0x200000200000 2097152 00:05:11.663 malloc 3145728 00:05:11.663 register 0x200000400000 4194304 00:05:11.663 buf 0x200000500000 len 3145728 PASSED 00:05:11.663 malloc 64 00:05:11.663 buf 0x2000004fff40 len 64 PASSED 00:05:11.663 malloc 4194304 00:05:11.663 register 0x200000800000 6291456 00:05:11.663 buf 0x200000a00000 len 4194304 PASSED 00:05:11.663 free 0x200000500000 3145728 00:05:11.663 free 0x2000004fff40 64 00:05:11.663 unregister 0x200000400000 4194304 PASSED 00:05:11.663 free 0x200000a00000 4194304 00:05:11.663 unregister 0x200000800000 6291456 PASSED 00:05:11.663 malloc 8388608 00:05:11.663 register 0x200000400000 10485760 00:05:11.663 buf 0x200000600000 len 8388608 PASSED 00:05:11.663 free 0x200000600000 8388608 00:05:11.663 unregister 0x200000400000 10485760 PASSED 00:05:11.663 passed 00:05:11.663 00:05:11.663 Run Summary: Type Total Ran Passed Failed Inactive 00:05:11.663 suites 1 1 n/a 0 0 00:05:11.663 tests 1 1 1 0 0 00:05:11.663 asserts 15 15 15 0 n/a 00:05:11.663 00:05:11.663 Elapsed time = 0.006 seconds 00:05:11.663 00:05:11.663 real 0m0.057s 00:05:11.663 user 0m0.017s 00:05:11.663 sys 0m0.040s 00:05:11.663 18:00:04 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:11.663 18:00:04 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:11.663 ************************************ 00:05:11.663 END TEST env_mem_callbacks 00:05:11.663 ************************************ 00:05:11.663 00:05:11.663 real 0m6.657s 00:05:11.663 user 0m4.806s 00:05:11.663 sys 0m0.932s 00:05:11.663 18:00:04 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:11.663 18:00:04 env -- common/autotest_common.sh@10 -- # set +x 00:05:11.663 ************************************ 00:05:11.663 END TEST env 00:05:11.663 ************************************ 00:05:11.663 18:00:04 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:11.663 18:00:04 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:11.663 18:00:04 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:11.663 18:00:04 -- common/autotest_common.sh@10 -- # set +x 00:05:11.921 ************************************ 00:05:11.921 START TEST rpc 00:05:11.921 ************************************ 00:05:11.921 18:00:04 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:11.921 * Looking for test storage... 00:05:11.921 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:11.921 18:00:04 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3227057 00:05:11.921 18:00:04 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:11.921 18:00:04 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:11.921 18:00:04 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3227057 00:05:11.921 18:00:04 rpc -- common/autotest_common.sh@831 -- # '[' -z 3227057 ']' 00:05:11.921 18:00:04 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.921 18:00:04 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:11.921 18:00:04 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:11.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
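rpc.sh, starting here, launches spdk_tgt with -e bdev and blocks in waitforlisten until the JSON-RPC Unix socket answers. A condensed sketch of that start-and-wait pattern; the polling call and interval are illustrative stand-ins for what waitforlisten does internally:

  rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$rootdir/build/bin/spdk_tgt" -e bdev &
  spdk_pid=$!
  # poll the socket until the target responds to a trivial RPC
  until "$rootdir/scripts/rpc.py" -t 1 -s /var/tmp/spdk.sock rpc_get_methods \
          > /dev/null 2>&1; do
      sleep 0.1
  done
  "$rootdir/scripts/rpc.py" bdev_get_bdevs   # the call rpc_integrity issues next
  kill "$spdk_pid"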
00:05:11.921 18:00:04 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:11.921 18:00:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.921 [2024-07-24 18:00:04.917101] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 00:05:11.921 [2024-07-24 18:00:04.917146] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3227057 ] 00:05:11.921 EAL: No free 2048 kB hugepages reported on node 1 00:05:11.921 [2024-07-24 18:00:04.971054] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.178 [2024-07-24 18:00:05.051066] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:12.179 [2024-07-24 18:00:05.051102] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3227057' to capture a snapshot of events at runtime. 00:05:12.179 [2024-07-24 18:00:05.051111] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:12.179 [2024-07-24 18:00:05.051119] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:12.179 [2024-07-24 18:00:05.051124] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3227057 for offline analysis/debug. 00:05:12.179 [2024-07-24 18:00:05.051142] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.760 18:00:05 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:12.760 18:00:05 rpc -- common/autotest_common.sh@864 -- # return 0 00:05:12.760 18:00:05 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:12.760 18:00:05 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:12.760 18:00:05 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:12.760 18:00:05 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:12.760 18:00:05 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:12.760 18:00:05 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:12.760 18:00:05 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.760 ************************************ 00:05:12.760 START TEST rpc_integrity 00:05:12.760 ************************************ 00:05:12.760 18:00:05 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:12.760 18:00:05 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:12.760 18:00:05 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:12.760 18:00:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.760 18:00:05 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:12.760 18:00:05 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:12.760 18:00:05 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:12.760 18:00:05 
rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:12.760 18:00:05 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:12.760 18:00:05 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:12.760 18:00:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.760 18:00:05 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:12.760 18:00:05 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:12.760 18:00:05 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:12.760 18:00:05 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:12.760 18:00:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:12.760 18:00:05 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:12.760 18:00:05 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:12.760 { 00:05:12.760 "name": "Malloc0", 00:05:12.760 "aliases": [ 00:05:12.760 "de88393a-e541-4019-8d34-73833ab43a8a" 00:05:12.760 ], 00:05:12.760 "product_name": "Malloc disk", 00:05:12.760 "block_size": 512, 00:05:12.760 "num_blocks": 16384, 00:05:12.760 "uuid": "de88393a-e541-4019-8d34-73833ab43a8a", 00:05:12.760 "assigned_rate_limits": { 00:05:12.760 "rw_ios_per_sec": 0, 00:05:12.760 "rw_mbytes_per_sec": 0, 00:05:12.760 "r_mbytes_per_sec": 0, 00:05:12.760 "w_mbytes_per_sec": 0 00:05:12.760 }, 00:05:12.760 "claimed": false, 00:05:12.760 "zoned": false, 00:05:12.761 "supported_io_types": { 00:05:12.761 "read": true, 00:05:12.761 "write": true, 00:05:12.761 "unmap": true, 00:05:12.761 "flush": true, 00:05:12.761 "reset": true, 00:05:12.761 "nvme_admin": false, 00:05:12.761 "nvme_io": false, 00:05:12.761 "nvme_io_md": false, 00:05:12.761 "write_zeroes": true, 00:05:12.761 "zcopy": true, 00:05:12.761 "get_zone_info": false, 00:05:12.761 "zone_management": false, 00:05:12.761 "zone_append": false, 00:05:12.761 "compare": false, 00:05:12.761 "compare_and_write": false, 00:05:12.761 "abort": true, 00:05:12.761 "seek_hole": false, 00:05:12.761 "seek_data": false, 00:05:12.761 "copy": true, 00:05:12.761 "nvme_iov_md": false 00:05:12.761 }, 00:05:12.761 "memory_domains": [ 00:05:12.761 { 00:05:12.761 "dma_device_id": "system", 00:05:12.761 "dma_device_type": 1 00:05:12.761 }, 00:05:12.761 { 00:05:12.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:12.761 "dma_device_type": 2 00:05:12.761 } 00:05:12.761 ], 00:05:12.761 "driver_specific": {} 00:05:12.761 } 00:05:12.761 ]' 00:05:12.761 18:00:05 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:13.056 18:00:05 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:13.056 18:00:05 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:13.056 18:00:05 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.056 18:00:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.056 [2024-07-24 18:00:05.866302] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:13.056 [2024-07-24 18:00:05.866330] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:13.056 [2024-07-24 18:00:05.866342] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xae52d0 00:05:13.056 [2024-07-24 18:00:05.866348] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:13.056 [2024-07-24 18:00:05.867430] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
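The rpc_integrity steps above map onto plain scripts/rpc.py calls; a sketch of the same create/verify/delete cycle (the delete half appears further below in the log):

  ./spdk/scripts/rpc.py bdev_malloc_create 8 512                      # 8 MB, 512 B blocks -> Malloc0 (16384 blocks)
  ./spdk/scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0  # stack a passthru bdev on the malloc bdev
  ./spdk/scripts/rpc.py bdev_get_bdevs | jq length                    # 2: Malloc0 plus Passthru0
  ./spdk/scripts/rpc.py bdev_passthru_delete Passthru0
  ./spdk/scripts/rpc.py bdev_malloc_delete Malloc0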
00:05:13.056 [2024-07-24 18:00:05.867451] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:13.056 Passthru0 00:05:13.056 18:00:05 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.056 18:00:05 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:13.056 18:00:05 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.056 18:00:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.056 18:00:05 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.056 18:00:05 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:13.056 { 00:05:13.056 "name": "Malloc0", 00:05:13.056 "aliases": [ 00:05:13.056 "de88393a-e541-4019-8d34-73833ab43a8a" 00:05:13.056 ], 00:05:13.056 "product_name": "Malloc disk", 00:05:13.056 "block_size": 512, 00:05:13.056 "num_blocks": 16384, 00:05:13.056 "uuid": "de88393a-e541-4019-8d34-73833ab43a8a", 00:05:13.056 "assigned_rate_limits": { 00:05:13.056 "rw_ios_per_sec": 0, 00:05:13.056 "rw_mbytes_per_sec": 0, 00:05:13.056 "r_mbytes_per_sec": 0, 00:05:13.056 "w_mbytes_per_sec": 0 00:05:13.056 }, 00:05:13.056 "claimed": true, 00:05:13.056 "claim_type": "exclusive_write", 00:05:13.056 "zoned": false, 00:05:13.056 "supported_io_types": { 00:05:13.056 "read": true, 00:05:13.056 "write": true, 00:05:13.056 "unmap": true, 00:05:13.056 "flush": true, 00:05:13.056 "reset": true, 00:05:13.056 "nvme_admin": false, 00:05:13.056 "nvme_io": false, 00:05:13.056 "nvme_io_md": false, 00:05:13.056 "write_zeroes": true, 00:05:13.056 "zcopy": true, 00:05:13.056 "get_zone_info": false, 00:05:13.056 "zone_management": false, 00:05:13.056 "zone_append": false, 00:05:13.056 "compare": false, 00:05:13.056 "compare_and_write": false, 00:05:13.056 "abort": true, 00:05:13.056 "seek_hole": false, 00:05:13.056 "seek_data": false, 00:05:13.056 "copy": true, 00:05:13.056 "nvme_iov_md": false 00:05:13.056 }, 00:05:13.056 "memory_domains": [ 00:05:13.056 { 00:05:13.056 "dma_device_id": "system", 00:05:13.056 "dma_device_type": 1 00:05:13.056 }, 00:05:13.056 { 00:05:13.056 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:13.056 "dma_device_type": 2 00:05:13.056 } 00:05:13.056 ], 00:05:13.056 "driver_specific": {} 00:05:13.056 }, 00:05:13.056 { 00:05:13.056 "name": "Passthru0", 00:05:13.056 "aliases": [ 00:05:13.056 "0f96fe16-be44-516e-80ab-018e4e341587" 00:05:13.056 ], 00:05:13.056 "product_name": "passthru", 00:05:13.056 "block_size": 512, 00:05:13.056 "num_blocks": 16384, 00:05:13.056 "uuid": "0f96fe16-be44-516e-80ab-018e4e341587", 00:05:13.056 "assigned_rate_limits": { 00:05:13.056 "rw_ios_per_sec": 0, 00:05:13.056 "rw_mbytes_per_sec": 0, 00:05:13.056 "r_mbytes_per_sec": 0, 00:05:13.056 "w_mbytes_per_sec": 0 00:05:13.056 }, 00:05:13.056 "claimed": false, 00:05:13.056 "zoned": false, 00:05:13.056 "supported_io_types": { 00:05:13.056 "read": true, 00:05:13.056 "write": true, 00:05:13.056 "unmap": true, 00:05:13.056 "flush": true, 00:05:13.056 "reset": true, 00:05:13.056 "nvme_admin": false, 00:05:13.056 "nvme_io": false, 00:05:13.056 "nvme_io_md": false, 00:05:13.056 "write_zeroes": true, 00:05:13.056 "zcopy": true, 00:05:13.056 "get_zone_info": false, 00:05:13.056 "zone_management": false, 00:05:13.056 "zone_append": false, 00:05:13.056 "compare": false, 00:05:13.056 "compare_and_write": false, 00:05:13.056 "abort": true, 00:05:13.056 "seek_hole": false, 00:05:13.056 "seek_data": false, 00:05:13.056 "copy": true, 00:05:13.056 "nvme_iov_md": false 00:05:13.056 
}, 00:05:13.056 "memory_domains": [ 00:05:13.056 { 00:05:13.056 "dma_device_id": "system", 00:05:13.056 "dma_device_type": 1 00:05:13.056 }, 00:05:13.056 { 00:05:13.056 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:13.056 "dma_device_type": 2 00:05:13.056 } 00:05:13.056 ], 00:05:13.056 "driver_specific": { 00:05:13.056 "passthru": { 00:05:13.056 "name": "Passthru0", 00:05:13.056 "base_bdev_name": "Malloc0" 00:05:13.056 } 00:05:13.056 } 00:05:13.056 } 00:05:13.056 ]' 00:05:13.056 18:00:05 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:13.056 18:00:05 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:13.056 18:00:05 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:13.056 18:00:05 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.056 18:00:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.056 18:00:05 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.056 18:00:05 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:13.056 18:00:05 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.056 18:00:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.056 18:00:05 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.056 18:00:05 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:13.056 18:00:05 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.056 18:00:05 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.056 18:00:05 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.056 18:00:05 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:13.056 18:00:05 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:13.056 18:00:06 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:13.056 00:05:13.056 real 0m0.281s 00:05:13.056 user 0m0.179s 00:05:13.056 sys 0m0.036s 00:05:13.056 18:00:06 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:13.056 18:00:06 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.056 ************************************ 00:05:13.056 END TEST rpc_integrity 00:05:13.056 ************************************ 00:05:13.056 18:00:06 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:13.056 18:00:06 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:13.056 18:00:06 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:13.056 18:00:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.056 ************************************ 00:05:13.056 START TEST rpc_plugins 00:05:13.056 ************************************ 00:05:13.056 18:00:06 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:05:13.056 18:00:06 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:13.056 18:00:06 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.056 18:00:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:13.056 18:00:06 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.056 18:00:06 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:13.056 18:00:06 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:13.056 18:00:06 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.056 18:00:06 rpc.rpc_plugins -- 
common/autotest_common.sh@10 -- # set +x 00:05:13.056 18:00:06 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.056 18:00:06 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:13.056 { 00:05:13.056 "name": "Malloc1", 00:05:13.056 "aliases": [ 00:05:13.056 "d4196269-bac8-4087-aea1-556e0a63dfbe" 00:05:13.056 ], 00:05:13.056 "product_name": "Malloc disk", 00:05:13.056 "block_size": 4096, 00:05:13.056 "num_blocks": 256, 00:05:13.056 "uuid": "d4196269-bac8-4087-aea1-556e0a63dfbe", 00:05:13.056 "assigned_rate_limits": { 00:05:13.056 "rw_ios_per_sec": 0, 00:05:13.056 "rw_mbytes_per_sec": 0, 00:05:13.056 "r_mbytes_per_sec": 0, 00:05:13.056 "w_mbytes_per_sec": 0 00:05:13.056 }, 00:05:13.056 "claimed": false, 00:05:13.056 "zoned": false, 00:05:13.056 "supported_io_types": { 00:05:13.056 "read": true, 00:05:13.056 "write": true, 00:05:13.056 "unmap": true, 00:05:13.056 "flush": true, 00:05:13.056 "reset": true, 00:05:13.056 "nvme_admin": false, 00:05:13.056 "nvme_io": false, 00:05:13.056 "nvme_io_md": false, 00:05:13.056 "write_zeroes": true, 00:05:13.056 "zcopy": true, 00:05:13.056 "get_zone_info": false, 00:05:13.056 "zone_management": false, 00:05:13.056 "zone_append": false, 00:05:13.056 "compare": false, 00:05:13.056 "compare_and_write": false, 00:05:13.056 "abort": true, 00:05:13.056 "seek_hole": false, 00:05:13.056 "seek_data": false, 00:05:13.056 "copy": true, 00:05:13.056 "nvme_iov_md": false 00:05:13.056 }, 00:05:13.056 "memory_domains": [ 00:05:13.056 { 00:05:13.056 "dma_device_id": "system", 00:05:13.056 "dma_device_type": 1 00:05:13.056 }, 00:05:13.056 { 00:05:13.056 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:13.056 "dma_device_type": 2 00:05:13.056 } 00:05:13.056 ], 00:05:13.056 "driver_specific": {} 00:05:13.056 } 00:05:13.056 ]' 00:05:13.056 18:00:06 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:13.314 18:00:06 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:13.314 18:00:06 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:13.314 18:00:06 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.314 18:00:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:13.314 18:00:06 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.314 18:00:06 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:13.314 18:00:06 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.314 18:00:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:13.314 18:00:06 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.314 18:00:06 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:13.314 18:00:06 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:13.314 18:00:06 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:13.314 00:05:13.314 real 0m0.137s 00:05:13.314 user 0m0.086s 00:05:13.314 sys 0m0.018s 00:05:13.314 18:00:06 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:13.314 18:00:06 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:13.314 ************************************ 00:05:13.314 END TEST rpc_plugins 00:05:13.314 ************************************ 00:05:13.314 18:00:06 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:13.314 18:00:06 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:13.314 18:00:06 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:13.314 18:00:06 
rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.314 ************************************ 00:05:13.314 START TEST rpc_trace_cmd_test 00:05:13.314 ************************************ 00:05:13.314 18:00:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:05:13.314 18:00:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:13.314 18:00:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:13.314 18:00:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.314 18:00:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:13.314 18:00:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.314 18:00:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:13.314 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3227057", 00:05:13.314 "tpoint_group_mask": "0x8", 00:05:13.314 "iscsi_conn": { 00:05:13.314 "mask": "0x2", 00:05:13.314 "tpoint_mask": "0x0" 00:05:13.314 }, 00:05:13.314 "scsi": { 00:05:13.314 "mask": "0x4", 00:05:13.314 "tpoint_mask": "0x0" 00:05:13.314 }, 00:05:13.314 "bdev": { 00:05:13.314 "mask": "0x8", 00:05:13.314 "tpoint_mask": "0xffffffffffffffff" 00:05:13.314 }, 00:05:13.314 "nvmf_rdma": { 00:05:13.314 "mask": "0x10", 00:05:13.314 "tpoint_mask": "0x0" 00:05:13.314 }, 00:05:13.314 "nvmf_tcp": { 00:05:13.314 "mask": "0x20", 00:05:13.314 "tpoint_mask": "0x0" 00:05:13.314 }, 00:05:13.314 "ftl": { 00:05:13.314 "mask": "0x40", 00:05:13.314 "tpoint_mask": "0x0" 00:05:13.314 }, 00:05:13.314 "blobfs": { 00:05:13.314 "mask": "0x80", 00:05:13.314 "tpoint_mask": "0x0" 00:05:13.314 }, 00:05:13.314 "dsa": { 00:05:13.314 "mask": "0x200", 00:05:13.314 "tpoint_mask": "0x0" 00:05:13.314 }, 00:05:13.314 "thread": { 00:05:13.314 "mask": "0x400", 00:05:13.314 "tpoint_mask": "0x0" 00:05:13.314 }, 00:05:13.314 "nvme_pcie": { 00:05:13.314 "mask": "0x800", 00:05:13.314 "tpoint_mask": "0x0" 00:05:13.314 }, 00:05:13.314 "iaa": { 00:05:13.314 "mask": "0x1000", 00:05:13.314 "tpoint_mask": "0x0" 00:05:13.314 }, 00:05:13.314 "nvme_tcp": { 00:05:13.314 "mask": "0x2000", 00:05:13.314 "tpoint_mask": "0x0" 00:05:13.314 }, 00:05:13.314 "bdev_nvme": { 00:05:13.314 "mask": "0x4000", 00:05:13.314 "tpoint_mask": "0x0" 00:05:13.314 }, 00:05:13.314 "sock": { 00:05:13.314 "mask": "0x8000", 00:05:13.314 "tpoint_mask": "0x0" 00:05:13.314 } 00:05:13.314 }' 00:05:13.314 18:00:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:13.314 18:00:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:13.314 18:00:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:13.314 18:00:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:13.314 18:00:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:13.571 18:00:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:13.571 18:00:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:13.571 18:00:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:13.571 18:00:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:13.571 18:00:06 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:13.571 00:05:13.571 real 0m0.225s 00:05:13.571 user 0m0.197s 00:05:13.571 sys 0m0.020s 00:05:13.571 18:00:06 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:13.571 18:00:06 
rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:13.571 ************************************ 00:05:13.571 END TEST rpc_trace_cmd_test 00:05:13.571 ************************************ 00:05:13.571 18:00:06 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:13.571 18:00:06 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:13.571 18:00:06 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:13.571 18:00:06 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:13.571 18:00:06 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:13.571 18:00:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.571 ************************************ 00:05:13.571 START TEST rpc_daemon_integrity 00:05:13.571 ************************************ 00:05:13.571 18:00:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:13.571 18:00:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:13.571 18:00:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.571 18:00:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.571 18:00:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.571 18:00:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:13.571 18:00:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:13.571 18:00:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:13.571 18:00:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:13.571 18:00:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.571 18:00:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.571 18:00:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.571 18:00:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:13.571 18:00:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:13.571 18:00:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.571 18:00:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.829 18:00:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.829 18:00:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:13.829 { 00:05:13.829 "name": "Malloc2", 00:05:13.829 "aliases": [ 00:05:13.829 "fa1e5a4b-b947-4750-a94d-820f13fb15b2" 00:05:13.829 ], 00:05:13.829 "product_name": "Malloc disk", 00:05:13.829 "block_size": 512, 00:05:13.829 "num_blocks": 16384, 00:05:13.829 "uuid": "fa1e5a4b-b947-4750-a94d-820f13fb15b2", 00:05:13.829 "assigned_rate_limits": { 00:05:13.829 "rw_ios_per_sec": 0, 00:05:13.829 "rw_mbytes_per_sec": 0, 00:05:13.829 "r_mbytes_per_sec": 0, 00:05:13.829 "w_mbytes_per_sec": 0 00:05:13.829 }, 00:05:13.829 "claimed": false, 00:05:13.829 "zoned": false, 00:05:13.829 "supported_io_types": { 00:05:13.829 "read": true, 00:05:13.829 "write": true, 00:05:13.829 "unmap": true, 00:05:13.829 "flush": true, 00:05:13.829 "reset": true, 00:05:13.829 "nvme_admin": false, 00:05:13.829 "nvme_io": false, 00:05:13.829 "nvme_io_md": false, 00:05:13.829 "write_zeroes": true, 00:05:13.829 "zcopy": true, 00:05:13.829 "get_zone_info": false, 00:05:13.829 "zone_management": false, 00:05:13.829 "zone_append": false, 00:05:13.829 "compare": false, 00:05:13.829 "compare_and_write": false, 
00:05:13.829 "abort": true, 00:05:13.829 "seek_hole": false, 00:05:13.829 "seek_data": false, 00:05:13.829 "copy": true, 00:05:13.829 "nvme_iov_md": false 00:05:13.829 }, 00:05:13.829 "memory_domains": [ 00:05:13.829 { 00:05:13.829 "dma_device_id": "system", 00:05:13.829 "dma_device_type": 1 00:05:13.829 }, 00:05:13.829 { 00:05:13.829 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:13.829 "dma_device_type": 2 00:05:13.829 } 00:05:13.829 ], 00:05:13.829 "driver_specific": {} 00:05:13.829 } 00:05:13.829 ]' 00:05:13.829 18:00:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:13.829 18:00:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:13.829 18:00:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:13.829 18:00:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.829 18:00:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.829 [2024-07-24 18:00:06.704590] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:13.829 [2024-07-24 18:00:06.704617] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:13.829 [2024-07-24 18:00:06.704629] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc7cac0 00:05:13.829 [2024-07-24 18:00:06.704636] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:13.829 [2024-07-24 18:00:06.705557] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:13.829 [2024-07-24 18:00:06.705578] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:13.829 Passthru0 00:05:13.829 18:00:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.829 18:00:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:13.829 18:00:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.829 18:00:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.829 18:00:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.829 18:00:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:13.829 { 00:05:13.829 "name": "Malloc2", 00:05:13.829 "aliases": [ 00:05:13.829 "fa1e5a4b-b947-4750-a94d-820f13fb15b2" 00:05:13.829 ], 00:05:13.829 "product_name": "Malloc disk", 00:05:13.829 "block_size": 512, 00:05:13.829 "num_blocks": 16384, 00:05:13.829 "uuid": "fa1e5a4b-b947-4750-a94d-820f13fb15b2", 00:05:13.829 "assigned_rate_limits": { 00:05:13.829 "rw_ios_per_sec": 0, 00:05:13.829 "rw_mbytes_per_sec": 0, 00:05:13.829 "r_mbytes_per_sec": 0, 00:05:13.829 "w_mbytes_per_sec": 0 00:05:13.829 }, 00:05:13.829 "claimed": true, 00:05:13.829 "claim_type": "exclusive_write", 00:05:13.829 "zoned": false, 00:05:13.829 "supported_io_types": { 00:05:13.829 "read": true, 00:05:13.829 "write": true, 00:05:13.829 "unmap": true, 00:05:13.829 "flush": true, 00:05:13.829 "reset": true, 00:05:13.829 "nvme_admin": false, 00:05:13.829 "nvme_io": false, 00:05:13.829 "nvme_io_md": false, 00:05:13.829 "write_zeroes": true, 00:05:13.829 "zcopy": true, 00:05:13.830 "get_zone_info": false, 00:05:13.830 "zone_management": false, 00:05:13.830 "zone_append": false, 00:05:13.830 "compare": false, 00:05:13.830 "compare_and_write": false, 00:05:13.830 "abort": true, 00:05:13.830 "seek_hole": false, 00:05:13.830 "seek_data": false, 00:05:13.830 "copy": true, 
00:05:13.830 "nvme_iov_md": false 00:05:13.830 }, 00:05:13.830 "memory_domains": [ 00:05:13.830 { 00:05:13.830 "dma_device_id": "system", 00:05:13.830 "dma_device_type": 1 00:05:13.830 }, 00:05:13.830 { 00:05:13.830 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:13.830 "dma_device_type": 2 00:05:13.830 } 00:05:13.830 ], 00:05:13.830 "driver_specific": {} 00:05:13.830 }, 00:05:13.830 { 00:05:13.830 "name": "Passthru0", 00:05:13.830 "aliases": [ 00:05:13.830 "a54e6c3b-1f65-5217-b060-6cf06458ab67" 00:05:13.830 ], 00:05:13.830 "product_name": "passthru", 00:05:13.830 "block_size": 512, 00:05:13.830 "num_blocks": 16384, 00:05:13.830 "uuid": "a54e6c3b-1f65-5217-b060-6cf06458ab67", 00:05:13.830 "assigned_rate_limits": { 00:05:13.830 "rw_ios_per_sec": 0, 00:05:13.830 "rw_mbytes_per_sec": 0, 00:05:13.830 "r_mbytes_per_sec": 0, 00:05:13.830 "w_mbytes_per_sec": 0 00:05:13.830 }, 00:05:13.830 "claimed": false, 00:05:13.830 "zoned": false, 00:05:13.830 "supported_io_types": { 00:05:13.830 "read": true, 00:05:13.830 "write": true, 00:05:13.830 "unmap": true, 00:05:13.830 "flush": true, 00:05:13.830 "reset": true, 00:05:13.830 "nvme_admin": false, 00:05:13.830 "nvme_io": false, 00:05:13.830 "nvme_io_md": false, 00:05:13.830 "write_zeroes": true, 00:05:13.830 "zcopy": true, 00:05:13.830 "get_zone_info": false, 00:05:13.830 "zone_management": false, 00:05:13.830 "zone_append": false, 00:05:13.830 "compare": false, 00:05:13.830 "compare_and_write": false, 00:05:13.830 "abort": true, 00:05:13.830 "seek_hole": false, 00:05:13.830 "seek_data": false, 00:05:13.830 "copy": true, 00:05:13.830 "nvme_iov_md": false 00:05:13.830 }, 00:05:13.830 "memory_domains": [ 00:05:13.830 { 00:05:13.830 "dma_device_id": "system", 00:05:13.830 "dma_device_type": 1 00:05:13.830 }, 00:05:13.830 { 00:05:13.830 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:13.830 "dma_device_type": 2 00:05:13.830 } 00:05:13.830 ], 00:05:13.830 "driver_specific": { 00:05:13.830 "passthru": { 00:05:13.830 "name": "Passthru0", 00:05:13.830 "base_bdev_name": "Malloc2" 00:05:13.830 } 00:05:13.830 } 00:05:13.830 } 00:05:13.830 ]' 00:05:13.830 18:00:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:13.830 18:00:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:13.830 18:00:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:13.830 18:00:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.830 18:00:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.830 18:00:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.830 18:00:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:13.830 18:00:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.830 18:00:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.830 18:00:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.830 18:00:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:13.830 18:00:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.830 18:00:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.830 18:00:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.830 18:00:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:13.830 18:00:06 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:13.830 18:00:06 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:13.830 00:05:13.830 real 0m0.271s 00:05:13.830 user 0m0.173s 00:05:13.830 sys 0m0.035s 00:05:13.830 18:00:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:13.830 18:00:06 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:13.830 ************************************ 00:05:13.830 END TEST rpc_daemon_integrity 00:05:13.830 ************************************ 00:05:13.830 18:00:06 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:13.830 18:00:06 rpc -- rpc/rpc.sh@84 -- # killprocess 3227057 00:05:13.830 18:00:06 rpc -- common/autotest_common.sh@950 -- # '[' -z 3227057 ']' 00:05:13.830 18:00:06 rpc -- common/autotest_common.sh@954 -- # kill -0 3227057 00:05:13.830 18:00:06 rpc -- common/autotest_common.sh@955 -- # uname 00:05:13.830 18:00:06 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:13.830 18:00:06 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3227057 00:05:14.088 18:00:06 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:14.088 18:00:06 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:14.088 18:00:06 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3227057' 00:05:14.088 killing process with pid 3227057 00:05:14.088 18:00:06 rpc -- common/autotest_common.sh@969 -- # kill 3227057 00:05:14.088 18:00:06 rpc -- common/autotest_common.sh@974 -- # wait 3227057 00:05:14.347 00:05:14.347 real 0m2.456s 00:05:14.347 user 0m3.190s 00:05:14.347 sys 0m0.649s 00:05:14.347 18:00:07 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:14.347 18:00:07 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.347 ************************************ 00:05:14.347 END TEST rpc 00:05:14.347 ************************************ 00:05:14.347 18:00:07 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:14.347 18:00:07 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:14.347 18:00:07 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:14.347 18:00:07 -- common/autotest_common.sh@10 -- # set +x 00:05:14.347 ************************************ 00:05:14.347 START TEST skip_rpc 00:05:14.347 ************************************ 00:05:14.347 18:00:07 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:14.347 * Looking for test storage... 
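The teardown above is the killprocess helper from common/autotest_common.sh; a simplified sketch of its core (the real helper also distinguishes sudo-owned targets via ps and uname, as the trace shows):

  killprocess() {
      local pid=$1
      kill -0 "$pid"                        # fails if the pid is already gone
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" || true                   # reap; a signalled target exits non-zero
  }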
00:05:14.347 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:14.347 18:00:07 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:14.347 18:00:07 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:14.347 18:00:07 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:14.347 18:00:07 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:14.347 18:00:07 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:14.347 18:00:07 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.347 ************************************ 00:05:14.347 START TEST skip_rpc 00:05:14.347 ************************************ 00:05:14.347 18:00:07 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:05:14.347 18:00:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3227695 00:05:14.347 18:00:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:14.347 18:00:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:14.347 18:00:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:14.605 [2024-07-24 18:00:07.477337] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 00:05:14.605 [2024-07-24 18:00:07.477376] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3227695 ] 00:05:14.605 EAL: No free 2048 kB hugepages reported on node 1 00:05:14.605 [2024-07-24 18:00:07.530486] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.605 [2024-07-24 18:00:07.603984] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.872 18:00:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:19.872 18:00:12 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:19.872 18:00:12 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:19.872 18:00:12 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:19.872 18:00:12 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:19.872 18:00:12 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:19.872 18:00:12 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:19.872 18:00:12 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:05:19.872 18:00:12 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:19.872 18:00:12 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.872 18:00:12 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:19.872 18:00:12 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:19.872 18:00:12 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:19.872 18:00:12 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:19.872 18:00:12 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:19.872 18:00:12 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:19.872 18:00:12 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3227695 00:05:19.872 18:00:12 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 3227695 ']' 00:05:19.872 18:00:12 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 3227695 00:05:19.872 18:00:12 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:05:19.872 18:00:12 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:19.872 18:00:12 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3227695 00:05:19.872 18:00:12 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:19.872 18:00:12 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:19.872 18:00:12 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3227695' 00:05:19.872 killing process with pid 3227695 00:05:19.872 18:00:12 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 3227695 00:05:19.872 18:00:12 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 3227695 00:05:19.872 00:05:19.872 real 0m5.370s 00:05:19.872 user 0m5.151s 00:05:19.872 sys 0m0.251s 00:05:19.872 18:00:12 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:19.872 18:00:12 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.872 ************************************ 00:05:19.872 END TEST skip_rpc 00:05:19.872 ************************************ 00:05:19.872 18:00:12 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:19.872 18:00:12 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:19.872 18:00:12 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:19.872 18:00:12 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.872 ************************************ 00:05:19.872 START TEST skip_rpc_with_json 00:05:19.872 ************************************ 00:05:19.872 18:00:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:05:19.872 18:00:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:19.872 18:00:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3229027 00:05:19.872 18:00:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:19.872 18:00:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:19.872 18:00:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3229027 00:05:19.872 18:00:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 3229027 ']' 00:05:19.872 18:00:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:19.872 18:00:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:19.872 18:00:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:19.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
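skip_rpc_with_json drives the target into a known state over RPC, snapshots it with save_config, and later cold-starts a second target from that snapshot. A sketch of the round-trip (config path taken from CONFIG_PATH above; the full JSON dump follows below):

  ./spdk/scripts/rpc.py nvmf_create_transport -t tcp                  # the state worth persisting
  ./spdk/scripts/rpc.py save_config > ./spdk/test/rpc/config.json     # snapshot of every subsystem
  ./spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json ./spdk/test/rpc/config.json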
00:05:19.872 18:00:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:19.872 18:00:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:19.872 [2024-07-24 18:00:12.915666] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 00:05:19.872 [2024-07-24 18:00:12.915707] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3229027 ] 00:05:19.872 EAL: No free 2048 kB hugepages reported on node 1 00:05:20.132 [2024-07-24 18:00:12.969613] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.132 [2024-07-24 18:00:13.048717] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.698 18:00:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:20.698 18:00:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:05:20.698 18:00:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:20.698 18:00:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.698 18:00:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:20.698 [2024-07-24 18:00:13.708846] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:20.698 request: 00:05:20.698 { 00:05:20.698 "trtype": "tcp", 00:05:20.698 "method": "nvmf_get_transports", 00:05:20.698 "req_id": 1 00:05:20.698 } 00:05:20.698 Got JSON-RPC error response 00:05:20.698 response: 00:05:20.698 { 00:05:20.698 "code": -19, 00:05:20.698 "message": "No such device" 00:05:20.698 } 00:05:20.698 18:00:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:20.698 18:00:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:20.698 18:00:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.698 18:00:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:20.698 [2024-07-24 18:00:13.716937] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:20.698 18:00:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.698 18:00:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:20.698 18:00:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.698 18:00:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:20.956 18:00:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.956 18:00:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:20.956 { 00:05:20.956 "subsystems": [ 00:05:20.956 { 00:05:20.956 "subsystem": "vfio_user_target", 00:05:20.956 "config": null 00:05:20.956 }, 00:05:20.956 { 00:05:20.956 "subsystem": "keyring", 00:05:20.956 "config": [] 00:05:20.956 }, 00:05:20.956 { 00:05:20.956 "subsystem": "iobuf", 00:05:20.956 "config": [ 00:05:20.956 { 00:05:20.956 "method": "iobuf_set_options", 00:05:20.956 "params": { 00:05:20.956 "small_pool_count": 8192, 00:05:20.956 "large_pool_count": 1024, 00:05:20.956 "small_bufsize": 8192, 00:05:20.956 "large_bufsize": 
135168 00:05:20.956 } 00:05:20.956 } 00:05:20.956 ] 00:05:20.956 }, 00:05:20.956 { 00:05:20.956 "subsystem": "sock", 00:05:20.956 "config": [ 00:05:20.956 { 00:05:20.956 "method": "sock_set_default_impl", 00:05:20.956 "params": { 00:05:20.956 "impl_name": "posix" 00:05:20.956 } 00:05:20.956 }, 00:05:20.956 { 00:05:20.956 "method": "sock_impl_set_options", 00:05:20.956 "params": { 00:05:20.956 "impl_name": "ssl", 00:05:20.956 "recv_buf_size": 4096, 00:05:20.956 "send_buf_size": 4096, 00:05:20.956 "enable_recv_pipe": true, 00:05:20.956 "enable_quickack": false, 00:05:20.956 "enable_placement_id": 0, 00:05:20.956 "enable_zerocopy_send_server": true, 00:05:20.956 "enable_zerocopy_send_client": false, 00:05:20.956 "zerocopy_threshold": 0, 00:05:20.956 "tls_version": 0, 00:05:20.956 "enable_ktls": false 00:05:20.956 } 00:05:20.956 }, 00:05:20.956 { 00:05:20.956 "method": "sock_impl_set_options", 00:05:20.956 "params": { 00:05:20.956 "impl_name": "posix", 00:05:20.956 "recv_buf_size": 2097152, 00:05:20.956 "send_buf_size": 2097152, 00:05:20.956 "enable_recv_pipe": true, 00:05:20.956 "enable_quickack": false, 00:05:20.956 "enable_placement_id": 0, 00:05:20.956 "enable_zerocopy_send_server": true, 00:05:20.956 "enable_zerocopy_send_client": false, 00:05:20.956 "zerocopy_threshold": 0, 00:05:20.956 "tls_version": 0, 00:05:20.956 "enable_ktls": false 00:05:20.956 } 00:05:20.956 } 00:05:20.956 ] 00:05:20.956 }, 00:05:20.956 { 00:05:20.956 "subsystem": "vmd", 00:05:20.956 "config": [] 00:05:20.956 }, 00:05:20.956 { 00:05:20.956 "subsystem": "accel", 00:05:20.956 "config": [ 00:05:20.956 { 00:05:20.956 "method": "accel_set_options", 00:05:20.956 "params": { 00:05:20.956 "small_cache_size": 128, 00:05:20.956 "large_cache_size": 16, 00:05:20.956 "task_count": 2048, 00:05:20.956 "sequence_count": 2048, 00:05:20.956 "buf_count": 2048 00:05:20.956 } 00:05:20.956 } 00:05:20.956 ] 00:05:20.956 }, 00:05:20.956 { 00:05:20.956 "subsystem": "bdev", 00:05:20.956 "config": [ 00:05:20.956 { 00:05:20.956 "method": "bdev_set_options", 00:05:20.956 "params": { 00:05:20.956 "bdev_io_pool_size": 65535, 00:05:20.956 "bdev_io_cache_size": 256, 00:05:20.956 "bdev_auto_examine": true, 00:05:20.956 "iobuf_small_cache_size": 128, 00:05:20.956 "iobuf_large_cache_size": 16 00:05:20.956 } 00:05:20.956 }, 00:05:20.956 { 00:05:20.956 "method": "bdev_raid_set_options", 00:05:20.956 "params": { 00:05:20.956 "process_window_size_kb": 1024, 00:05:20.956 "process_max_bandwidth_mb_sec": 0 00:05:20.956 } 00:05:20.956 }, 00:05:20.957 { 00:05:20.957 "method": "bdev_iscsi_set_options", 00:05:20.957 "params": { 00:05:20.957 "timeout_sec": 30 00:05:20.957 } 00:05:20.957 }, 00:05:20.957 { 00:05:20.957 "method": "bdev_nvme_set_options", 00:05:20.957 "params": { 00:05:20.957 "action_on_timeout": "none", 00:05:20.957 "timeout_us": 0, 00:05:20.957 "timeout_admin_us": 0, 00:05:20.957 "keep_alive_timeout_ms": 10000, 00:05:20.957 "arbitration_burst": 0, 00:05:20.957 "low_priority_weight": 0, 00:05:20.957 "medium_priority_weight": 0, 00:05:20.957 "high_priority_weight": 0, 00:05:20.957 "nvme_adminq_poll_period_us": 10000, 00:05:20.957 "nvme_ioq_poll_period_us": 0, 00:05:20.957 "io_queue_requests": 0, 00:05:20.957 "delay_cmd_submit": true, 00:05:20.957 "transport_retry_count": 4, 00:05:20.957 "bdev_retry_count": 3, 00:05:20.957 "transport_ack_timeout": 0, 00:05:20.957 "ctrlr_loss_timeout_sec": 0, 00:05:20.957 "reconnect_delay_sec": 0, 00:05:20.957 "fast_io_fail_timeout_sec": 0, 00:05:20.957 "disable_auto_failback": false, 00:05:20.957 "generate_uuids": 
false, 00:05:20.957 "transport_tos": 0, 00:05:20.957 "nvme_error_stat": false, 00:05:20.957 "rdma_srq_size": 0, 00:05:20.957 "io_path_stat": false, 00:05:20.957 "allow_accel_sequence": false, 00:05:20.957 "rdma_max_cq_size": 0, 00:05:20.957 "rdma_cm_event_timeout_ms": 0, 00:05:20.957 "dhchap_digests": [ 00:05:20.957 "sha256", 00:05:20.957 "sha384", 00:05:20.957 "sha512" 00:05:20.957 ], 00:05:20.957 "dhchap_dhgroups": [ 00:05:20.957 "null", 00:05:20.957 "ffdhe2048", 00:05:20.957 "ffdhe3072", 00:05:20.957 "ffdhe4096", 00:05:20.957 "ffdhe6144", 00:05:20.957 "ffdhe8192" 00:05:20.957 ] 00:05:20.957 } 00:05:20.957 }, 00:05:20.957 { 00:05:20.957 "method": "bdev_nvme_set_hotplug", 00:05:20.957 "params": { 00:05:20.957 "period_us": 100000, 00:05:20.957 "enable": false 00:05:20.957 } 00:05:20.957 }, 00:05:20.957 { 00:05:20.957 "method": "bdev_wait_for_examine" 00:05:20.957 } 00:05:20.957 ] 00:05:20.957 }, 00:05:20.957 { 00:05:20.957 "subsystem": "scsi", 00:05:20.957 "config": null 00:05:20.957 }, 00:05:20.957 { 00:05:20.957 "subsystem": "scheduler", 00:05:20.957 "config": [ 00:05:20.957 { 00:05:20.957 "method": "framework_set_scheduler", 00:05:20.957 "params": { 00:05:20.957 "name": "static" 00:05:20.957 } 00:05:20.957 } 00:05:20.957 ] 00:05:20.957 }, 00:05:20.957 { 00:05:20.957 "subsystem": "vhost_scsi", 00:05:20.957 "config": [] 00:05:20.957 }, 00:05:20.957 { 00:05:20.957 "subsystem": "vhost_blk", 00:05:20.957 "config": [] 00:05:20.957 }, 00:05:20.957 { 00:05:20.957 "subsystem": "ublk", 00:05:20.957 "config": [] 00:05:20.957 }, 00:05:20.957 { 00:05:20.957 "subsystem": "nbd", 00:05:20.957 "config": [] 00:05:20.957 }, 00:05:20.957 { 00:05:20.957 "subsystem": "nvmf", 00:05:20.957 "config": [ 00:05:20.957 { 00:05:20.957 "method": "nvmf_set_config", 00:05:20.957 "params": { 00:05:20.957 "discovery_filter": "match_any", 00:05:20.957 "admin_cmd_passthru": { 00:05:20.957 "identify_ctrlr": false 00:05:20.957 } 00:05:20.957 } 00:05:20.957 }, 00:05:20.957 { 00:05:20.957 "method": "nvmf_set_max_subsystems", 00:05:20.957 "params": { 00:05:20.957 "max_subsystems": 1024 00:05:20.957 } 00:05:20.957 }, 00:05:20.957 { 00:05:20.957 "method": "nvmf_set_crdt", 00:05:20.957 "params": { 00:05:20.957 "crdt1": 0, 00:05:20.957 "crdt2": 0, 00:05:20.957 "crdt3": 0 00:05:20.957 } 00:05:20.957 }, 00:05:20.957 { 00:05:20.957 "method": "nvmf_create_transport", 00:05:20.957 "params": { 00:05:20.957 "trtype": "TCP", 00:05:20.957 "max_queue_depth": 128, 00:05:20.957 "max_io_qpairs_per_ctrlr": 127, 00:05:20.957 "in_capsule_data_size": 4096, 00:05:20.957 "max_io_size": 131072, 00:05:20.957 "io_unit_size": 131072, 00:05:20.957 "max_aq_depth": 128, 00:05:20.957 "num_shared_buffers": 511, 00:05:20.957 "buf_cache_size": 4294967295, 00:05:20.957 "dif_insert_or_strip": false, 00:05:20.957 "zcopy": false, 00:05:20.957 "c2h_success": true, 00:05:20.957 "sock_priority": 0, 00:05:20.957 "abort_timeout_sec": 1, 00:05:20.957 "ack_timeout": 0, 00:05:20.957 "data_wr_pool_size": 0 00:05:20.957 } 00:05:20.957 } 00:05:20.957 ] 00:05:20.957 }, 00:05:20.957 { 00:05:20.957 "subsystem": "iscsi", 00:05:20.957 "config": [ 00:05:20.957 { 00:05:20.957 "method": "iscsi_set_options", 00:05:20.957 "params": { 00:05:20.957 "node_base": "iqn.2016-06.io.spdk", 00:05:20.957 "max_sessions": 128, 00:05:20.957 "max_connections_per_session": 2, 00:05:20.957 "max_queue_depth": 64, 00:05:20.957 "default_time2wait": 2, 00:05:20.957 "default_time2retain": 20, 00:05:20.957 "first_burst_length": 8192, 00:05:20.957 "immediate_data": true, 00:05:20.957 "allow_duplicated_isid": 
false, 00:05:20.957 "error_recovery_level": 0, 00:05:20.957 "nop_timeout": 60, 00:05:20.957 "nop_in_interval": 30, 00:05:20.957 "disable_chap": false, 00:05:20.957 "require_chap": false, 00:05:20.957 "mutual_chap": false, 00:05:20.957 "chap_group": 0, 00:05:20.957 "max_large_datain_per_connection": 64, 00:05:20.957 "max_r2t_per_connection": 4, 00:05:20.957 "pdu_pool_size": 36864, 00:05:20.957 "immediate_data_pool_size": 16384, 00:05:20.958 "data_out_pool_size": 2048 00:05:20.958 } 00:05:20.958 } 00:05:20.958 ] 00:05:20.958 } 00:05:20.958 ] 00:05:20.958 } 00:05:20.958 18:00:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:20.958 18:00:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3229027 00:05:20.958 18:00:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 3229027 ']' 00:05:20.958 18:00:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 3229027 00:05:20.958 18:00:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:20.958 18:00:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:20.958 18:00:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3229027 00:05:20.958 18:00:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:20.958 18:00:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:20.958 18:00:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3229027' 00:05:20.958 killing process with pid 3229027 00:05:20.958 18:00:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 3229027 00:05:20.958 18:00:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 3229027 00:05:21.216 18:00:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3229265 00:05:21.216 18:00:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:21.216 18:00:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:26.508 18:00:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3229265 00:05:26.508 18:00:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 3229265 ']' 00:05:26.508 18:00:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 3229265 00:05:26.508 18:00:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:26.508 18:00:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:26.508 18:00:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3229265 00:05:26.508 18:00:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:26.508 18:00:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:26.508 18:00:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3229265' 00:05:26.508 killing process with pid 3229265 00:05:26.508 18:00:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 3229265 00:05:26.508 18:00:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 
3229265 00:05:26.508 18:00:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:26.767 18:00:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:26.767 00:05:26.767 real 0m6.732s 00:05:26.767 user 0m6.580s 00:05:26.767 sys 0m0.567s 00:05:26.767 18:00:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:26.767 18:00:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:26.767 ************************************ 00:05:26.768 END TEST skip_rpc_with_json 00:05:26.768 ************************************ 00:05:26.768 18:00:19 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:26.768 18:00:19 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:26.768 18:00:19 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:26.768 18:00:19 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.768 ************************************ 00:05:26.768 START TEST skip_rpc_with_delay 00:05:26.768 ************************************ 00:05:26.768 18:00:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:05:26.768 18:00:19 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:26.768 18:00:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:05:26.768 18:00:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:26.768 18:00:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:26.768 18:00:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:26.768 18:00:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:26.768 18:00:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:26.768 18:00:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:26.768 18:00:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:26.768 18:00:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:26.768 18:00:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:26.768 18:00:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:26.768 [2024-07-24 18:00:19.723262] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
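skip_rpc_with_delay is a pure negative test: --wait-for-rpc parks the app until an RPC arrives, which is impossible with --no-rpc-server, so the launch must fail, as the ERROR above confirms. An equivalent plain-bash assertion:

  if ./spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
      echo "spdk_tgt accepted --wait-for-rpc without an RPC server" >&2
      exit 1
  fi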
00:05:26.768 [2024-07-24 18:00:19.723324] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:26.768 18:00:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:05:26.768 18:00:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:26.768 18:00:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:26.768 18:00:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:26.768 00:05:26.768 real 0m0.069s 00:05:26.768 user 0m0.040s 00:05:26.768 sys 0m0.028s 00:05:26.768 18:00:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:26.768 18:00:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:26.768 ************************************ 00:05:26.768 END TEST skip_rpc_with_delay 00:05:26.768 ************************************ 00:05:26.768 18:00:19 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:26.768 18:00:19 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:26.768 18:00:19 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:26.768 18:00:19 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:26.768 18:00:19 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:26.768 18:00:19 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.768 ************************************ 00:05:26.768 START TEST exit_on_failed_rpc_init 00:05:26.768 ************************************ 00:05:26.768 18:00:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:05:26.768 18:00:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3230236 00:05:26.768 18:00:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3230236 00:05:26.768 18:00:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:26.768 18:00:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 3230236 ']' 00:05:26.768 18:00:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.768 18:00:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:26.768 18:00:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:26.768 18:00:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:26.768 18:00:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:27.027 [2024-07-24 18:00:19.858549] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 
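exit_on_failed_rpc_init now starts a first target and blocks in waitforlisten (max_retries=100, per the trace above) until the RPC socket answers. The polling idea, roughly (rpc_get_methods is a standard SPDK RPC; the retry count and sleep interval here are illustrative, not the exact implementation):

# Poll until the target's RPC socket accepts requests -- a sketch of
# the waitforlisten behavior traced above.
waitforlisten_sketch() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1    # target died during startup
        if scripts/rpc.py -s "$sock" rpc_get_methods &> /dev/null; then
            return 0                              # socket is up and answering
        fi
        sleep 0.1                                 # illustrative back-off
    done
    return 1
}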
00:05:27.027 [2024-07-24 18:00:19.858589] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3230236 ] 00:05:27.027 EAL: No free 2048 kB hugepages reported on node 1 00:05:27.027 [2024-07-24 18:00:19.914440] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.027 [2024-07-24 18:00:19.989048] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.594 18:00:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:27.594 18:00:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:05:27.594 18:00:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:27.594 18:00:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:27.594 18:00:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:05:27.594 18:00:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:27.594 18:00:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:27.594 18:00:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:27.594 18:00:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:27.594 18:00:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:27.594 18:00:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:27.594 18:00:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:27.594 18:00:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:27.594 18:00:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:27.594 18:00:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:27.853 [2024-07-24 18:00:20.723041] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 
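The second target (-m 0x2) is launched through the same NOT machinery; the valid_exec_arg calls traced above first classify the argument with type -t and, for plain files, resolve it with type -P and require the execute bit. Reconstructed as a standalone sketch of exactly the logic in the trace:

# Sketch of the valid_exec_arg check seen above: accept shell functions
# and builtins directly, otherwise demand an executable file on disk.
valid_exec_arg() {
    local arg=$1
    case "$(type -t "$arg")" in
        function | builtin) return 0 ;;
        file) arg=$(type -P "$arg") && [[ -x $arg ]] ;;
        *) return 1 ;;
    esac
}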
00:05:27.853 [2024-07-24 18:00:20.723089] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3230395 ] 00:05:27.853 EAL: No free 2048 kB hugepages reported on node 1 00:05:27.853 [2024-07-24 18:00:20.778171] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.853 [2024-07-24 18:00:20.850433] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:27.853 [2024-07-24 18:00:20.850508] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:05:27.853 [2024-07-24 18:00:20.850517] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:27.853 [2024-07-24 18:00:20.850522] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:27.853 18:00:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:05:27.853 18:00:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:27.853 18:00:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:05:27.853 18:00:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:05:27.853 18:00:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:05:27.853 18:00:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:27.853 18:00:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:27.853 18:00:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3230236 00:05:27.853 18:00:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 3230236 ']' 00:05:27.853 18:00:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 3230236 00:05:27.853 18:00:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:05:27.853 18:00:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:27.853 18:00:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3230236 00:05:28.111 18:00:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:28.111 18:00:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:28.111 18:00:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3230236' 00:05:28.111 killing process with pid 3230236 00:05:28.111 18:00:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 3230236 00:05:28.111 18:00:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 3230236 00:05:28.369 00:05:28.369 real 0m1.471s 00:05:28.369 user 0m1.697s 00:05:28.369 sys 0m0.408s 00:05:28.369 18:00:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:28.369 18:00:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:28.369 ************************************ 00:05:28.369 END TEST exit_on_failed_rpc_init 00:05:28.369 ************************************ 00:05:28.369 18:00:21 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:28.369 00:05:28.369 real 0m14.018s 00:05:28.369 user 0m13.609s 00:05:28.369 sys 0m1.517s 00:05:28.369 18:00:21 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:28.369 18:00:21 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.369 ************************************ 00:05:28.369 END TEST skip_rpc 00:05:28.369 ************************************ 00:05:28.369 18:00:21 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:28.369 18:00:21 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:28.369 18:00:21 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:28.369 18:00:21 -- common/autotest_common.sh@10 -- # set +x 00:05:28.369 ************************************ 00:05:28.369 START TEST rpc_client 00:05:28.369 ************************************ 00:05:28.369 18:00:21 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:28.628 * Looking for test storage... 00:05:28.628 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:28.628 18:00:21 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:28.628 OK 00:05:28.628 18:00:21 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:28.628 00:05:28.628 real 0m0.113s 00:05:28.628 user 0m0.053s 00:05:28.628 sys 0m0.068s 00:05:28.628 18:00:21 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:28.628 18:00:21 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:28.628 ************************************ 00:05:28.628 END TEST rpc_client 00:05:28.628 ************************************ 00:05:28.628 18:00:21 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:28.628 18:00:21 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:28.628 18:00:21 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:28.628 18:00:21 -- common/autotest_common.sh@10 -- # set +x 00:05:28.628 ************************************ 00:05:28.628 START TEST json_config 00:05:28.628 ************************************ 00:05:28.628 18:00:21 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:28.628 18:00:21 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:28.628 18:00:21 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:28.628 18:00:21 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:28.628 18:00:21 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:28.628 18:00:21 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:28.628 18:00:21 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:28.628 18:00:21 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:28.628 18:00:21 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:28.628 18:00:21 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:28.628 18:00:21 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:28.628 18:00:21 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 
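With skip_rpc and rpc_client done, json_config begins by sourcing nvmf/common.sh, whose defaults fill the trace that follows. One step worth calling out is how the host NQN and host ID are derived from nvme-cli; the parameter expansion below is an illustrative equivalent, using the UUID from this run, not the literal nvmf/common.sh code:

# Derive the NVMe-oF host identity the way the nvmf/common.sh trace shows:
NVME_HOSTNQN=$(nvme gen-hostnqn)    # nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*:}     # keep only the UUID after the last ':'
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
echo "${NVME_HOST[@]}"              # e.g. --hostid=803833e2-2ada-e911-906e-0017a4403562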
00:05:28.628 18:00:21 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:28.628 18:00:21 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:05:28.628 18:00:21 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:05:28.628 18:00:21 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:28.628 18:00:21 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:28.628 18:00:21 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:28.628 18:00:21 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:28.628 18:00:21 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:28.628 18:00:21 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:28.628 18:00:21 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:28.628 18:00:21 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:28.629 18:00:21 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:28.629 18:00:21 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:28.629 18:00:21 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:28.629 18:00:21 json_config -- paths/export.sh@5 -- # export PATH 00:05:28.629 18:00:21 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:28.629 18:00:21 json_config -- nvmf/common.sh@47 -- # : 0 00:05:28.629 18:00:21 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:28.629 18:00:21 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:28.629 18:00:21 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:28.629 18:00:21 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:28.629 18:00:21 json_config -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:05:28.629 18:00:21 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:28.629 18:00:21 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:28.629 18:00:21 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:28.629 18:00:21 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:28.629 18:00:21 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:28.629 18:00:21 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:28.629 18:00:21 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:28.629 18:00:21 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:28.629 18:00:21 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:28.629 18:00:21 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:28.629 18:00:21 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:28.629 18:00:21 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:28.629 18:00:21 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:28.629 18:00:21 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:28.629 18:00:21 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:28.629 18:00:21 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:28.629 18:00:21 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:28.629 18:00:21 json_config -- json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:28.629 18:00:21 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:05:28.629 INFO: JSON configuration test init 00:05:28.629 18:00:21 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:05:28.629 18:00:21 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:05:28.629 18:00:21 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:28.629 18:00:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:28.629 18:00:21 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:05:28.629 18:00:21 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:28.629 18:00:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:28.629 18:00:21 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:05:28.629 18:00:21 json_config -- json_config/common.sh@9 -- # local app=target 00:05:28.629 18:00:21 json_config -- json_config/common.sh@10 -- # shift 00:05:28.629 18:00:21 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:28.629 18:00:21 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:28.629 18:00:21 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:28.629 18:00:21 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 
]] 00:05:28.629 18:00:21 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:28.629 18:00:21 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3230594 00:05:28.629 18:00:21 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:28.629 Waiting for target to run... 00:05:28.629 18:00:21 json_config -- json_config/common.sh@25 -- # waitforlisten 3230594 /var/tmp/spdk_tgt.sock 00:05:28.629 18:00:21 json_config -- common/autotest_common.sh@831 -- # '[' -z 3230594 ']' 00:05:28.629 18:00:21 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:28.629 18:00:21 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:28.629 18:00:21 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:28.629 18:00:21 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:28.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:28.629 18:00:21 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:28.629 18:00:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:28.888 [2024-07-24 18:00:21.726569] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 00:05:28.888 [2024-07-24 18:00:21.726617] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3230594 ] 00:05:28.888 EAL: No free 2048 kB hugepages reported on node 1 00:05:29.146 [2024-07-24 18:00:22.162754] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.405 [2024-07-24 18:00:22.254662] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.662 18:00:22 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:29.662 18:00:22 json_config -- common/autotest_common.sh@864 -- # return 0 00:05:29.662 18:00:22 json_config -- json_config/common.sh@26 -- # echo '' 00:05:29.662 00:05:29.662 18:00:22 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:05:29.662 18:00:22 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:05:29.662 18:00:22 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:29.662 18:00:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:29.662 18:00:22 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:05:29.662 18:00:22 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:05:29.662 18:00:22 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:29.662 18:00:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:29.662 18:00:22 json_config -- json_config/json_config.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:29.662 18:00:22 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:05:29.662 18:00:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:32.940 18:00:25 json_config -- json_config/json_config.sh@280 -- # 
tgt_check_notification_types 00:05:32.940 18:00:25 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:32.940 18:00:25 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:32.940 18:00:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:32.940 18:00:25 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:32.940 18:00:25 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:32.940 18:00:25 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:32.940 18:00:25 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:32.940 18:00:25 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:32.940 18:00:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:32.940 18:00:25 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:32.940 18:00:25 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:32.940 18:00:25 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:05:32.940 18:00:25 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:05:32.940 18:00:25 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:05:32.940 18:00:25 json_config -- json_config/json_config.sh@51 -- # sort 00:05:32.940 18:00:25 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:05:32.940 18:00:25 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:05:32.940 18:00:25 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:05:32.940 18:00:25 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:05:32.940 18:00:25 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:32.940 18:00:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:32.940 18:00:25 json_config -- json_config/json_config.sh@59 -- # return 0 00:05:32.940 18:00:25 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:32.940 18:00:25 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:32.940 18:00:25 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:05:32.940 18:00:25 json_config -- json_config/json_config.sh@294 -- # [[ 1 -eq 1 ]] 00:05:32.940 18:00:25 json_config -- json_config/json_config.sh@295 -- # create_nvmf_subsystem_config 00:05:32.940 18:00:25 json_config -- json_config/json_config.sh@234 -- # timing_enter create_nvmf_subsystem_config 00:05:32.940 18:00:25 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:32.940 18:00:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:32.940 18:00:25 json_config -- json_config/json_config.sh@236 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:32.940 18:00:25 json_config -- json_config/json_config.sh@237 -- # [[ tcp == \r\d\m\a ]] 00:05:32.940 18:00:25 json_config -- json_config/json_config.sh@241 -- # [[ -z 127.0.0.1 ]] 00:05:32.940 18:00:25 json_config -- json_config/json_config.sh@246 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:32.940 18:00:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:32.940 MallocForNvmf0 00:05:32.940 
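Every tgt_rpc call in this stretch is a thin wrapper that aims rpc.py at the target's socket; MallocForNvmf0 above and MallocForNvmf1 just below are the malloc bdevs that will back the NVMe-oF namespaces. The shape of the wrapper and the two calls, as a sketch (relative paths from the spdk checkout assumed):

# json_config/common.sh-style wrapper: every RPC goes to the same socket.
tgt_rpc() {
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock "$@"
}

tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0     # 8 MiB, 512 B blocks
tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1    # 4 MiB, 1 KiB blocks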
18:00:26 json_config -- json_config/json_config.sh@247 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:32.940 18:00:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:33.198 MallocForNvmf1 00:05:33.198 18:00:26 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:33.198 18:00:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:33.455 [2024-07-24 18:00:26.349376] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:33.455 18:00:26 json_config -- json_config/json_config.sh@250 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:33.455 18:00:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:33.711 18:00:26 json_config -- json_config/json_config.sh@251 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:33.711 18:00:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:33.711 18:00:26 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:33.711 18:00:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:33.968 18:00:26 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:33.968 18:00:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:33.968 [2024-07-24 18:00:27.007571] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:33.968 18:00:27 json_config -- json_config/json_config.sh@255 -- # timing_exit create_nvmf_subsystem_config 00:05:33.968 18:00:27 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:33.968 18:00:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:34.224 18:00:27 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:05:34.224 18:00:27 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:34.224 18:00:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:34.224 18:00:27 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:05:34.224 18:00:27 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:34.224 18:00:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:34.224 MallocBdevForConfigChangeCheck 00:05:34.224 18:00:27 json_config -- 
json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:05:34.224 18:00:27 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:34.224 18:00:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:34.224 18:00:27 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:05:34.224 18:00:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:34.788 18:00:27 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 00:05:34.788 INFO: shutting down applications... 00:05:34.788 18:00:27 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:05:34.788 18:00:27 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:05:34.788 18:00:27 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:05:34.788 18:00:27 json_config -- json_config/json_config.sh@337 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:36.690 Calling clear_iscsi_subsystem 00:05:36.690 Calling clear_nvmf_subsystem 00:05:36.690 Calling clear_nbd_subsystem 00:05:36.690 Calling clear_ublk_subsystem 00:05:36.690 Calling clear_vhost_blk_subsystem 00:05:36.690 Calling clear_vhost_scsi_subsystem 00:05:36.690 Calling clear_bdev_subsystem 00:05:36.690 18:00:29 json_config -- json_config/json_config.sh@341 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:36.690 18:00:29 json_config -- json_config/json_config.sh@347 -- # count=100 00:05:36.690 18:00:29 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:05:36.690 18:00:29 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:36.690 18:00:29 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:36.690 18:00:29 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:37.256 18:00:30 json_config -- json_config/json_config.sh@349 -- # break 00:05:37.256 18:00:30 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:05:37.256 18:00:30 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:05:37.256 18:00:30 json_config -- json_config/common.sh@31 -- # local app=target 00:05:37.256 18:00:30 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:37.256 18:00:30 json_config -- json_config/common.sh@35 -- # [[ -n 3230594 ]] 00:05:37.256 18:00:30 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3230594 00:05:37.256 18:00:30 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:37.256 18:00:30 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:37.256 18:00:30 json_config -- json_config/common.sh@41 -- # kill -0 3230594 00:05:37.256 18:00:30 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:37.514 18:00:30 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:37.514 18:00:30 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:37.514 18:00:30 json_config -- json_config/common.sh@41 -- # kill -0 3230594 00:05:37.514 18:00:30 json_config -- 
json_config/common.sh@42 -- # app_pid["$app"]= 00:05:37.514 18:00:30 json_config -- json_config/common.sh@43 -- # break 00:05:37.514 18:00:30 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:37.514 18:00:30 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:37.514 SPDK target shutdown done 00:05:37.514 18:00:30 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 00:05:37.514 INFO: relaunching applications... 00:05:37.514 18:00:30 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:37.514 18:00:30 json_config -- json_config/common.sh@9 -- # local app=target 00:05:37.514 18:00:30 json_config -- json_config/common.sh@10 -- # shift 00:05:37.514 18:00:30 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:37.514 18:00:30 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:37.514 18:00:30 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:37.514 18:00:30 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:37.514 18:00:30 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:37.514 18:00:30 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3232336 00:05:37.514 18:00:30 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:37.514 Waiting for target to run... 00:05:37.514 18:00:30 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:37.514 18:00:30 json_config -- json_config/common.sh@25 -- # waitforlisten 3232336 /var/tmp/spdk_tgt.sock 00:05:37.514 18:00:30 json_config -- common/autotest_common.sh@831 -- # '[' -z 3232336 ']' 00:05:37.514 18:00:30 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:37.514 18:00:30 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:37.514 18:00:30 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:37.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:37.514 18:00:30 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:37.514 18:00:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:37.772 [2024-07-24 18:00:30.638362] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 
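The shutdown sequence just traced is json_config/common.sh's standard pattern: send SIGINT, then make up to 30 half-second kill -0 checks before declaring 'SPDK target shutdown done'. As a self-contained sketch:

# Send SIGINT and wait up to ~15s for the target to exit, as traced above.
json_config_test_shutdown_app_sketch() {
    local pid=$1 i
    kill -SIGINT "$pid"
    for ((i = 0; i < 30; i++)); do
        if ! kill -0 "$pid" 2>/dev/null; then
            echo 'SPDK target shutdown done'
            return 0
        fi
        sleep 0.5
    done
    return 1    # still running; the real helper treats this as an error
}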
00:05:37.772 [2024-07-24 18:00:30.638419] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3232336 ] 00:05:37.772 EAL: No free 2048 kB hugepages reported on node 1 00:05:38.029 [2024-07-24 18:00:31.075057] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.286 [2024-07-24 18:00:31.162398] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.666 [2024-07-24 18:00:34.179533] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:41.666 [2024-07-24 18:00:34.211848] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:41.923 18:00:34 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:41.923 18:00:34 json_config -- common/autotest_common.sh@864 -- # return 0 00:05:41.923 18:00:34 json_config -- json_config/common.sh@26 -- # echo '' 00:05:41.923 00:05:41.923 18:00:34 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:05:41.923 18:00:34 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:41.923 INFO: Checking if target configuration is the same... 00:05:41.923 18:00:34 json_config -- json_config/json_config.sh@382 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:41.923 18:00:34 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:05:41.923 18:00:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:41.923 + '[' 2 -ne 2 ']' 00:05:41.923 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:41.923 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:41.923 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:41.923 +++ basename /dev/fd/62 00:05:41.923 ++ mktemp /tmp/62.XXX 00:05:41.923 + tmp_file_1=/tmp/62.2ti 00:05:41.923 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:41.923 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:41.923 + tmp_file_2=/tmp/spdk_tgt_config.json.93r 00:05:41.923 + ret=0 00:05:41.923 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:42.180 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:42.180 + diff -u /tmp/62.2ti /tmp/spdk_tgt_config.json.93r 00:05:42.180 + echo 'INFO: JSON config files are the same' 00:05:42.180 INFO: JSON config files are the same 00:05:42.180 + rm /tmp/62.2ti /tmp/spdk_tgt_config.json.93r 00:05:42.180 + exit 0 00:05:42.180 18:00:35 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:05:42.180 18:00:35 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:42.180 INFO: changing configuration and checking if this can be detected... 
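The comparison that just passed is test/json_config/json_diff.sh: both the live save_config output and the saved spdk_tgt_config.json are normalized with config_filter.py -method sort so that ordering never counts as a difference, then diffed. Its core, sketched (mktemp names vary per run, e.g. /tmp/62.2ti above):

# Normalize both configs, then compare -- the heart of json_diff.sh.
live=$(mktemp) saved=$(mktemp)
scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
    | test/json_config/config_filter.py -method sort > "$live"
test/json_config/config_filter.py -method sort < spdk_tgt_config.json > "$saved"
if diff -u "$live" "$saved"; then
    echo 'INFO: JSON config files are the same'
fi
rm "$live" "$saved"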
00:05:42.180 18:00:35 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:42.180 18:00:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:42.436 18:00:35 json_config -- json_config/json_config.sh@391 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:42.436 18:00:35 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:05:42.436 18:00:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:42.436 + '[' 2 -ne 2 ']' 00:05:42.436 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:42.436 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:42.436 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:42.436 +++ basename /dev/fd/62 00:05:42.436 ++ mktemp /tmp/62.XXX 00:05:42.436 + tmp_file_1=/tmp/62.zBX 00:05:42.437 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:42.437 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:42.437 + tmp_file_2=/tmp/spdk_tgt_config.json.DS1 00:05:42.437 + ret=0 00:05:42.437 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:42.695 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:42.695 + diff -u /tmp/62.zBX /tmp/spdk_tgt_config.json.DS1 00:05:42.695 + ret=1 00:05:42.695 + echo '=== Start of file: /tmp/62.zBX ===' 00:05:42.695 + cat /tmp/62.zBX 00:05:42.695 + echo '=== End of file: /tmp/62.zBX ===' 00:05:42.695 + echo '' 00:05:42.695 + echo '=== Start of file: /tmp/spdk_tgt_config.json.DS1 ===' 00:05:42.695 + cat /tmp/spdk_tgt_config.json.DS1 00:05:42.695 + echo '=== End of file: /tmp/spdk_tgt_config.json.DS1 ===' 00:05:42.695 + echo '' 00:05:42.695 + rm /tmp/62.zBX /tmp/spdk_tgt_config.json.DS1 00:05:42.695 + exit 1 00:05:42.695 18:00:35 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:05:42.695 INFO: configuration change detected. 
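The change-detection leg works by construction: MallocBdevForConfigChangeCheck was created earlier precisely so it could be deleted here, guaranteeing the re-run diff exits non-zero. Reusing the tgt_rpc and temp-file sketches above, the negative path is just the inverted exit check:

# Delete the sentinel bdev and re-run the same comparison; this time a
# non-zero diff status is the expected outcome (json_diff.sh rebuilds
# its temp files before diffing).
tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck
scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
    | test/json_config/config_filter.py -method sort > "$live"
if ! diff -u "$live" "$saved"; then
    echo 'INFO: configuration change detected.'
fi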
00:05:42.695 18:00:35 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:05:42.695 18:00:35 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:05:42.695 18:00:35 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:42.695 18:00:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:42.695 18:00:35 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:05:42.695 18:00:35 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:05:42.695 18:00:35 json_config -- json_config/json_config.sh@321 -- # [[ -n 3232336 ]] 00:05:42.695 18:00:35 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:05:42.695 18:00:35 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:05:42.695 18:00:35 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:42.695 18:00:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:42.695 18:00:35 json_config -- json_config/json_config.sh@190 -- # [[ 0 -eq 1 ]] 00:05:42.695 18:00:35 json_config -- json_config/json_config.sh@197 -- # uname -s 00:05:42.695 18:00:35 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:05:42.695 18:00:35 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:05:42.695 18:00:35 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:05:42.695 18:00:35 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:05:42.695 18:00:35 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:42.695 18:00:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:42.695 18:00:35 json_config -- json_config/json_config.sh@327 -- # killprocess 3232336 00:05:42.695 18:00:35 json_config -- common/autotest_common.sh@950 -- # '[' -z 3232336 ']' 00:05:42.695 18:00:35 json_config -- common/autotest_common.sh@954 -- # kill -0 3232336 00:05:42.695 18:00:35 json_config -- common/autotest_common.sh@955 -- # uname 00:05:42.695 18:00:35 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:42.695 18:00:35 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3232336 00:05:42.695 18:00:35 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:42.695 18:00:35 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:42.695 18:00:35 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3232336' 00:05:42.695 killing process with pid 3232336 00:05:42.695 18:00:35 json_config -- common/autotest_common.sh@969 -- # kill 3232336 00:05:42.695 18:00:35 json_config -- common/autotest_common.sh@974 -- # wait 3232336 00:05:45.227 18:00:37 json_config -- json_config/json_config.sh@330 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:45.227 18:00:37 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:05:45.227 18:00:37 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:45.227 18:00:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:45.227 18:00:37 json_config -- json_config/json_config.sh@332 -- # return 0 00:05:45.227 18:00:37 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:05:45.227 INFO: Success 00:05:45.227 00:05:45.227 real 0m16.320s 
00:05:45.227 user 0m16.834s 00:05:45.227 sys 0m2.049s 00:05:45.227 18:00:37 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:45.227 18:00:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:45.227 ************************************ 00:05:45.227 END TEST json_config 00:05:45.227 ************************************ 00:05:45.227 18:00:37 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:45.227 18:00:37 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:45.227 18:00:37 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:45.227 18:00:37 -- common/autotest_common.sh@10 -- # set +x 00:05:45.227 ************************************ 00:05:45.227 START TEST json_config_extra_key 00:05:45.227 ************************************ 00:05:45.227 18:00:37 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:45.227 18:00:38 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:45.227 18:00:38 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:45.227 18:00:38 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:45.227 18:00:38 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:45.227 18:00:38 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:45.227 18:00:38 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:45.227 18:00:38 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:45.227 18:00:38 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:45.227 18:00:38 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:45.227 18:00:38 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:45.227 18:00:38 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:45.227 18:00:38 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:45.227 18:00:38 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:05:45.227 18:00:38 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:05:45.227 18:00:38 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:45.227 18:00:38 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:45.227 18:00:38 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:45.227 18:00:38 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:45.227 18:00:38 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:45.227 18:00:38 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:45.227 18:00:38 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:45.227 18:00:38 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:45.227 18:00:38 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.227 18:00:38 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.227 18:00:38 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.227 18:00:38 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:45.227 18:00:38 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.227 18:00:38 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:45.227 18:00:38 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:45.227 18:00:38 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:45.227 18:00:38 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:45.228 18:00:38 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:45.228 18:00:38 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:45.228 18:00:38 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:45.228 18:00:38 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:45.228 18:00:38 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:45.228 18:00:38 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:45.228 18:00:38 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:45.228 18:00:38 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:45.228 18:00:38 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:45.228 18:00:38 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:45.228 18:00:38 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:45.228 18:00:38 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:45.228 18:00:38 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:45.228 18:00:38 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:45.228 18:00:38 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:45.228 18:00:38 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:45.228 INFO: launching applications... 00:05:45.228 18:00:38 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:45.228 18:00:38 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:45.228 18:00:38 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:45.228 18:00:38 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:45.228 18:00:38 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:45.228 18:00:38 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:45.228 18:00:38 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:45.228 18:00:38 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:45.228 18:00:38 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3233614 00:05:45.228 18:00:38 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:45.228 18:00:38 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:45.228 Waiting for target to run... 00:05:45.228 18:00:38 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3233614 /var/tmp/spdk_tgt.sock 00:05:45.228 18:00:38 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 3233614 ']' 00:05:45.228 18:00:38 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:45.228 18:00:38 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:45.228 18:00:38 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:45.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:45.228 18:00:38 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:45.228 18:00:38 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:45.228 [2024-07-24 18:00:38.069087] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 
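Unlike the main json_config run, the extra_key variant never drives RPC setup: it boots spdk_tgt straight from test/json_config/extra_key.json and only checks clean startup and shutdown. The launch traced above reduces to:

# Boot the target directly from a JSON config file, per the trace:
# -m 0x1 = core 0 only, -s 1024 = 1024 MB memory cap,
# -r = RPC socket the harness will poll, --json = startup config.
./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
    --json test/json_config/extra_key.json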
00:05:45.228 [2024-07-24 18:00:38.069138] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3233614 ] 00:05:45.228 EAL: No free 2048 kB hugepages reported on node 1 00:05:45.490 [2024-07-24 18:00:38.331019] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.490 [2024-07-24 18:00:38.400431] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.056 18:00:38 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:46.056 18:00:38 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:05:46.056 18:00:38 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:46.056 00:05:46.056 18:00:38 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:46.056 INFO: shutting down applications... 00:05:46.056 18:00:38 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:46.056 18:00:38 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:46.056 18:00:38 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:46.056 18:00:38 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3233614 ]] 00:05:46.056 18:00:38 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3233614 00:05:46.056 18:00:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:46.056 18:00:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:46.056 18:00:38 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3233614 00:05:46.056 18:00:38 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:46.314 18:00:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:46.314 18:00:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:46.314 18:00:39 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3233614 00:05:46.314 18:00:39 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:46.314 18:00:39 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:46.314 18:00:39 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:46.314 18:00:39 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:46.314 SPDK target shutdown done 00:05:46.314 18:00:39 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:46.314 Success 00:05:46.314 00:05:46.314 real 0m1.428s 00:05:46.314 user 0m1.231s 00:05:46.314 sys 0m0.340s 00:05:46.314 18:00:39 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:46.314 18:00:39 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:46.314 ************************************ 00:05:46.314 END TEST json_config_extra_key 00:05:46.314 ************************************ 00:05:46.572 18:00:39 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:46.572 18:00:39 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:46.572 18:00:39 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:46.572 18:00:39 -- common/autotest_common.sh@10 -- # set +x 00:05:46.572 
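The START TEST / END TEST banners and the real/user/sys summaries that punctuate this log all come from run_test in autotest_common.sh, which times a named suite and frames its output. A simplified sketch of that wrapper's visible behavior (the real one also manages xtrace and timing hooks):

# Frame and time a test suite -- a sketch of run_test's visible behavior.
run_test_sketch() {
    local name=$1; shift
    echo '************************************'
    echo "START TEST $name"
    echo '************************************'
    time "$@"
    local rc=$?
    echo '************************************'
    echo "END TEST $name"
    echo '************************************'
    return "$rc"
}

run_test_sketch alias_rpc test/json_config/alias_rpc/alias_rpc.sh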
************************************ 00:05:46.572 START TEST alias_rpc 00:05:46.572 ************************************ 00:05:46.572 18:00:39 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:46.572 * Looking for test storage... 00:05:46.572 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:46.572 18:00:39 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:46.572 18:00:39 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3233916 00:05:46.572 18:00:39 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:46.572 18:00:39 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3233916 00:05:46.572 18:00:39 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 3233916 ']' 00:05:46.572 18:00:39 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.572 18:00:39 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:46.572 18:00:39 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.572 18:00:39 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:46.572 18:00:39 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.572 [2024-07-24 18:00:39.570339] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 00:05:46.572 [2024-07-24 18:00:39.570383] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3233916 ] 00:05:46.572 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.572 [2024-07-24 18:00:39.621311] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.830 [2024-07-24 18:00:39.695781] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.395 18:00:40 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:47.395 18:00:40 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:47.395 18:00:40 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:47.651 18:00:40 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3233916 00:05:47.651 18:00:40 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 3233916 ']' 00:05:47.651 18:00:40 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 3233916 00:05:47.651 18:00:40 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:05:47.651 18:00:40 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:47.651 18:00:40 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3233916 00:05:47.651 18:00:40 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:47.651 18:00:40 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:47.651 18:00:40 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3233916' 00:05:47.651 killing process with pid 3233916 00:05:47.651 18:00:40 alias_rpc -- common/autotest_common.sh@969 -- # kill 3233916 00:05:47.651 18:00:40 
alias_rpc -- common/autotest_common.sh@974 -- # wait 3233916 00:05:47.909 00:05:47.909 real 0m1.474s 00:05:47.909 user 0m1.616s 00:05:47.909 sys 0m0.387s 00:05:47.909 18:00:40 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:47.909 18:00:40 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.909 ************************************ 00:05:47.909 END TEST alias_rpc 00:05:47.909 ************************************ 00:05:47.909 18:00:40 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:47.909 18:00:40 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:47.909 18:00:40 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:47.909 18:00:40 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:47.909 18:00:40 -- common/autotest_common.sh@10 -- # set +x 00:05:47.909 ************************************ 00:05:47.909 START TEST spdkcli_tcp 00:05:47.909 ************************************ 00:05:47.909 18:00:40 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:48.167 * Looking for test storage... 00:05:48.167 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:48.167 18:00:41 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:48.167 18:00:41 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:48.167 18:00:41 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:48.167 18:00:41 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:48.167 18:00:41 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:48.167 18:00:41 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:48.167 18:00:41 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:48.167 18:00:41 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:48.167 18:00:41 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:48.167 18:00:41 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3234335 00:05:48.167 18:00:41 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3234335 00:05:48.167 18:00:41 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:48.167 18:00:41 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 3234335 ']' 00:05:48.167 18:00:41 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.167 18:00:41 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:48.167 18:00:41 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.167 18:00:41 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:48.167 18:00:41 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:48.167 [2024-07-24 18:00:41.128743] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 
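The tcp.sh run below bridges TCP port 9998 to the target's UNIX-domain socket with socat and then issues RPCs over TCP. A minimal sketch of that bridge, using the same addresses the test declares (IP_ADDRESS=127.0.0.1, PORT=9998) and the retry/timeout flags visible in the log:

    # Forward TCP 127.0.0.1:9998 to the spdk_tgt UNIX-domain RPC socket
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!

    # Query the method list over TCP (-r retries, -t timeout, as in the test)
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

    kill $socat_pid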
00:05:48.167 [2024-07-24 18:00:41.128792] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3234335 ] 00:05:48.167 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.167 [2024-07-24 18:00:41.185120] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:48.424 [2024-07-24 18:00:41.259036] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:48.424 [2024-07-24 18:00:41.259039] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.988 18:00:41 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:48.988 18:00:41 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:05:48.989 18:00:41 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:48.989 18:00:41 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3234418 00:05:48.989 18:00:41 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:48.989 [ 00:05:48.989 "bdev_malloc_delete", 00:05:48.989 "bdev_malloc_create", 00:05:48.989 "bdev_null_resize", 00:05:48.989 "bdev_null_delete", 00:05:48.989 "bdev_null_create", 00:05:48.989 "bdev_nvme_cuse_unregister", 00:05:48.989 "bdev_nvme_cuse_register", 00:05:48.989 "bdev_opal_new_user", 00:05:48.989 "bdev_opal_set_lock_state", 00:05:48.989 "bdev_opal_delete", 00:05:48.989 "bdev_opal_get_info", 00:05:48.989 "bdev_opal_create", 00:05:48.989 "bdev_nvme_opal_revert", 00:05:48.989 "bdev_nvme_opal_init", 00:05:48.989 "bdev_nvme_send_cmd", 00:05:48.989 "bdev_nvme_get_path_iostat", 00:05:48.989 "bdev_nvme_get_mdns_discovery_info", 00:05:48.989 "bdev_nvme_stop_mdns_discovery", 00:05:48.989 "bdev_nvme_start_mdns_discovery", 00:05:48.989 "bdev_nvme_set_multipath_policy", 00:05:48.989 "bdev_nvme_set_preferred_path", 00:05:48.989 "bdev_nvme_get_io_paths", 00:05:48.989 "bdev_nvme_remove_error_injection", 00:05:48.989 "bdev_nvme_add_error_injection", 00:05:48.989 "bdev_nvme_get_discovery_info", 00:05:48.989 "bdev_nvme_stop_discovery", 00:05:48.989 "bdev_nvme_start_discovery", 00:05:48.989 "bdev_nvme_get_controller_health_info", 00:05:48.989 "bdev_nvme_disable_controller", 00:05:48.989 "bdev_nvme_enable_controller", 00:05:48.989 "bdev_nvme_reset_controller", 00:05:48.989 "bdev_nvme_get_transport_statistics", 00:05:48.989 "bdev_nvme_apply_firmware", 00:05:48.989 "bdev_nvme_detach_controller", 00:05:48.989 "bdev_nvme_get_controllers", 00:05:48.989 "bdev_nvme_attach_controller", 00:05:48.989 "bdev_nvme_set_hotplug", 00:05:48.989 "bdev_nvme_set_options", 00:05:48.989 "bdev_passthru_delete", 00:05:48.989 "bdev_passthru_create", 00:05:48.989 "bdev_lvol_set_parent_bdev", 00:05:48.989 "bdev_lvol_set_parent", 00:05:48.989 "bdev_lvol_check_shallow_copy", 00:05:48.989 "bdev_lvol_start_shallow_copy", 00:05:48.989 "bdev_lvol_grow_lvstore", 00:05:48.989 "bdev_lvol_get_lvols", 00:05:48.989 "bdev_lvol_get_lvstores", 00:05:48.989 "bdev_lvol_delete", 00:05:48.989 "bdev_lvol_set_read_only", 00:05:48.989 "bdev_lvol_resize", 00:05:48.989 "bdev_lvol_decouple_parent", 00:05:48.989 "bdev_lvol_inflate", 00:05:48.989 "bdev_lvol_rename", 00:05:48.989 "bdev_lvol_clone_bdev", 00:05:48.989 "bdev_lvol_clone", 00:05:48.989 "bdev_lvol_snapshot", 00:05:48.989 "bdev_lvol_create", 00:05:48.989 "bdev_lvol_delete_lvstore", 00:05:48.989 
"bdev_lvol_rename_lvstore", 00:05:48.989 "bdev_lvol_create_lvstore", 00:05:48.989 "bdev_raid_set_options", 00:05:48.989 "bdev_raid_remove_base_bdev", 00:05:48.989 "bdev_raid_add_base_bdev", 00:05:48.989 "bdev_raid_delete", 00:05:48.989 "bdev_raid_create", 00:05:48.989 "bdev_raid_get_bdevs", 00:05:48.989 "bdev_error_inject_error", 00:05:48.989 "bdev_error_delete", 00:05:48.989 "bdev_error_create", 00:05:48.989 "bdev_split_delete", 00:05:48.989 "bdev_split_create", 00:05:48.989 "bdev_delay_delete", 00:05:48.989 "bdev_delay_create", 00:05:48.989 "bdev_delay_update_latency", 00:05:48.989 "bdev_zone_block_delete", 00:05:48.989 "bdev_zone_block_create", 00:05:48.989 "blobfs_create", 00:05:48.989 "blobfs_detect", 00:05:48.989 "blobfs_set_cache_size", 00:05:48.989 "bdev_aio_delete", 00:05:48.989 "bdev_aio_rescan", 00:05:48.989 "bdev_aio_create", 00:05:48.989 "bdev_ftl_set_property", 00:05:48.989 "bdev_ftl_get_properties", 00:05:48.989 "bdev_ftl_get_stats", 00:05:48.989 "bdev_ftl_unmap", 00:05:48.989 "bdev_ftl_unload", 00:05:48.989 "bdev_ftl_delete", 00:05:48.989 "bdev_ftl_load", 00:05:48.989 "bdev_ftl_create", 00:05:48.989 "bdev_virtio_attach_controller", 00:05:48.989 "bdev_virtio_scsi_get_devices", 00:05:48.989 "bdev_virtio_detach_controller", 00:05:48.989 "bdev_virtio_blk_set_hotplug", 00:05:48.989 "bdev_iscsi_delete", 00:05:48.989 "bdev_iscsi_create", 00:05:48.989 "bdev_iscsi_set_options", 00:05:48.989 "accel_error_inject_error", 00:05:48.989 "ioat_scan_accel_module", 00:05:48.989 "dsa_scan_accel_module", 00:05:48.989 "iaa_scan_accel_module", 00:05:48.989 "vfu_virtio_create_scsi_endpoint", 00:05:48.989 "vfu_virtio_scsi_remove_target", 00:05:48.989 "vfu_virtio_scsi_add_target", 00:05:48.989 "vfu_virtio_create_blk_endpoint", 00:05:48.989 "vfu_virtio_delete_endpoint", 00:05:48.989 "keyring_file_remove_key", 00:05:48.989 "keyring_file_add_key", 00:05:48.989 "keyring_linux_set_options", 00:05:48.989 "iscsi_get_histogram", 00:05:48.989 "iscsi_enable_histogram", 00:05:48.989 "iscsi_set_options", 00:05:48.989 "iscsi_get_auth_groups", 00:05:48.989 "iscsi_auth_group_remove_secret", 00:05:48.989 "iscsi_auth_group_add_secret", 00:05:48.989 "iscsi_delete_auth_group", 00:05:48.989 "iscsi_create_auth_group", 00:05:48.989 "iscsi_set_discovery_auth", 00:05:48.989 "iscsi_get_options", 00:05:48.989 "iscsi_target_node_request_logout", 00:05:48.989 "iscsi_target_node_set_redirect", 00:05:48.989 "iscsi_target_node_set_auth", 00:05:48.989 "iscsi_target_node_add_lun", 00:05:48.989 "iscsi_get_stats", 00:05:48.989 "iscsi_get_connections", 00:05:48.989 "iscsi_portal_group_set_auth", 00:05:48.989 "iscsi_start_portal_group", 00:05:48.989 "iscsi_delete_portal_group", 00:05:48.989 "iscsi_create_portal_group", 00:05:48.989 "iscsi_get_portal_groups", 00:05:48.989 "iscsi_delete_target_node", 00:05:48.989 "iscsi_target_node_remove_pg_ig_maps", 00:05:48.989 "iscsi_target_node_add_pg_ig_maps", 00:05:48.989 "iscsi_create_target_node", 00:05:48.989 "iscsi_get_target_nodes", 00:05:48.989 "iscsi_delete_initiator_group", 00:05:48.989 "iscsi_initiator_group_remove_initiators", 00:05:48.989 "iscsi_initiator_group_add_initiators", 00:05:48.989 "iscsi_create_initiator_group", 00:05:48.989 "iscsi_get_initiator_groups", 00:05:48.989 "nvmf_set_crdt", 00:05:48.989 "nvmf_set_config", 00:05:48.989 "nvmf_set_max_subsystems", 00:05:48.989 "nvmf_stop_mdns_prr", 00:05:48.989 "nvmf_publish_mdns_prr", 00:05:48.989 "nvmf_subsystem_get_listeners", 00:05:48.989 "nvmf_subsystem_get_qpairs", 00:05:48.989 "nvmf_subsystem_get_controllers", 00:05:48.989 
"nvmf_get_stats", 00:05:48.989 "nvmf_get_transports", 00:05:48.989 "nvmf_create_transport", 00:05:48.989 "nvmf_get_targets", 00:05:48.989 "nvmf_delete_target", 00:05:48.989 "nvmf_create_target", 00:05:48.989 "nvmf_subsystem_allow_any_host", 00:05:48.989 "nvmf_subsystem_remove_host", 00:05:48.989 "nvmf_subsystem_add_host", 00:05:48.989 "nvmf_ns_remove_host", 00:05:48.989 "nvmf_ns_add_host", 00:05:48.989 "nvmf_subsystem_remove_ns", 00:05:48.989 "nvmf_subsystem_add_ns", 00:05:48.989 "nvmf_subsystem_listener_set_ana_state", 00:05:48.989 "nvmf_discovery_get_referrals", 00:05:48.989 "nvmf_discovery_remove_referral", 00:05:48.989 "nvmf_discovery_add_referral", 00:05:48.989 "nvmf_subsystem_remove_listener", 00:05:48.989 "nvmf_subsystem_add_listener", 00:05:48.989 "nvmf_delete_subsystem", 00:05:48.989 "nvmf_create_subsystem", 00:05:48.989 "nvmf_get_subsystems", 00:05:48.989 "env_dpdk_get_mem_stats", 00:05:48.989 "nbd_get_disks", 00:05:48.989 "nbd_stop_disk", 00:05:48.989 "nbd_start_disk", 00:05:48.989 "ublk_recover_disk", 00:05:48.989 "ublk_get_disks", 00:05:48.989 "ublk_stop_disk", 00:05:48.989 "ublk_start_disk", 00:05:48.989 "ublk_destroy_target", 00:05:48.989 "ublk_create_target", 00:05:48.989 "virtio_blk_create_transport", 00:05:48.989 "virtio_blk_get_transports", 00:05:48.989 "vhost_controller_set_coalescing", 00:05:48.989 "vhost_get_controllers", 00:05:48.989 "vhost_delete_controller", 00:05:48.989 "vhost_create_blk_controller", 00:05:48.989 "vhost_scsi_controller_remove_target", 00:05:48.989 "vhost_scsi_controller_add_target", 00:05:48.989 "vhost_start_scsi_controller", 00:05:48.989 "vhost_create_scsi_controller", 00:05:48.989 "thread_set_cpumask", 00:05:48.989 "framework_get_governor", 00:05:48.989 "framework_get_scheduler", 00:05:48.989 "framework_set_scheduler", 00:05:48.989 "framework_get_reactors", 00:05:48.989 "thread_get_io_channels", 00:05:48.989 "thread_get_pollers", 00:05:48.989 "thread_get_stats", 00:05:48.989 "framework_monitor_context_switch", 00:05:48.989 "spdk_kill_instance", 00:05:48.989 "log_enable_timestamps", 00:05:48.989 "log_get_flags", 00:05:48.989 "log_clear_flag", 00:05:48.989 "log_set_flag", 00:05:48.989 "log_get_level", 00:05:48.989 "log_set_level", 00:05:48.989 "log_get_print_level", 00:05:48.989 "log_set_print_level", 00:05:48.989 "framework_enable_cpumask_locks", 00:05:48.989 "framework_disable_cpumask_locks", 00:05:48.989 "framework_wait_init", 00:05:48.989 "framework_start_init", 00:05:48.989 "scsi_get_devices", 00:05:48.989 "bdev_get_histogram", 00:05:48.989 "bdev_enable_histogram", 00:05:48.989 "bdev_set_qos_limit", 00:05:48.989 "bdev_set_qd_sampling_period", 00:05:48.989 "bdev_get_bdevs", 00:05:48.989 "bdev_reset_iostat", 00:05:48.989 "bdev_get_iostat", 00:05:48.989 "bdev_examine", 00:05:48.989 "bdev_wait_for_examine", 00:05:48.989 "bdev_set_options", 00:05:48.989 "notify_get_notifications", 00:05:48.989 "notify_get_types", 00:05:48.990 "accel_get_stats", 00:05:48.990 "accel_set_options", 00:05:48.990 "accel_set_driver", 00:05:48.990 "accel_crypto_key_destroy", 00:05:48.990 "accel_crypto_keys_get", 00:05:48.990 "accel_crypto_key_create", 00:05:48.990 "accel_assign_opc", 00:05:48.990 "accel_get_module_info", 00:05:48.990 "accel_get_opc_assignments", 00:05:48.990 "vmd_rescan", 00:05:48.990 "vmd_remove_device", 00:05:48.990 "vmd_enable", 00:05:48.990 "sock_get_default_impl", 00:05:48.990 "sock_set_default_impl", 00:05:48.990 "sock_impl_set_options", 00:05:48.990 "sock_impl_get_options", 00:05:48.990 "iobuf_get_stats", 00:05:48.990 "iobuf_set_options", 
00:05:48.990 "keyring_get_keys", 00:05:48.990 "framework_get_pci_devices", 00:05:48.990 "framework_get_config", 00:05:48.990 "framework_get_subsystems", 00:05:48.990 "vfu_tgt_set_base_path", 00:05:48.990 "trace_get_info", 00:05:48.990 "trace_get_tpoint_group_mask", 00:05:48.990 "trace_disable_tpoint_group", 00:05:48.990 "trace_enable_tpoint_group", 00:05:48.990 "trace_clear_tpoint_mask", 00:05:48.990 "trace_set_tpoint_mask", 00:05:48.990 "spdk_get_version", 00:05:48.990 "rpc_get_methods" 00:05:48.990 ] 00:05:49.248 18:00:42 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:49.248 18:00:42 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:49.248 18:00:42 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:49.248 18:00:42 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:49.248 18:00:42 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3234335 00:05:49.248 18:00:42 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 3234335 ']' 00:05:49.248 18:00:42 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 3234335 00:05:49.248 18:00:42 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:05:49.248 18:00:42 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:49.248 18:00:42 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3234335 00:05:49.248 18:00:42 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:49.248 18:00:42 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:49.248 18:00:42 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3234335' 00:05:49.248 killing process with pid 3234335 00:05:49.248 18:00:42 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 3234335 00:05:49.248 18:00:42 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 3234335 00:05:49.506 00:05:49.506 real 0m1.488s 00:05:49.506 user 0m2.754s 00:05:49.506 sys 0m0.420s 00:05:49.506 18:00:42 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:49.506 18:00:42 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:49.506 ************************************ 00:05:49.506 END TEST spdkcli_tcp 00:05:49.506 ************************************ 00:05:49.506 18:00:42 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:49.506 18:00:42 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:49.506 18:00:42 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:49.506 18:00:42 -- common/autotest_common.sh@10 -- # set +x 00:05:49.506 ************************************ 00:05:49.506 START TEST dpdk_mem_utility 00:05:49.506 ************************************ 00:05:49.506 18:00:42 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:49.762 * Looking for test storage... 
00:05:49.762 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:49.762 18:00:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:49.762 18:00:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3234702 00:05:49.762 18:00:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3234702 00:05:49.762 18:00:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:49.762 18:00:42 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 3234702 ']' 00:05:49.762 18:00:42 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.762 18:00:42 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:49.762 18:00:42 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:49.762 18:00:42 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:49.762 18:00:42 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:49.762 [2024-07-24 18:00:42.677064] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 00:05:49.762 [2024-07-24 18:00:42.677116] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3234702 ] 00:05:49.762 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.762 [2024-07-24 18:00:42.732989] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.762 [2024-07-24 18:00:42.805749] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.693 18:00:43 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:50.693 18:00:43 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:05:50.693 18:00:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:50.693 18:00:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:50.693 18:00:43 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:50.693 18:00:43 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:50.693 { 00:05:50.693 "filename": "/tmp/spdk_mem_dump.txt" 00:05:50.693 } 00:05:50.693 18:00:43 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:50.693 18:00:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:50.693 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:50.693 1 heaps totaling size 814.000000 MiB 00:05:50.693 size: 814.000000 MiB heap id: 0 00:05:50.693 end heaps---------- 00:05:50.693 8 mempools totaling size 598.116089 MiB 00:05:50.693 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:50.693 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:50.693 size: 84.521057 MiB name: bdev_io_3234702 00:05:50.693 size: 51.011292 MiB name: evtpool_3234702 00:05:50.693 
size: 50.003479 MiB name: msgpool_3234702 00:05:50.693 size: 21.763794 MiB name: PDU_Pool 00:05:50.693 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:50.693 size: 0.026123 MiB name: Session_Pool 00:05:50.693 end mempools------- 00:05:50.693 6 memzones totaling size 4.142822 MiB 00:05:50.693 size: 1.000366 MiB name: RG_ring_0_3234702 00:05:50.693 size: 1.000366 MiB name: RG_ring_1_3234702 00:05:50.693 size: 1.000366 MiB name: RG_ring_4_3234702 00:05:50.693 size: 1.000366 MiB name: RG_ring_5_3234702 00:05:50.693 size: 0.125366 MiB name: RG_ring_2_3234702 00:05:50.693 size: 0.015991 MiB name: RG_ring_3_3234702 00:05:50.693 end memzones------- 00:05:50.693 18:00:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:50.693 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:50.693 list of free elements. size: 12.519348 MiB 00:05:50.693 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:50.693 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:50.693 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:50.693 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:50.693 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:50.693 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:50.693 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:50.693 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:50.693 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:50.693 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:50.693 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:50.693 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:50.693 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:50.693 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:50.693 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:50.693 list of standard malloc elements. 
size: 199.218079 MiB 00:05:50.693 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:50.693 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:50.693 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:50.693 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:50.693 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:50.693 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:50.693 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:50.693 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:50.693 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:50.693 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:50.693 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:50.693 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:50.693 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:50.693 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:50.693 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:50.693 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:50.693 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:50.693 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:50.693 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:50.693 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:50.693 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:50.693 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:50.693 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:50.693 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:50.693 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:50.693 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:50.693 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:50.693 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:50.693 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:50.693 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:50.693 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:50.693 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:50.693 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:50.693 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:50.693 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:50.693 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:50.693 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:50.693 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:50.693 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:50.693 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:50.693 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:50.693 list of memzone associated elements. 
size: 602.262573 MiB 00:05:50.693 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:50.693 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:50.693 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:50.693 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:50.693 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:50.693 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_3234702_0 00:05:50.693 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:50.693 associated memzone info: size: 48.002930 MiB name: MP_evtpool_3234702_0 00:05:50.693 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:50.693 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3234702_0 00:05:50.693 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:50.693 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:50.693 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:50.693 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:50.693 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:50.693 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_3234702 00:05:50.693 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:50.693 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3234702 00:05:50.693 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:50.693 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3234702 00:05:50.693 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:50.693 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:50.693 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:50.693 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:50.693 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:50.693 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:50.693 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:50.693 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:50.693 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:50.693 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3234702 00:05:50.693 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:50.693 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3234702 00:05:50.693 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:50.693 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3234702 00:05:50.693 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:50.693 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3234702 00:05:50.693 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:50.693 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3234702 00:05:50.693 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:50.693 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:50.693 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:50.693 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:50.693 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:50.693 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:50.693 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:50.693 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_3234702 00:05:50.693 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:50.693 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:50.693 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:50.693 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:50.694 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:50.694 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3234702 00:05:50.694 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:50.694 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:50.694 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:50.694 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3234702 00:05:50.694 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:50.694 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3234702 00:05:50.694 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:50.694 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:50.694 18:00:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:50.694 18:00:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3234702 00:05:50.694 18:00:43 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 3234702 ']' 00:05:50.694 18:00:43 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 3234702 00:05:50.694 18:00:43 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:05:50.694 18:00:43 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:50.694 18:00:43 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3234702 00:05:50.694 18:00:43 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:50.694 18:00:43 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:50.694 18:00:43 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3234702' 00:05:50.694 killing process with pid 3234702 00:05:50.694 18:00:43 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 3234702 00:05:50.694 18:00:43 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 3234702 00:05:50.951 00:05:50.951 real 0m1.390s 00:05:50.951 user 0m1.458s 00:05:50.951 sys 0m0.398s 00:05:50.951 18:00:43 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:50.951 18:00:43 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:50.951 ************************************ 00:05:50.951 END TEST dpdk_mem_utility 00:05:50.951 ************************************ 00:05:50.951 18:00:43 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:50.951 18:00:43 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:50.951 18:00:43 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:50.951 18:00:43 -- common/autotest_common.sh@10 -- # set +x 00:05:50.951 ************************************ 00:05:50.951 START TEST event 00:05:50.951 ************************************ 00:05:50.951 18:00:43 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:51.209 * Looking for test storage... 
00:05:51.209 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:51.209 18:00:44 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:51.209 18:00:44 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:51.209 18:00:44 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:51.209 18:00:44 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:05:51.209 18:00:44 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:51.209 18:00:44 event -- common/autotest_common.sh@10 -- # set +x 00:05:51.209 ************************************ 00:05:51.209 START TEST event_perf 00:05:51.209 ************************************ 00:05:51.209 18:00:44 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:51.209 Running I/O for 1 seconds...[2024-07-24 18:00:44.122341] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 00:05:51.209 [2024-07-24 18:00:44.122394] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3234993 ] 00:05:51.209 EAL: No free 2048 kB hugepages reported on node 1 00:05:51.209 [2024-07-24 18:00:44.171103] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:51.209 [2024-07-24 18:00:44.248378] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:51.209 [2024-07-24 18:00:44.248475] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:51.209 [2024-07-24 18:00:44.248565] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:51.209 [2024-07-24 18:00:44.248567] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.584 Running I/O for 1 seconds... 00:05:52.584 lcore 0: 215980 00:05:52.584 lcore 1: 215980 00:05:52.584 lcore 2: 215979 00:05:52.584 lcore 3: 215979 00:05:52.584 done. 00:05:52.584 00:05:52.584 real 0m1.208s 00:05:52.584 user 0m4.143s 00:05:52.584 sys 0m0.062s 00:05:52.584 18:00:45 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:52.584 18:00:45 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:52.584 ************************************ 00:05:52.584 END TEST event_perf 00:05:52.584 ************************************ 00:05:52.584 18:00:45 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:52.584 18:00:45 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:52.584 18:00:45 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:52.584 18:00:45 event -- common/autotest_common.sh@10 -- # set +x 00:05:52.584 ************************************ 00:05:52.584 START TEST event_reactor 00:05:52.584 ************************************ 00:05:52.584 18:00:45 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:52.584 [2024-07-24 18:00:45.403947] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 
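The per-lcore counters above come from the standalone harness in test/event/event_perf. A sketch of an equivalent manual run on the same 4-core mask used here:

    # Pin the reactor framework to cores 0-3 and generate events for 1 second;
    # output is one "lcore N: <count>" line per reactor, then "done."
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1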
00:05:52.584 [2024-07-24 18:00:45.404013] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3235249 ] 00:05:52.584 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.584 [2024-07-24 18:00:45.461000] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.584 [2024-07-24 18:00:45.532205] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.518 test_start 00:05:53.518 oneshot 00:05:53.518 tick 100 00:05:53.518 tick 100 00:05:53.518 tick 250 00:05:53.518 tick 100 00:05:53.518 tick 100 00:05:53.518 tick 100 00:05:53.518 tick 250 00:05:53.518 tick 500 00:05:53.518 tick 100 00:05:53.518 tick 100 00:05:53.518 tick 250 00:05:53.518 tick 100 00:05:53.518 tick 100 00:05:53.518 test_end 00:05:53.518 00:05:53.518 real 0m1.214s 00:05:53.518 user 0m1.139s 00:05:53.518 sys 0m0.072s 00:05:53.518 18:00:46 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:53.518 18:00:46 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:53.518 ************************************ 00:05:53.518 END TEST event_reactor 00:05:53.518 ************************************ 00:05:53.775 18:00:46 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:53.775 18:00:46 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:53.775 18:00:46 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:53.775 18:00:46 event -- common/autotest_common.sh@10 -- # set +x 00:05:53.775 ************************************ 00:05:53.775 START TEST event_reactor_perf 00:05:53.775 ************************************ 00:05:53.775 18:00:46 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:53.775 [2024-07-24 18:00:46.677113] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 
00:05:53.775 [2024-07-24 18:00:46.677170] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3235498 ] 00:05:53.775 EAL: No free 2048 kB hugepages reported on node 1 00:05:53.775 [2024-07-24 18:00:46.733196] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.775 [2024-07-24 18:00:46.805385] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.146 test_start 00:05:55.146 test_end 00:05:55.146 Performance: 521145 events per second 00:05:55.146 00:05:55.146 real 0m1.208s 00:05:55.146 user 0m1.134s 00:05:55.146 sys 0m0.071s 00:05:55.146 18:00:47 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:55.146 18:00:47 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:55.146 ************************************ 00:05:55.146 END TEST event_reactor_perf 00:05:55.146 ************************************ 00:05:55.146 18:00:47 event -- event/event.sh@49 -- # uname -s 00:05:55.146 18:00:47 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:55.146 18:00:47 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:55.146 18:00:47 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:55.146 18:00:47 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:55.146 18:00:47 event -- common/autotest_common.sh@10 -- # set +x 00:05:55.146 ************************************ 00:05:55.146 START TEST event_scheduler 00:05:55.146 ************************************ 00:05:55.146 18:00:47 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:55.146 * Looking for test storage... 00:05:55.146 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:55.146 18:00:48 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:55.146 18:00:48 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:55.146 18:00:48 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3235765 00:05:55.146 18:00:48 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:55.146 18:00:48 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3235765 00:05:55.146 18:00:48 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 3235765 ']' 00:05:55.146 18:00:48 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.146 18:00:48 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:55.146 18:00:48 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:55.146 18:00:48 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:55.146 18:00:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:55.146 [2024-07-24 18:00:48.056956] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 00:05:55.146 [2024-07-24 18:00:48.057005] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3235765 ] 00:05:55.146 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.146 [2024-07-24 18:00:48.108586] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:55.146 [2024-07-24 18:00:48.189916] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.146 [2024-07-24 18:00:48.190001] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:55.146 [2024-07-24 18:00:48.190089] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:55.146 [2024-07-24 18:00:48.190090] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:56.077 18:00:48 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:56.077 18:00:48 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:05:56.077 18:00:48 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:56.077 18:00:48 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:56.077 18:00:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:56.077 [2024-07-24 18:00:48.884507] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:56.077 [2024-07-24 18:00:48.884525] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:05:56.077 [2024-07-24 18:00:48.884534] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:56.077 [2024-07-24 18:00:48.884539] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:56.077 [2024-07-24 18:00:48.884544] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:56.077 18:00:48 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:56.077 18:00:48 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:56.077 18:00:48 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:56.077 18:00:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:56.077 [2024-07-24 18:00:48.954995] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
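The sequence above is the standard way to swap schedulers: start the app paused with --wait-for-rpc, select the scheduler, then finish initialization. A minimal sketch against the stock rpc.py, assuming the default /var/tmp/spdk.sock socket this run uses:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # While the app is paused by --wait-for-rpc, pick the dynamic scheduler...
    $RPC framework_set_scheduler dynamic

    # ...then let subsystem initialization proceed
    $RPC framework_start_init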
00:05:56.077 18:00:48 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:56.077 18:00:48 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:56.077 18:00:48 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:56.077 18:00:48 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:56.077 18:00:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:56.077 ************************************ 00:05:56.077 START TEST scheduler_create_thread 00:05:56.077 ************************************ 00:05:56.077 18:00:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:05:56.077 18:00:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:56.077 18:00:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:56.077 18:00:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:56.077 2 00:05:56.077 18:00:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:56.077 18:00:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:56.077 18:00:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:56.077 18:00:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:56.077 3 00:05:56.077 18:00:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:56.077 18:00:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:56.077 18:00:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:56.077 18:00:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:56.077 4 00:05:56.077 18:00:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:56.077 18:00:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:56.077 18:00:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:56.077 18:00:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:56.077 5 00:05:56.077 18:00:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:56.077 18:00:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:56.077 18:00:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:56.077 18:00:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:56.077 6 00:05:56.077 18:00:49 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:56.077 18:00:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:56.077 18:00:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:56.077 18:00:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:56.077 7 00:05:56.077 18:00:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:56.077 18:00:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:56.077 18:00:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:56.077 18:00:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:56.077 8 00:05:56.077 18:00:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:56.077 18:00:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:56.077 18:00:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:56.077 18:00:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:56.077 9 00:05:56.077 18:00:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:56.077 18:00:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:56.077 18:00:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:56.077 18:00:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:56.077 10 00:05:56.077 18:00:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:56.077 18:00:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:56.077 18:00:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:56.077 18:00:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:56.077 18:00:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:56.077 18:00:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:56.077 18:00:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:56.077 18:00:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:56.077 18:00:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:56.077 18:00:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:56.077 18:00:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:56.077 18:00:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:56.077 18:00:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.974 18:00:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:57.974 18:00:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:57.974 18:00:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:57.974 18:00:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:57.974 18:00:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.540 18:00:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.540 00:05:58.540 real 0m2.621s 00:05:58.540 user 0m0.023s 00:05:58.540 sys 0m0.006s 00:05:58.540 18:00:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:58.540 18:00:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.540 ************************************ 00:05:58.540 END TEST scheduler_create_thread 00:05:58.540 ************************************ 00:05:58.798 18:00:51 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:58.798 18:00:51 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3235765 00:05:58.798 18:00:51 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 3235765 ']' 00:05:58.798 18:00:51 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 3235765 00:05:58.798 18:00:51 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:05:58.798 18:00:51 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:58.798 18:00:51 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3235765 00:05:58.798 18:00:51 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:58.798 18:00:51 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:58.798 18:00:51 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3235765' 00:05:58.798 killing process with pid 3235765 00:05:58.798 18:00:51 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 3235765 00:05:58.798 18:00:51 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 3235765 00:05:59.056 [2024-07-24 18:00:52.089291] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
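scheduler_create_thread above drives the scheduler test app through its plugin RPCs. A sketch of the same calls issued directly, assuming scheduler_plugin is importable by rpc.py (the harness arranges that); thread ids 11 and 12 are the ones this run happened to get back, so they are illustrative:

    RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py --plugin scheduler_plugin"

    # Spawn a pinned thread on core 0 requesting 100% active time
    $RPC scheduler_thread_create -n active_pinned -m 0x1 -a 100

    # Drop a thread's requested active load to 50%
    $RPC scheduler_thread_set_active 11 50

    # Delete a thread by id
    $RPC scheduler_thread_delete 12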
00:05:59.321 00:05:59.321 real 0m4.336s 00:05:59.321 user 0m8.259s 00:05:59.321 sys 0m0.363s 00:05:59.321 18:00:52 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:59.321 18:00:52 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:59.321 ************************************ 00:05:59.321 END TEST event_scheduler 00:05:59.321 ************************************ 00:05:59.321 18:00:52 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:59.321 18:00:52 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:59.321 18:00:52 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:59.321 18:00:52 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:59.321 18:00:52 event -- common/autotest_common.sh@10 -- # set +x 00:05:59.321 ************************************ 00:05:59.321 START TEST app_repeat 00:05:59.321 ************************************ 00:05:59.321 18:00:52 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:05:59.321 18:00:52 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.321 18:00:52 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.321 18:00:52 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:59.321 18:00:52 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:59.321 18:00:52 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:59.321 18:00:52 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:59.321 18:00:52 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:59.321 18:00:52 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3236512 00:05:59.321 18:00:52 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:59.321 18:00:52 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:59.321 18:00:52 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3236512' 00:05:59.321 Process app_repeat pid: 3236512 00:05:59.321 18:00:52 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:59.321 18:00:52 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:59.321 spdk_app_start Round 0 00:05:59.321 18:00:52 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3236512 /var/tmp/spdk-nbd.sock 00:05:59.321 18:00:52 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 3236512 ']' 00:05:59.321 18:00:52 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:59.321 18:00:52 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:59.321 18:00:52 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:59.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:59.321 18:00:52 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:59.321 18:00:52 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:59.321 [2024-07-24 18:00:52.390154] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 
00:05:59.321 [2024-07-24 18:00:52.390209] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3236512 ] 00:05:59.582 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.582 [2024-07-24 18:00:52.441365] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:59.582 [2024-07-24 18:00:52.523515] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:59.582 [2024-07-24 18:00:52.523519] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.148 18:00:53 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:00.148 18:00:53 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:00.148 18:00:53 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:00.406 Malloc0 00:06:00.406 18:00:53 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:00.663 Malloc1 00:06:00.663 18:00:53 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:00.663 18:00:53 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.663 18:00:53 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:00.663 18:00:53 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:00.663 18:00:53 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.663 18:00:53 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:00.663 18:00:53 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:00.663 18:00:53 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.663 18:00:53 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:00.663 18:00:53 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:00.663 18:00:53 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.663 18:00:53 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:00.663 18:00:53 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:00.663 18:00:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:00.663 18:00:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:00.663 18:00:53 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:00.921 /dev/nbd0 00:06:00.921 18:00:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:00.921 18:00:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:00.921 18:00:53 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:00.921 18:00:53 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:00.921 18:00:53 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:00.921 18:00:53 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:00.921 18:00:53 event.app_repeat 
-- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:00.921 18:00:53 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:00.921 18:00:53 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:00.921 18:00:53 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:00.921 18:00:53 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:00.921 1+0 records in 00:06:00.921 1+0 records out 00:06:00.921 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000184608 s, 22.2 MB/s 00:06:00.921 18:00:53 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:00.921 18:00:53 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:00.921 18:00:53 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:00.921 18:00:53 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:00.921 18:00:53 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:00.921 18:00:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:00.921 18:00:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:00.921 18:00:53 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:00.921 /dev/nbd1 00:06:00.921 18:00:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:00.921 18:00:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:00.921 18:00:53 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:00.921 18:00:53 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:00.921 18:00:53 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:00.921 18:00:53 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:00.921 18:00:53 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:00.921 18:00:53 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:00.921 18:00:53 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:00.921 18:00:53 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:00.921 18:00:53 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:00.921 1+0 records in 00:06:00.921 1+0 records out 00:06:00.921 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000194728 s, 21.0 MB/s 00:06:00.921 18:00:53 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:00.921 18:00:53 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:00.921 18:00:53 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:00.921 18:00:53 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:00.921 18:00:53 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:00.921 18:00:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:00.921 18:00:53 event.app_repeat -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:00.921 18:00:53 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:00.921 18:00:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.921 18:00:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:01.179 18:00:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:01.179 { 00:06:01.179 "nbd_device": "/dev/nbd0", 00:06:01.179 "bdev_name": "Malloc0" 00:06:01.179 }, 00:06:01.179 { 00:06:01.179 "nbd_device": "/dev/nbd1", 00:06:01.179 "bdev_name": "Malloc1" 00:06:01.179 } 00:06:01.179 ]' 00:06:01.179 18:00:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:01.179 { 00:06:01.179 "nbd_device": "/dev/nbd0", 00:06:01.179 "bdev_name": "Malloc0" 00:06:01.179 }, 00:06:01.179 { 00:06:01.179 "nbd_device": "/dev/nbd1", 00:06:01.179 "bdev_name": "Malloc1" 00:06:01.179 } 00:06:01.179 ]' 00:06:01.179 18:00:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:01.179 18:00:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:01.179 /dev/nbd1' 00:06:01.179 18:00:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:01.179 /dev/nbd1' 00:06:01.179 18:00:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:01.179 18:00:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:01.179 18:00:54 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:01.180 18:00:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:01.180 18:00:54 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:01.180 18:00:54 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:01.180 18:00:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.180 18:00:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:01.180 18:00:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:01.180 18:00:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:01.180 18:00:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:01.180 18:00:54 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:01.180 256+0 records in 00:06:01.180 256+0 records out 00:06:01.180 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00998863 s, 105 MB/s 00:06:01.180 18:00:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:01.180 18:00:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:01.180 256+0 records in 00:06:01.180 256+0 records out 00:06:01.180 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0138099 s, 75.9 MB/s 00:06:01.180 18:00:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:01.180 18:00:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:01.438 256+0 records in 00:06:01.438 256+0 records out 00:06:01.438 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0149061 s, 70.3 MB/s 00:06:01.438 18:00:54 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:01.438 18:00:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.438 18:00:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:01.438 18:00:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:01.438 18:00:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:01.438 18:00:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:01.438 18:00:54 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:01.438 18:00:54 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:01.438 18:00:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:01.438 18:00:54 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:01.438 18:00:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:01.438 18:00:54 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:01.438 18:00:54 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:01.438 18:00:54 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.438 18:00:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.438 18:00:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:01.438 18:00:54 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:01.438 18:00:54 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:01.438 18:00:54 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:01.438 18:00:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:01.438 18:00:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:01.438 18:00:54 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:01.438 18:00:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:01.438 18:00:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:01.438 18:00:54 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:01.438 18:00:54 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:01.438 18:00:54 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:01.438 18:00:54 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:01.438 18:00:54 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:01.696 18:00:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:01.696 18:00:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:01.696 18:00:54 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:01.696 18:00:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:01.696 18:00:54 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:01.697 18:00:54 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:01.697 18:00:54 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:01.697 18:00:54 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:01.697 18:00:54 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:01.697 18:00:54 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.697 18:00:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:01.955 18:00:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:01.955 18:00:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:01.955 18:00:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:01.955 18:00:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:01.955 18:00:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:01.955 18:00:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:01.955 18:00:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:01.955 18:00:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:01.955 18:00:54 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:01.955 18:00:54 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:01.955 18:00:54 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:01.955 18:00:54 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:01.955 18:00:54 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:02.277 18:00:55 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:02.277 [2024-07-24 18:00:55.271769] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:02.549 [2024-07-24 18:00:55.343799] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:02.549 [2024-07-24 18:00:55.343802] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.549 [2024-07-24 18:00:55.384788] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:02.549 [2024-07-24 18:00:55.384833] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:05.077 18:00:58 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:05.077 18:00:58 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:05.077 spdk_app_start Round 1 00:06:05.077 18:00:58 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3236512 /var/tmp/spdk-nbd.sock 00:06:05.077 18:00:58 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 3236512 ']' 00:06:05.077 18:00:58 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:05.077 18:00:58 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:05.077 18:00:58 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:05.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
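Each app_repeat round traced above is the same malloc-over-NBD round trip: create two 64 MB malloc bdevs with 4 KiB blocks over the app's RPC socket, export them as /dev/nbd0 and /dev/nbd1, write 1 MiB of random data through each device, compare it back, then tear down and SIGTERM the app so the next round can begin. Stripped of the xtrace noise, one round looks roughly like this (rpc.py stands for the full scripts/rpc.py path shown in the trace):

rpc="rpc.py -s /var/tmp/spdk-nbd.sock"
$rpc bdev_malloc_create 64 4096                        # -> Malloc0
$rpc bdev_malloc_create 64 4096                        # -> Malloc1
$rpc nbd_start_disk Malloc0 /dev/nbd0
$rpc nbd_start_disk Malloc1 /dev/nbd1
dd if=/dev/urandom of=nbdrandtest bs=4096 count=256    # 1 MiB of reference data
dd if=nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
dd if=nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
cmp -b -n 1M nbdrandtest /dev/nbd0                     # verify the round trip
cmp -b -n 1M nbdrandtest /dev/nbd1
rm nbdrandtest
$rpc nbd_stop_disk /dev/nbd0
$rpc nbd_stop_disk /dev/nbd1
$rpc spdk_kill_instance SIGTERM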
00:06:05.077 18:00:58 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:05.077 18:00:58 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:05.334 18:00:58 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:05.334 18:00:58 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:05.334 18:00:58 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:05.592 Malloc0 00:06:05.592 18:00:58 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:05.592 Malloc1 00:06:05.592 18:00:58 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:05.592 18:00:58 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.592 18:00:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:05.592 18:00:58 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:05.592 18:00:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.592 18:00:58 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:05.592 18:00:58 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:05.592 18:00:58 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.592 18:00:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:05.592 18:00:58 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:05.592 18:00:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.592 18:00:58 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:05.592 18:00:58 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:05.592 18:00:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:05.592 18:00:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:05.592 18:00:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:05.849 /dev/nbd0 00:06:05.849 18:00:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:05.849 18:00:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:05.849 18:00:58 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:05.849 18:00:58 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:05.849 18:00:58 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:05.849 18:00:58 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:05.849 18:00:58 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:05.850 18:00:58 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:05.850 18:00:58 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:05.850 18:00:58 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:05.850 18:00:58 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:05.850 1+0 records in 00:06:05.850 1+0 records out 00:06:05.850 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000174528 s, 23.5 MB/s 00:06:05.850 18:00:58 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:05.850 18:00:58 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:05.850 18:00:58 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:05.850 18:00:58 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:05.850 18:00:58 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:05.850 18:00:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:05.850 18:00:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:05.850 18:00:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:06.107 /dev/nbd1 00:06:06.107 18:00:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:06.107 18:00:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:06.107 18:00:59 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:06.107 18:00:59 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:06.107 18:00:59 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:06.107 18:00:59 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:06.107 18:00:59 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:06.107 18:00:59 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:06.107 18:00:59 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:06.107 18:00:59 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:06.107 18:00:59 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:06.107 1+0 records in 00:06:06.107 1+0 records out 00:06:06.107 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000206006 s, 19.9 MB/s 00:06:06.107 18:00:59 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:06.107 18:00:59 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:06.107 18:00:59 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:06.107 18:00:59 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:06.107 18:00:59 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:06.107 18:00:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:06.107 18:00:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:06.107 18:00:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:06.107 18:00:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.107 18:00:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:06.365 18:00:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:06.365 { 00:06:06.365 "nbd_device": "/dev/nbd0", 00:06:06.365 "bdev_name": "Malloc0" 00:06:06.365 }, 00:06:06.365 { 00:06:06.365 "nbd_device": "/dev/nbd1", 00:06:06.365 "bdev_name": "Malloc1" 00:06:06.365 } 00:06:06.365 ]' 00:06:06.365 18:00:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:06.365 { 00:06:06.365 "nbd_device": "/dev/nbd0", 00:06:06.365 "bdev_name": "Malloc0" 00:06:06.365 }, 00:06:06.365 { 00:06:06.365 "nbd_device": "/dev/nbd1", 00:06:06.365 "bdev_name": "Malloc1" 00:06:06.365 } 00:06:06.365 ]' 00:06:06.365 18:00:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:06.365 18:00:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:06.365 /dev/nbd1' 00:06:06.365 18:00:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:06.365 18:00:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:06.365 /dev/nbd1' 00:06:06.365 18:00:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:06.365 18:00:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:06.365 18:00:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:06.365 18:00:59 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:06.365 18:00:59 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:06.365 18:00:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.365 18:00:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:06.365 18:00:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:06.365 18:00:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:06.365 18:00:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:06.365 18:00:59 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:06.365 256+0 records in 00:06:06.365 256+0 records out 00:06:06.365 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0104001 s, 101 MB/s 00:06:06.365 18:00:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:06.365 18:00:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:06.365 256+0 records in 00:06:06.365 256+0 records out 00:06:06.365 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0137015 s, 76.5 MB/s 00:06:06.365 18:00:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:06.365 18:00:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:06.365 256+0 records in 00:06:06.365 256+0 records out 00:06:06.365 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0143974 s, 72.8 MB/s 00:06:06.365 18:00:59 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:06.365 18:00:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.365 18:00:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:06.365 18:00:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:06.365 18:00:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:06.365 18:00:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:06.365 18:00:59 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:06.365 18:00:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:06.365 18:00:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:06.365 18:00:59 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:06.365 18:00:59 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:06.365 18:00:59 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:06.365 18:00:59 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:06.365 18:00:59 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.365 18:00:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.365 18:00:59 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:06.365 18:00:59 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:06.365 18:00:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:06.365 18:00:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:06.623 18:00:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:06.623 18:00:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:06.623 18:00:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:06.623 18:00:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:06.623 18:00:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:06.623 18:00:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:06.623 18:00:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:06.623 18:00:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:06.623 18:00:59 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:06.623 18:00:59 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:06.880 18:00:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:06.880 18:00:59 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:06.880 18:00:59 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:06.880 18:00:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:06.880 18:00:59 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:06.880 18:00:59 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:06.880 18:00:59 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:06.880 18:00:59 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:06.880 18:00:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:06.880 18:00:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.880 18:00:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:06.880 18:00:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:06.880 18:00:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:06.880 18:00:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:06.880 18:00:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:06.880 18:00:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:06.880 18:00:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:06.880 18:00:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:06.880 18:00:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:06.880 18:00:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:06.880 18:00:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:06.880 18:00:59 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:06.880 18:00:59 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:06.880 18:00:59 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:07.138 18:01:00 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:07.396 [2024-07-24 18:01:00.325043] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:07.396 [2024-07-24 18:01:00.393501] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.396 [2024-07-24 18:01:00.393502] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:07.396 [2024-07-24 18:01:00.435009] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:07.396 [2024-07-24 18:01:00.435051] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:10.677 18:01:03 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:10.677 18:01:03 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:10.677 spdk_app_start Round 2 00:06:10.677 18:01:03 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3236512 /var/tmp/spdk-nbd.sock 00:06:10.677 18:01:03 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 3236512 ']' 00:06:10.677 18:01:03 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:10.677 18:01:03 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:10.677 18:01:03 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:10.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
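The '[' 0 -ne 0 ']' check in the teardown above comes from nbd_get_count: the harness asks the target for its current NBD mappings, extracts the device nodes with jq, and counts them, expecting 2 while both disks are attached and 0 after the nbd_stop_disk calls. A sketch of that helper as the trace walks through it (the || true matches the bare 'true' in the trace and keeps grep's non-zero exit on an empty list from tripping set -e):

nbd_get_count() {
    local rpc_server=$1 json names count
    json=$(rpc.py -s "$rpc_server" nbd_get_disks)
    names=$(echo "$json" | jq -r '.[] | .nbd_device')
    count=$(echo "$names" | grep -c /dev/nbd || true)
    echo "$count"    # 2 with both disks attached, 0 after teardown
}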
00:06:10.677 18:01:03 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:10.677 18:01:03 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:10.677 18:01:03 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:10.677 18:01:03 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:10.677 18:01:03 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:10.677 Malloc0 00:06:10.677 18:01:03 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:10.677 Malloc1 00:06:10.677 18:01:03 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:10.677 18:01:03 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.677 18:01:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:10.677 18:01:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:10.677 18:01:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.677 18:01:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:10.677 18:01:03 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:10.677 18:01:03 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.677 18:01:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:10.677 18:01:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:10.677 18:01:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.677 18:01:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:10.677 18:01:03 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:10.677 18:01:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:10.677 18:01:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:10.677 18:01:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:10.936 /dev/nbd0 00:06:10.936 18:01:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:10.936 18:01:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:10.936 18:01:03 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:10.936 18:01:03 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:10.936 18:01:03 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:10.936 18:01:03 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:10.936 18:01:03 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:10.936 18:01:03 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:10.936 18:01:03 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:10.936 18:01:03 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:10.936 18:01:03 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:10.936 1+0 records in 00:06:10.936 1+0 records out 00:06:10.936 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000215678 s, 19.0 MB/s 00:06:10.936 18:01:03 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:10.936 18:01:03 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:10.936 18:01:03 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:10.936 18:01:03 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:10.936 18:01:03 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:10.936 18:01:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:10.936 18:01:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:10.936 18:01:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:11.194 /dev/nbd1 00:06:11.194 18:01:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:11.194 18:01:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:11.194 18:01:04 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:11.194 18:01:04 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:11.194 18:01:04 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:11.194 18:01:04 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:11.194 18:01:04 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:11.194 18:01:04 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:11.194 18:01:04 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:11.194 18:01:04 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:11.194 18:01:04 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:11.194 1+0 records in 00:06:11.194 1+0 records out 00:06:11.194 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000184764 s, 22.2 MB/s 00:06:11.194 18:01:04 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:11.194 18:01:04 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:11.194 18:01:04 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:11.194 18:01:04 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:11.194 18:01:04 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:11.194 18:01:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:11.194 18:01:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:11.194 18:01:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:11.194 18:01:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.194 18:01:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:11.194 18:01:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:11.194 { 00:06:11.194 "nbd_device": "/dev/nbd0", 00:06:11.194 "bdev_name": "Malloc0" 00:06:11.194 }, 00:06:11.194 { 00:06:11.194 "nbd_device": "/dev/nbd1", 00:06:11.194 "bdev_name": "Malloc1" 00:06:11.194 } 00:06:11.194 ]' 00:06:11.194 18:01:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:11.194 { 00:06:11.194 "nbd_device": "/dev/nbd0", 00:06:11.194 "bdev_name": "Malloc0" 00:06:11.194 }, 00:06:11.194 { 00:06:11.194 "nbd_device": "/dev/nbd1", 00:06:11.194 "bdev_name": "Malloc1" 00:06:11.194 } 00:06:11.194 ]' 00:06:11.194 18:01:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:11.454 18:01:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:11.454 /dev/nbd1' 00:06:11.454 18:01:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:11.454 /dev/nbd1' 00:06:11.454 18:01:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:11.454 18:01:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:11.454 18:01:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:11.454 18:01:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:11.454 18:01:04 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:11.454 18:01:04 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:11.454 18:01:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.454 18:01:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:11.454 18:01:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:11.454 18:01:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:11.454 18:01:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:11.455 18:01:04 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:11.455 256+0 records in 00:06:11.455 256+0 records out 00:06:11.455 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103356 s, 101 MB/s 00:06:11.455 18:01:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:11.455 18:01:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:11.455 256+0 records in 00:06:11.455 256+0 records out 00:06:11.455 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0138035 s, 76.0 MB/s 00:06:11.455 18:01:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:11.455 18:01:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:11.455 256+0 records in 00:06:11.455 256+0 records out 00:06:11.455 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0142714 s, 73.5 MB/s 00:06:11.455 18:01:04 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:11.455 18:01:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.455 18:01:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:11.455 18:01:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:11.455 18:01:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:11.455 18:01:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:11.455 18:01:04 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:11.455 18:01:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:11.455 18:01:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:11.455 18:01:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:11.455 18:01:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:11.455 18:01:04 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:11.455 18:01:04 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:11.455 18:01:04 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.455 18:01:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.455 18:01:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:11.455 18:01:04 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:11.455 18:01:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:11.455 18:01:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:11.713 18:01:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:11.713 18:01:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:11.713 18:01:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:11.713 18:01:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:11.713 18:01:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:11.713 18:01:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:11.713 18:01:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:11.713 18:01:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:11.713 18:01:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:11.713 18:01:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:11.713 18:01:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:11.713 18:01:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:11.713 18:01:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:11.713 18:01:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:11.713 18:01:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:11.713 18:01:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:11.713 18:01:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:11.713 18:01:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:11.713 18:01:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:11.713 18:01:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.713 18:01:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:11.971 18:01:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:11.971 18:01:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:11.971 18:01:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:11.971 18:01:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:11.971 18:01:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:11.971 18:01:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:11.971 18:01:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:11.971 18:01:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:11.971 18:01:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:11.971 18:01:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:11.971 18:01:04 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:11.971 18:01:04 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:11.971 18:01:04 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:12.229 18:01:05 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:12.488 [2024-07-24 18:01:05.354300] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:12.488 [2024-07-24 18:01:05.421634] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:12.488 [2024-07-24 18:01:05.421636] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.488 [2024-07-24 18:01:05.462260] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:12.488 [2024-07-24 18:01:05.462299] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:15.773 18:01:08 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3236512 /var/tmp/spdk-nbd.sock 00:06:15.773 18:01:08 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 3236512 ']' 00:06:15.773 18:01:08 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:15.773 18:01:08 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:15.773 18:01:08 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:15.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
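Taken together, the rounds above follow the loop app_repeat_test runs: the app_repeat binary is started once with -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4, and for each round the harness waits on the RPC socket, performs the malloc/NBD round trip, and issues spdk_kill_instance SIGTERM, which the app acknowledges with "Shutdown signal received, stop current app iteration" before opening the next round. Reconstructed from the trace (the backgrounding and $! capture are paraphrased; the trace only shows repeat_pid being assigned):

app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 &
repeat_pid=$!
for i in {0..2}; do
    echo "spdk_app_start Round $i"
    waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock
    # ... malloc/NBD round trip as sketched earlier ...
    rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
    sleep 3    # let the app cycle into the next round
done
waitforlisten "$repeat_pid" /var/tmp/spdk-nbd.sock     # Round 3
killprocess "$repeat_pid"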
00:06:15.773 18:01:08 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:15.773 18:01:08 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:15.773 18:01:08 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:15.773 18:01:08 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:15.773 18:01:08 event.app_repeat -- event/event.sh@39 -- # killprocess 3236512 00:06:15.773 18:01:08 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 3236512 ']' 00:06:15.773 18:01:08 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 3236512 00:06:15.773 18:01:08 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:06:15.773 18:01:08 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:15.773 18:01:08 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3236512 00:06:15.773 18:01:08 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:15.773 18:01:08 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:15.773 18:01:08 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3236512' 00:06:15.773 killing process with pid 3236512 00:06:15.773 18:01:08 event.app_repeat -- common/autotest_common.sh@969 -- # kill 3236512 00:06:15.773 18:01:08 event.app_repeat -- common/autotest_common.sh@974 -- # wait 3236512 00:06:15.773 spdk_app_start is called in Round 0. 00:06:15.773 Shutdown signal received, stop current app iteration 00:06:15.773 Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 reinitialization... 00:06:15.773 spdk_app_start is called in Round 1. 00:06:15.773 Shutdown signal received, stop current app iteration 00:06:15.773 Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 reinitialization... 00:06:15.773 spdk_app_start is called in Round 2. 00:06:15.773 Shutdown signal received, stop current app iteration 00:06:15.773 Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 reinitialization... 00:06:15.773 spdk_app_start is called in Round 3. 
00:06:15.773 Shutdown signal received, stop current app iteration 00:06:15.773 18:01:08 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:15.773 18:01:08 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:15.773 00:06:15.773 real 0m16.193s 00:06:15.773 user 0m35.007s 00:06:15.773 sys 0m2.363s 00:06:15.773 18:01:08 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:15.773 18:01:08 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:15.773 ************************************ 00:06:15.773 END TEST app_repeat 00:06:15.773 ************************************ 00:06:15.773 18:01:08 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:15.773 18:01:08 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:15.773 18:01:08 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:15.773 18:01:08 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:15.773 18:01:08 event -- common/autotest_common.sh@10 -- # set +x 00:06:15.773 ************************************ 00:06:15.773 START TEST cpu_locks 00:06:15.773 ************************************ 00:06:15.773 18:01:08 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:15.773 * Looking for test storage... 00:06:15.773 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:15.773 18:01:08 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:15.773 18:01:08 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:15.773 18:01:08 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:15.773 18:01:08 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:15.773 18:01:08 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:15.773 18:01:08 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:15.773 18:01:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:15.773 ************************************ 00:06:15.773 START TEST default_locks 00:06:15.773 ************************************ 00:06:15.773 18:01:08 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:06:15.773 18:01:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3239503 00:06:15.773 18:01:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3239503 00:06:15.773 18:01:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:15.773 18:01:08 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 3239503 ']' 00:06:15.773 18:01:08 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.773 18:01:08 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:15.773 18:01:08 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
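The waitforlisten call being traced here is how every test in this run synchronizes with a freshly launched spdk_tgt: it retries a cheap RPC against the UNIX socket (max_retries=100 above) until the app answers or the pid dies. The helper itself lives in test/common/autotest_common.sh; the sketch below is only a simplified rendering of the idea, and the probe RPC (rpc_get_methods) and retry cadence are assumptions rather than the helper's exact internals.

    # Simplified stand-in for autotest_common.sh's waitforlisten.
    waitforlisten_sketch() {
        local pid=$1
        local rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            # Give up early if the target died before it started listening.
            kill -0 "$pid" 2>/dev/null || return 1
            # A cheap RPC succeeds only once the app's RPC server is up.
            ./scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1
    }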
00:06:15.773 18:01:08 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:15.773 18:01:08 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:15.773 [2024-07-24 18:01:08.791631] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 00:06:15.773 [2024-07-24 18:01:08.791671] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3239503 ] 00:06:15.773 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.773 [2024-07-24 18:01:08.846199] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.031 [2024-07-24 18:01:08.926462] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.597 18:01:09 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:16.597 18:01:09 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:06:16.597 18:01:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3239503 00:06:16.597 18:01:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:16.597 18:01:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3239503 00:06:16.855 lslocks: write error 00:06:16.855 18:01:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3239503 00:06:16.855 18:01:09 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 3239503 ']' 00:06:16.855 18:01:09 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 3239503 00:06:16.855 18:01:09 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:06:16.855 18:01:09 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:16.855 18:01:09 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3239503 00:06:16.855 18:01:09 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:16.855 18:01:09 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:16.855 18:01:09 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3239503' 00:06:16.855 killing process with pid 3239503 00:06:16.855 18:01:09 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 3239503 00:06:16.855 18:01:09 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 3239503 00:06:17.422 18:01:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3239503 00:06:17.422 18:01:10 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:06:17.422 18:01:10 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 3239503 00:06:17.422 18:01:10 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:17.422 18:01:10 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:17.422 18:01:10 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:17.422 18:01:10 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:17.422 18:01:10 event.cpu_locks.default_locks -- 
common/autotest_common.sh@653 -- # waitforlisten 3239503 00:06:17.422 18:01:10 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 3239503 ']' 00:06:17.422 18:01:10 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.422 18:01:10 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:17.422 18:01:10 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:17.422 18:01:10 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:17.422 18:01:10 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:17.422 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (3239503) - No such process 00:06:17.422 ERROR: process (pid: 3239503) is no longer running 00:06:17.422 18:01:10 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:17.422 18:01:10 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:06:17.422 18:01:10 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:06:17.422 18:01:10 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:17.422 18:01:10 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:17.422 18:01:10 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:17.422 18:01:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:17.422 18:01:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:17.422 18:01:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:17.422 18:01:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:17.422 00:06:17.422 real 0m1.475s 00:06:17.422 user 0m1.533s 00:06:17.422 sys 0m0.475s 00:06:17.422 18:01:10 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:17.422 18:01:10 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:17.422 ************************************ 00:06:17.422 END TEST default_locks 00:06:17.422 ************************************ 00:06:17.422 18:01:10 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:17.422 18:01:10 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:17.422 18:01:10 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:17.422 18:01:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:17.422 ************************************ 00:06:17.422 START TEST default_locks_via_rpc 00:06:17.422 ************************************ 00:06:17.423 18:01:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:06:17.423 18:01:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3239767 00:06:17.423 18:01:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3239767 00:06:17.423 18:01:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 
0x1 00:06:17.423 18:01:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 3239767 ']' 00:06:17.423 18:01:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.423 18:01:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:17.423 18:01:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:17.423 18:01:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:17.423 18:01:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.423 [2024-07-24 18:01:10.331791] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 00:06:17.423 [2024-07-24 18:01:10.331831] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3239767 ] 00:06:17.423 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.423 [2024-07-24 18:01:10.385680] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.423 [2024-07-24 18:01:10.466100] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.356 18:01:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:18.356 18:01:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:18.356 18:01:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:18.356 18:01:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:18.356 18:01:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.356 18:01:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:18.356 18:01:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:18.356 18:01:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:18.356 18:01:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:18.356 18:01:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:18.356 18:01:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:18.356 18:01:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:18.356 18:01:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.356 18:01:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:18.356 18:01:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3239767 00:06:18.356 18:01:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3239767 00:06:18.356 18:01:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:18.615 18:01:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 
3239767 00:06:18.615 18:01:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 3239767 ']' 00:06:18.615 18:01:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 3239767 00:06:18.615 18:01:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:06:18.615 18:01:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:18.615 18:01:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3239767 00:06:18.615 18:01:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:18.615 18:01:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:18.615 18:01:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3239767' 00:06:18.615 killing process with pid 3239767 00:06:18.615 18:01:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 3239767 00:06:18.615 18:01:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 3239767 00:06:18.874 00:06:18.874 real 0m1.573s 00:06:18.874 user 0m1.644s 00:06:18.874 sys 0m0.505s 00:06:18.874 18:01:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:18.874 18:01:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.874 ************************************ 00:06:18.874 END TEST default_locks_via_rpc 00:06:18.874 ************************************ 00:06:18.874 18:01:11 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:18.874 18:01:11 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:18.874 18:01:11 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:18.874 18:01:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:18.874 ************************************ 00:06:18.874 START TEST non_locking_app_on_locked_coremask 00:06:18.874 ************************************ 00:06:18.874 18:01:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:06:18.874 18:01:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3240038 00:06:18.874 18:01:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3240038 /var/tmp/spdk.sock 00:06:18.874 18:01:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:18.874 18:01:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3240038 ']' 00:06:18.874 18:01:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.874 18:01:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:18.874 18:01:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:18.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:18.874 18:01:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:18.874 18:01:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:19.132 [2024-07-24 18:01:11.962048] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 00:06:19.132 [2024-07-24 18:01:11.962086] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3240038 ] 00:06:19.132 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.132 [2024-07-24 18:01:12.014560] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.132 [2024-07-24 18:01:12.094075] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.699 18:01:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:19.699 18:01:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:19.699 18:01:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:19.699 18:01:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3240203 00:06:19.699 18:01:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3240203 /var/tmp/spdk2.sock 00:06:19.700 18:01:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3240203 ']' 00:06:19.700 18:01:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:19.700 18:01:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:19.700 18:01:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:19.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:19.700 18:01:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:19.700 18:01:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:19.700 [2024-07-24 18:01:12.781430] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 00:06:19.700 [2024-07-24 18:01:12.781477] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3240203 ] 00:06:19.958 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.958 [2024-07-24 18:01:12.855680] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
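What this start-up sequence demonstrates: the first spdk_tgt (pid 3240038) was launched normally with -m 0x1 and claimed the core-0 lock file, while the second (pid 3240203) was pointed at a separate RPC socket and passed --disable-cpumask-locks, so it logged "CPU core locks deactivated." and came up on the very same core without a conflict. A condensed sketch of that arrangement, with the startup synchronization and cleanup the test performs trimmed down to comments:

    # Two targets on core 0 - possible only because the second one opts out
    # of the per-core lock files with --disable-cpumask-locks.
    ./build/bin/spdk_tgt -m 0x1 &                 # claims /var/tmp/spdk_cpu_lock_000
    pid1=$!
    # ... wait for /var/tmp/spdk.sock to answer ...
    ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    pid2=$!                                       # same mask, but takes no lock
    # ... run the test against both sockets, then:
    kill "$pid1" "$pid2"; wait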
00:06:19.958 [2024-07-24 18:01:12.855707] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.958 [2024-07-24 18:01:13.009675] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.523 18:01:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:20.523 18:01:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:20.523 18:01:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3240038 00:06:20.523 18:01:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3240038 00:06:20.523 18:01:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:21.088 lslocks: write error 00:06:21.088 18:01:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3240038 00:06:21.088 18:01:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 3240038 ']' 00:06:21.088 18:01:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 3240038 00:06:21.088 18:01:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:21.088 18:01:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:21.088 18:01:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3240038 00:06:21.089 18:01:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:21.089 18:01:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:21.089 18:01:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3240038' 00:06:21.089 killing process with pid 3240038 00:06:21.089 18:01:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 3240038 00:06:21.089 18:01:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 3240038 00:06:21.655 18:01:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3240203 00:06:21.655 18:01:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 3240203 ']' 00:06:21.655 18:01:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 3240203 00:06:21.914 18:01:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:21.914 18:01:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:21.914 18:01:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3240203 00:06:21.914 18:01:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:21.914 18:01:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:21.914 18:01:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3240203' 00:06:21.914 
killing process with pid 3240203 00:06:21.914 18:01:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 3240203 00:06:21.914 18:01:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 3240203 00:06:22.172 00:06:22.172 real 0m3.178s 00:06:22.172 user 0m3.383s 00:06:22.172 sys 0m0.901s 00:06:22.172 18:01:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:22.172 18:01:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:22.172 ************************************ 00:06:22.172 END TEST non_locking_app_on_locked_coremask 00:06:22.172 ************************************ 00:06:22.172 18:01:15 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:22.172 18:01:15 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:22.172 18:01:15 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:22.172 18:01:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:22.172 ************************************ 00:06:22.172 START TEST locking_app_on_unlocked_coremask 00:06:22.172 ************************************ 00:06:22.172 18:01:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:06:22.172 18:01:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3240541 00:06:22.172 18:01:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3240541 /var/tmp/spdk.sock 00:06:22.172 18:01:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:22.172 18:01:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3240541 ']' 00:06:22.172 18:01:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.172 18:01:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:22.172 18:01:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:22.172 18:01:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:22.172 18:01:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:22.172 [2024-07-24 18:01:15.209679] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 00:06:22.172 [2024-07-24 18:01:15.209721] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3240541 ] 00:06:22.172 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.430 [2024-07-24 18:01:15.264752] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
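A note on the locks_exist checks (cpu_locks.sh@22) and the "lslocks: write error" lines that accompany them: the helper pipes the file locks held by a pid into grep -q, which exits at the first spdk_cpu_lock match and closes the pipe, so lslocks' complaint about a failed write is benign noise rather than a test failure. The whole check reduces to the two commands below.

    # Succeed only if the given pid holds a lock on an spdk_cpu_lock_* file.
    locks_exist_sketch() {
        local pid=$1
        # grep -q exits on the first match; lslocks may then report a
        # harmless "write error" when its remaining output hits a closed pipe.
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }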
00:06:22.430 [2024-07-24 18:01:15.264777] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.430 [2024-07-24 18:01:15.343065] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.997 18:01:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:22.997 18:01:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:22.997 18:01:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:22.997 18:01:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3240768 00:06:22.997 18:01:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3240768 /var/tmp/spdk2.sock 00:06:22.997 18:01:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3240768 ']' 00:06:22.997 18:01:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:22.997 18:01:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:22.997 18:01:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:22.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:22.997 18:01:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:22.997 18:01:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:22.997 [2024-07-24 18:01:16.047264] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 
00:06:22.997 [2024-07-24 18:01:16.047308] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3240768 ] 00:06:22.997 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.256 [2024-07-24 18:01:16.122353] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.256 [2024-07-24 18:01:16.267901] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.821 18:01:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:23.821 18:01:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:23.821 18:01:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3240768 00:06:23.821 18:01:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3240768 00:06:23.821 18:01:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:24.387 lslocks: write error 00:06:24.387 18:01:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3240541 00:06:24.387 18:01:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 3240541 ']' 00:06:24.387 18:01:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 3240541 00:06:24.387 18:01:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:24.387 18:01:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:24.387 18:01:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3240541 00:06:24.387 18:01:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:24.387 18:01:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:24.387 18:01:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3240541' 00:06:24.387 killing process with pid 3240541 00:06:24.387 18:01:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 3240541 00:06:24.387 18:01:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 3240541 00:06:24.954 18:01:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3240768 00:06:24.954 18:01:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 3240768 ']' 00:06:24.954 18:01:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 3240768 00:06:24.954 18:01:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:24.954 18:01:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:24.954 18:01:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3240768 00:06:24.954 18:01:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # 
process_name=reactor_0 00:06:24.954 18:01:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:24.954 18:01:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3240768' 00:06:24.954 killing process with pid 3240768 00:06:24.954 18:01:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 3240768 00:06:24.954 18:01:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 3240768 00:06:25.215 00:06:25.215 real 0m3.132s 00:06:25.215 user 0m3.352s 00:06:25.215 sys 0m0.882s 00:06:25.215 18:01:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:25.215 18:01:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:25.215 ************************************ 00:06:25.215 END TEST locking_app_on_unlocked_coremask 00:06:25.215 ************************************ 00:06:25.506 18:01:18 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:25.506 18:01:18 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:25.506 18:01:18 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:25.506 18:01:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:25.506 ************************************ 00:06:25.506 START TEST locking_app_on_locked_coremask 00:06:25.506 ************************************ 00:06:25.506 18:01:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:06:25.506 18:01:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3241259 00:06:25.506 18:01:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3241259 /var/tmp/spdk.sock 00:06:25.506 18:01:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:25.506 18:01:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3241259 ']' 00:06:25.506 18:01:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.506 18:01:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:25.506 18:01:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:25.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:25.506 18:01:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:25.506 18:01:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:25.506 [2024-07-24 18:01:18.404931] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 
00:06:25.506 [2024-07-24 18:01:18.404971] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3241259 ] 00:06:25.506 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.506 [2024-07-24 18:01:18.458896] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.507 [2024-07-24 18:01:18.529601] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.441 18:01:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:26.441 18:01:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:26.441 18:01:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:26.441 18:01:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3241273 00:06:26.441 18:01:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3241273 /var/tmp/spdk2.sock 00:06:26.441 18:01:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:26.441 18:01:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 3241273 /var/tmp/spdk2.sock 00:06:26.441 18:01:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:26.441 18:01:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:26.441 18:01:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:26.441 18:01:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:26.441 18:01:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 3241273 /var/tmp/spdk2.sock 00:06:26.441 18:01:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 3241273 ']' 00:06:26.441 18:01:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:26.441 18:01:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:26.441 18:01:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:26.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:26.441 18:01:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:26.441 18:01:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:26.441 [2024-07-24 18:01:19.235941] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 
00:06:26.441 [2024-07-24 18:01:19.235982] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3241273 ] 00:06:26.441 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.441 [2024-07-24 18:01:19.309256] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3241259 has claimed it. 00:06:26.441 [2024-07-24 18:01:19.309294] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:27.007 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (3241273) - No such process 00:06:27.007 ERROR: process (pid: 3241273) is no longer running 00:06:27.007 18:01:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:27.007 18:01:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:27.007 18:01:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:27.007 18:01:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:27.007 18:01:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:27.007 18:01:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:27.007 18:01:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3241259 00:06:27.007 18:01:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3241259 00:06:27.007 18:01:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:27.573 lslocks: write error 00:06:27.573 18:01:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3241259 00:06:27.573 18:01:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 3241259 ']' 00:06:27.573 18:01:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 3241259 00:06:27.573 18:01:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:27.573 18:01:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:27.573 18:01:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3241259 00:06:27.573 18:01:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:27.574 18:01:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:27.574 18:01:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3241259' 00:06:27.574 killing process with pid 3241259 00:06:27.574 18:01:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 3241259 00:06:27.574 18:01:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 3241259 00:06:27.832 00:06:27.832 real 0m2.420s 00:06:27.832 user 0m2.656s 00:06:27.832 sys 0m0.652s 00:06:27.832 18:01:20 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:27.832 18:01:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:27.832 ************************************ 00:06:27.832 END TEST locking_app_on_locked_coremask 00:06:27.832 ************************************ 00:06:27.832 18:01:20 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:27.832 18:01:20 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:27.832 18:01:20 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:27.832 18:01:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:27.832 ************************************ 00:06:27.832 START TEST locking_overlapped_coremask 00:06:27.832 ************************************ 00:06:27.832 18:01:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:06:27.832 18:01:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3241589 00:06:27.832 18:01:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3241589 /var/tmp/spdk.sock 00:06:27.832 18:01:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 3241589 ']' 00:06:27.833 18:01:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.833 18:01:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:27.833 18:01:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.833 18:01:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:27.833 18:01:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:27.833 18:01:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:27.833 [2024-07-24 18:01:20.880747] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 
00:06:27.833 [2024-07-24 18:01:20.880786] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3241589 ] 00:06:27.833 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.091 [2024-07-24 18:01:20.933659] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:28.091 [2024-07-24 18:01:21.015079] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:28.091 [2024-07-24 18:01:21.015097] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:28.091 [2024-07-24 18:01:21.015099] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.658 18:01:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:28.658 18:01:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:28.658 18:01:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:28.658 18:01:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3241766 00:06:28.658 18:01:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3241766 /var/tmp/spdk2.sock 00:06:28.658 18:01:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:28.658 18:01:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 3241766 /var/tmp/spdk2.sock 00:06:28.658 18:01:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:28.658 18:01:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:28.658 18:01:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:28.658 18:01:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:28.658 18:01:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 3241766 /var/tmp/spdk2.sock 00:06:28.658 18:01:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 3241766 ']' 00:06:28.658 18:01:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:28.658 18:01:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:28.658 18:01:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:28.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:28.658 18:01:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:28.658 18:01:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:28.658 [2024-07-24 18:01:21.720796] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 
00:06:28.658 [2024-07-24 18:01:21.720844] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3241766 ] 00:06:28.916 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.916 [2024-07-24 18:01:21.795059] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3241589 has claimed it. 00:06:28.916 [2024-07-24 18:01:21.795100] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:29.483 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (3241766) - No such process 00:06:29.483 ERROR: process (pid: 3241766) is no longer running 00:06:29.483 18:01:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:29.483 18:01:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:29.483 18:01:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:29.483 18:01:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:29.483 18:01:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:29.483 18:01:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:29.483 18:01:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:29.483 18:01:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:29.483 18:01:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:29.483 18:01:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:29.483 18:01:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3241589 00:06:29.483 18:01:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 3241589 ']' 00:06:29.483 18:01:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 3241589 00:06:29.483 18:01:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:06:29.483 18:01:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:29.483 18:01:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3241589 00:06:29.483 18:01:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:29.483 18:01:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:29.483 18:01:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3241589' 00:06:29.483 killing process with pid 3241589 00:06:29.483 18:01:22 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@969 -- # kill 3241589 00:06:29.483 18:01:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 3241589 00:06:29.742 00:06:29.742 real 0m1.871s 00:06:29.742 user 0m5.272s 00:06:29.742 sys 0m0.392s 00:06:29.742 18:01:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:29.742 18:01:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:29.742 ************************************ 00:06:29.742 END TEST locking_overlapped_coremask 00:06:29.742 ************************************ 00:06:29.742 18:01:22 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:29.742 18:01:22 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:29.742 18:01:22 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:29.742 18:01:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:29.742 ************************************ 00:06:29.742 START TEST locking_overlapped_coremask_via_rpc 00:06:29.742 ************************************ 00:06:29.742 18:01:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:06:29.742 18:01:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3242024 00:06:29.742 18:01:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3242024 /var/tmp/spdk.sock 00:06:29.742 18:01:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:29.742 18:01:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 3242024 ']' 00:06:29.742 18:01:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.742 18:01:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:29.742 18:01:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.742 18:01:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:29.742 18:01:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.001 [2024-07-24 18:01:22.825846] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 00:06:30.001 [2024-07-24 18:01:22.825889] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3242024 ] 00:06:30.001 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.001 [2024-07-24 18:01:22.882010] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
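The locking_overlapped_coremask run that just finished is the negative case for partially overlapping masks: the first target took -m 0x7 and locked cores 0-2, the second asked for -m 0x1c (cores 2-4) and aborted with "Cannot create lock on core 2, probably process 3241589 has claimed it.", after which check_remaining_locks confirmed exactly spdk_cpu_lock_000 through _002 were still held. Reduced to a sketch (lock-file names taken from that check; startup waits omitted):

    # Overlapping core masks collide on the shared core's lock file.
    ./build/bin/spdk_tgt -m 0x7 &                 # cores 0-2 -> locks _000.._002
    pid1=$!
    # Cores 2-4 overlap on core 2, so this instance refuses to start:
    ./build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock || echo "refused, as expected"
    ls /var/tmp/spdk_cpu_lock_*                   # expect _000 _001 _002 only
    kill "$pid1"; wait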
00:06:30.001 [2024-07-24 18:01:22.882037] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:30.001 [2024-07-24 18:01:22.954027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:30.001 [2024-07-24 18:01:22.954045] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:30.001 [2024-07-24 18:01:22.954047] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.565 18:01:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:30.565 18:01:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:30.565 18:01:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3242110 00:06:30.565 18:01:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3242110 /var/tmp/spdk2.sock 00:06:30.565 18:01:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:30.565 18:01:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 3242110 ']' 00:06:30.565 18:01:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:30.565 18:01:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:30.565 18:01:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:30.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:30.565 18:01:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:30.565 18:01:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.823 [2024-07-24 18:01:23.671579] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 00:06:30.823 [2024-07-24 18:01:23.671630] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3242110 ] 00:06:30.823 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.823 [2024-07-24 18:01:23.761660] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:30.823 [2024-07-24 18:01:23.761693] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:31.081 [2024-07-24 18:01:23.913518] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:31.081 [2024-07-24 18:01:23.917541] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:31.081 [2024-07-24 18:01:23.917541] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:31.646 18:01:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:31.646 18:01:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:31.646 18:01:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:31.646 18:01:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:31.646 18:01:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:31.646 18:01:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:31.646 18:01:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:31.646 18:01:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:31.646 18:01:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:31.646 18:01:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:31.646 18:01:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:31.646 18:01:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:31.646 18:01:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:31.646 18:01:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:31.646 18:01:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:31.646 18:01:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:31.646 [2024-07-24 18:01:24.508563] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3242024 has claimed it. 
00:06:31.646 request: 00:06:31.646 { 00:06:31.646 "method": "framework_enable_cpumask_locks", 00:06:31.646 "req_id": 1 00:06:31.646 } 00:06:31.646 Got JSON-RPC error response 00:06:31.646 response: 00:06:31.646 { 00:06:31.646 "code": -32603, 00:06:31.646 "message": "Failed to claim CPU core: 2" 00:06:31.646 } 00:06:31.646 18:01:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:31.646 18:01:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:31.646 18:01:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:31.646 18:01:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:31.646 18:01:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:31.646 18:01:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3242024 /var/tmp/spdk.sock 00:06:31.646 18:01:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 3242024 ']' 00:06:31.646 18:01:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.646 18:01:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:31.646 18:01:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:31.646 18:01:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:31.646 18:01:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:31.646 18:01:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:31.646 18:01:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:31.646 18:01:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3242110 /var/tmp/spdk2.sock 00:06:31.646 18:01:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 3242110 ']' 00:06:31.646 18:01:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:31.646 18:01:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:31.646 18:01:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:31.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:31.646 18:01:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:31.646 18:01:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:31.904 18:01:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:31.904 18:01:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:31.904 18:01:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:31.904 18:01:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:31.904 18:01:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:31.904 18:01:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:31.904 00:06:31.904 real 0m2.109s 00:06:31.904 user 0m0.865s 00:06:31.904 sys 0m0.173s 00:06:31.904 18:01:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:31.904 18:01:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:31.904 ************************************ 00:06:31.904 END TEST locking_overlapped_coremask_via_rpc 00:06:31.904 ************************************ 00:06:31.904 18:01:24 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:31.904 18:01:24 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3242024 ]] 00:06:31.904 18:01:24 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3242024 00:06:31.904 18:01:24 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 3242024 ']' 00:06:31.904 18:01:24 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 3242024 00:06:31.904 18:01:24 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:31.904 18:01:24 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:31.904 18:01:24 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3242024 00:06:31.904 18:01:24 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:31.904 18:01:24 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:31.904 18:01:24 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3242024' 00:06:31.904 killing process with pid 3242024 00:06:31.904 18:01:24 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 3242024 00:06:31.904 18:01:24 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 3242024 00:06:32.471 18:01:25 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3242110 ]] 00:06:32.471 18:01:25 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3242110 00:06:32.471 18:01:25 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 3242110 ']' 00:06:32.471 18:01:25 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 3242110 00:06:32.471 18:01:25 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:32.471 18:01:25 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' 
Linux = Linux ']' 00:06:32.471 18:01:25 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3242110 00:06:32.471 18:01:25 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:32.471 18:01:25 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:32.471 18:01:25 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3242110' 00:06:32.471 killing process with pid 3242110 00:06:32.471 18:01:25 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 3242110 00:06:32.471 18:01:25 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 3242110 00:06:32.729 18:01:25 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:32.729 18:01:25 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:32.729 18:01:25 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3242024 ]] 00:06:32.729 18:01:25 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3242024 00:06:32.729 18:01:25 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 3242024 ']' 00:06:32.729 18:01:25 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 3242024 00:06:32.729 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3242024) - No such process 00:06:32.729 18:01:25 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 3242024 is not found' 00:06:32.729 Process with pid 3242024 is not found 00:06:32.730 18:01:25 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3242110 ]] 00:06:32.730 18:01:25 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3242110 00:06:32.730 18:01:25 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 3242110 ']' 00:06:32.730 18:01:25 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 3242110 00:06:32.730 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3242110) - No such process 00:06:32.730 18:01:25 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 3242110 is not found' 00:06:32.730 Process with pid 3242110 is not found 00:06:32.730 18:01:25 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:32.730 00:06:32.730 real 0m17.037s 00:06:32.730 user 0m29.252s 00:06:32.730 sys 0m4.902s 00:06:32.730 18:01:25 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:32.730 18:01:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:32.730 ************************************ 00:06:32.730 END TEST cpu_locks 00:06:32.730 ************************************ 00:06:32.730 00:06:32.730 real 0m41.697s 00:06:32.730 user 1m19.119s 00:06:32.730 sys 0m8.181s 00:06:32.730 18:01:25 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:32.730 18:01:25 event -- common/autotest_common.sh@10 -- # set +x 00:06:32.730 ************************************ 00:06:32.730 END TEST event 00:06:32.730 ************************************ 00:06:32.730 18:01:25 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:32.730 18:01:25 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:32.730 18:01:25 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:32.730 18:01:25 -- common/autotest_common.sh@10 -- # set +x 00:06:32.730 ************************************ 00:06:32.730 START TEST thread 00:06:32.730 ************************************ 00:06:32.730 18:01:25 thread -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:32.988 * Looking for test storage... 00:06:32.988 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:32.988 18:01:25 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:32.988 18:01:25 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:32.988 18:01:25 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:32.988 18:01:25 thread -- common/autotest_common.sh@10 -- # set +x 00:06:32.988 ************************************ 00:06:32.988 START TEST thread_poller_perf 00:06:32.988 ************************************ 00:06:32.988 18:01:25 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:32.988 [2024-07-24 18:01:25.890143] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 00:06:32.988 [2024-07-24 18:01:25.890217] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3242595 ] 00:06:32.988 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.988 [2024-07-24 18:01:25.946934] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.988 [2024-07-24 18:01:26.018259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.988 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:34.362 ====================================== 00:06:34.362 busy:2107481776 (cyc) 00:06:34.362 total_run_count: 425000 00:06:34.362 tsc_hz: 2100000000 (cyc) 00:06:34.362 ====================================== 00:06:34.362 poller_cost: 4958 (cyc), 2360 (nsec) 00:06:34.362 00:06:34.362 real 0m1.222s 00:06:34.362 user 0m1.142s 00:06:34.362 sys 0m0.075s 00:06:34.362 18:01:27 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:34.362 18:01:27 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:34.362 ************************************ 00:06:34.362 END TEST thread_poller_perf 00:06:34.363 ************************************ 00:06:34.363 18:01:27 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:34.363 18:01:27 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:34.363 18:01:27 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:34.363 18:01:27 thread -- common/autotest_common.sh@10 -- # set +x 00:06:34.363 ************************************ 00:06:34.363 START TEST thread_poller_perf 00:06:34.363 ************************************ 00:06:34.363 18:01:27 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:34.363 [2024-07-24 18:01:27.161779] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 
00:06:34.363 [2024-07-24 18:01:27.161846] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3242849 ] 00:06:34.363 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.363 [2024-07-24 18:01:27.219441] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.363 [2024-07-24 18:01:27.290521] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.363 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:35.296 ====================================== 00:06:35.296 busy:2101564114 (cyc) 00:06:35.296 total_run_count: 5644000 00:06:35.296 tsc_hz: 2100000000 (cyc) 00:06:35.296 ====================================== 00:06:35.296 poller_cost: 372 (cyc), 177 (nsec) 00:06:35.296 00:06:35.297 real 0m1.216s 00:06:35.297 user 0m1.138s 00:06:35.297 sys 0m0.074s 00:06:35.297 18:01:28 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:35.297 18:01:28 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:35.297 ************************************ 00:06:35.297 END TEST thread_poller_perf 00:06:35.297 ************************************ 00:06:35.555 18:01:28 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:35.555 00:06:35.555 real 0m2.626s 00:06:35.555 user 0m2.350s 00:06:35.555 sys 0m0.280s 00:06:35.555 18:01:28 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:35.555 18:01:28 thread -- common/autotest_common.sh@10 -- # set +x 00:06:35.555 ************************************ 00:06:35.555 END TEST thread 00:06:35.555 ************************************ 00:06:35.555 18:01:28 -- spdk/autotest.sh@184 -- # [[ 0 -eq 1 ]] 00:06:35.555 18:01:28 -- spdk/autotest.sh@189 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:35.555 18:01:28 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:35.555 18:01:28 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:35.555 18:01:28 -- common/autotest_common.sh@10 -- # set +x 00:06:35.555 ************************************ 00:06:35.555 START TEST app_cmdline 00:06:35.555 ************************************ 00:06:35.555 18:01:28 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:35.555 * Looking for test storage... 00:06:35.555 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:35.555 18:01:28 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:35.555 18:01:28 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3243134 00:06:35.555 18:01:28 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3243134 00:06:35.555 18:01:28 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:35.555 18:01:28 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 3243134 ']' 00:06:35.555 18:01:28 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.555 18:01:28 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:35.555 18:01:28 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:35.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:35.555 18:01:28 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:35.555 18:01:28 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:35.555 [2024-07-24 18:01:28.566915] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 00:06:35.555 [2024-07-24 18:01:28.566963] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3243134 ] 00:06:35.555 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.555 [2024-07-24 18:01:28.620403] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.812 [2024-07-24 18:01:28.694576] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.379 18:01:29 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:36.379 18:01:29 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:06:36.379 18:01:29 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:36.637 { 00:06:36.637 "version": "SPDK v24.09-pre git sha1 ac4b3e123", 00:06:36.637 "fields": { 00:06:36.637 "major": 24, 00:06:36.637 "minor": 9, 00:06:36.637 "patch": 0, 00:06:36.637 "suffix": "-pre", 00:06:36.637 "commit": "ac4b3e123" 00:06:36.637 } 00:06:36.637 } 00:06:36.637 18:01:29 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:36.637 18:01:29 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:36.637 18:01:29 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:36.637 18:01:29 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:36.637 18:01:29 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:36.637 18:01:29 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:36.637 18:01:29 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.637 18:01:29 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:36.637 18:01:29 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:36.637 18:01:29 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:36.637 18:01:29 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:36.637 18:01:29 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:36.637 18:01:29 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:36.637 18:01:29 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:06:36.637 18:01:29 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:36.637 18:01:29 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:36.638 18:01:29 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:36.638 18:01:29 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:36.638 18:01:29 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
00:06:36.638 18:01:29 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:36.638 18:01:29 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:36.638 18:01:29 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:36.638 18:01:29 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:36.638 18:01:29 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:36.638 request: 00:06:36.638 { 00:06:36.638 "method": "env_dpdk_get_mem_stats", 00:06:36.638 "req_id": 1 00:06:36.638 } 00:06:36.638 Got JSON-RPC error response 00:06:36.638 response: 00:06:36.638 { 00:06:36.638 "code": -32601, 00:06:36.638 "message": "Method not found" 00:06:36.638 } 00:06:36.896 18:01:29 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:06:36.896 18:01:29 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:36.896 18:01:29 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:36.896 18:01:29 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:36.896 18:01:29 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3243134 00:06:36.896 18:01:29 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 3243134 ']' 00:06:36.896 18:01:29 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 3243134 00:06:36.896 18:01:29 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:06:36.896 18:01:29 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:36.896 18:01:29 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3243134 00:06:36.896 18:01:29 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:36.896 18:01:29 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:36.896 18:01:29 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3243134' 00:06:36.896 killing process with pid 3243134 00:06:36.896 18:01:29 app_cmdline -- common/autotest_common.sh@969 -- # kill 3243134 00:06:36.896 18:01:29 app_cmdline -- common/autotest_common.sh@974 -- # wait 3243134 00:06:37.154 00:06:37.154 real 0m1.632s 00:06:37.154 user 0m1.937s 00:06:37.154 sys 0m0.416s 00:06:37.154 18:01:30 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:37.154 18:01:30 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:37.154 ************************************ 00:06:37.154 END TEST app_cmdline 00:06:37.154 ************************************ 00:06:37.154 18:01:30 -- spdk/autotest.sh@190 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:37.154 18:01:30 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:37.154 18:01:30 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:37.154 18:01:30 -- common/autotest_common.sh@10 -- # set +x 00:06:37.154 ************************************ 00:06:37.154 START TEST version 00:06:37.154 ************************************ 00:06:37.154 18:01:30 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:37.154 * Looking for test storage... 
00:06:37.154 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:37.154 18:01:30 version -- app/version.sh@17 -- # get_header_version major 00:06:37.154 18:01:30 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:37.154 18:01:30 version -- app/version.sh@14 -- # cut -f2 00:06:37.154 18:01:30 version -- app/version.sh@14 -- # tr -d '"' 00:06:37.154 18:01:30 version -- app/version.sh@17 -- # major=24 00:06:37.412 18:01:30 version -- app/version.sh@18 -- # get_header_version minor 00:06:37.412 18:01:30 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:37.412 18:01:30 version -- app/version.sh@14 -- # cut -f2 00:06:37.412 18:01:30 version -- app/version.sh@14 -- # tr -d '"' 00:06:37.412 18:01:30 version -- app/version.sh@18 -- # minor=9 00:06:37.412 18:01:30 version -- app/version.sh@19 -- # get_header_version patch 00:06:37.412 18:01:30 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:37.412 18:01:30 version -- app/version.sh@14 -- # cut -f2 00:06:37.412 18:01:30 version -- app/version.sh@14 -- # tr -d '"' 00:06:37.412 18:01:30 version -- app/version.sh@19 -- # patch=0 00:06:37.412 18:01:30 version -- app/version.sh@20 -- # get_header_version suffix 00:06:37.412 18:01:30 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:37.412 18:01:30 version -- app/version.sh@14 -- # cut -f2 00:06:37.412 18:01:30 version -- app/version.sh@14 -- # tr -d '"' 00:06:37.412 18:01:30 version -- app/version.sh@20 -- # suffix=-pre 00:06:37.412 18:01:30 version -- app/version.sh@22 -- # version=24.9 00:06:37.412 18:01:30 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:37.412 18:01:30 version -- app/version.sh@28 -- # version=24.9rc0 00:06:37.412 18:01:30 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:37.413 18:01:30 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:37.413 18:01:30 version -- app/version.sh@30 -- # py_version=24.9rc0 00:06:37.413 18:01:30 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:37.413 00:06:37.413 real 0m0.148s 00:06:37.413 user 0m0.083s 00:06:37.413 sys 0m0.101s 00:06:37.413 18:01:30 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:37.413 18:01:30 version -- common/autotest_common.sh@10 -- # set +x 00:06:37.413 ************************************ 00:06:37.413 END TEST version 00:06:37.413 ************************************ 00:06:37.413 18:01:30 -- spdk/autotest.sh@192 -- # '[' 0 -eq 1 ']' 00:06:37.413 18:01:30 -- spdk/autotest.sh@202 -- # uname -s 00:06:37.413 18:01:30 -- spdk/autotest.sh@202 -- # [[ Linux == Linux ]] 00:06:37.413 18:01:30 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:06:37.413 18:01:30 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:06:37.413 18:01:30 -- spdk/autotest.sh@215 -- # '[' 0 -eq 1 ']' 
00:06:37.413 18:01:30 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:06:37.413 18:01:30 -- spdk/autotest.sh@264 -- # timing_exit lib 00:06:37.413 18:01:30 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:37.413 18:01:30 -- common/autotest_common.sh@10 -- # set +x 00:06:37.413 18:01:30 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:06:37.413 18:01:30 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']' 00:06:37.413 18:01:30 -- spdk/autotest.sh@283 -- # '[' 1 -eq 1 ']' 00:06:37.413 18:01:30 -- spdk/autotest.sh@284 -- # export NET_TYPE 00:06:37.413 18:01:30 -- spdk/autotest.sh@287 -- # '[' tcp = rdma ']' 00:06:37.413 18:01:30 -- spdk/autotest.sh@290 -- # '[' tcp = tcp ']' 00:06:37.413 18:01:30 -- spdk/autotest.sh@291 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:37.413 18:01:30 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:37.413 18:01:30 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:37.413 18:01:30 -- common/autotest_common.sh@10 -- # set +x 00:06:37.413 ************************************ 00:06:37.413 START TEST nvmf_tcp 00:06:37.413 ************************************ 00:06:37.413 18:01:30 nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:37.413 * Looking for test storage... 00:06:37.413 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:37.413 18:01:30 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:37.413 18:01:30 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:37.413 18:01:30 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:37.413 18:01:30 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:37.413 18:01:30 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:37.413 18:01:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:37.671 ************************************ 00:06:37.671 START TEST nvmf_target_core 00:06:37.671 ************************************ 00:06:37.671 18:01:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:37.671 * Looking for test storage... 00:06:37.671 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:37.671 18:01:30 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:06:37.671 18:01:30 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:37.671 18:01:30 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:37.671 18:01:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:37.671 18:01:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:37.671 18:01:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:37.671 18:01:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:37.671 18:01:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:37.671 18:01:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:37.671 18:01:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:37.671 18:01:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:37.671 18:01:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:37.671 18:01:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:37.671 18:01:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:37.671 18:01:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:06:37.671 18:01:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:06:37.671 18:01:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:37.671 18:01:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:37.671 18:01:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:37.671 18:01:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:37.671 18:01:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:37.671 18:01:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:37.671 18:01:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:37.671 18:01:30 nvmf_tcp.nvmf_target_core -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:37.671 18:01:30 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.672 18:01:30 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.672 18:01:30 nvmf_tcp.nvmf_target_core -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.672 18:01:30 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:37.672 18:01:30 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.672 18:01:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@47 -- # : 0 00:06:37.672 18:01:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:37.672 18:01:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:37.672 18:01:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:37.672 18:01:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:37.672 18:01:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:37.672 18:01:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:37.672 18:01:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:37.672 18:01:30 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:37.672 18:01:30 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:37.672 18:01:30 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:06:37.672 18:01:30 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:06:37.672 18:01:30 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:37.672 18:01:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:37.672 18:01:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:37.672 18:01:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:37.672 ************************************ 00:06:37.672 START TEST nvmf_abort 00:06:37.672 ************************************ 00:06:37.672 18:01:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:37.672 * Looking for test storage... 
00:06:37.931 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:37.931 18:01:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:37.931 18:01:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:06:37.931 18:01:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:37.931 18:01:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:37.931 18:01:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:37.931 18:01:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:37.931 18:01:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:37.931 18:01:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:37.931 18:01:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:37.931 18:01:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:37.931 18:01:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:37.931 18:01:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:37.931 18:01:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:06:37.931 18:01:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:06:37.931 18:01:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:37.931 18:01:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:37.931 18:01:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:37.931 18:01:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:37.931 18:01:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:37.931 18:01:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:37.931 18:01:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:37.931 18:01:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:37.931 18:01:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.931 18:01:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.931 18:01:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.931 18:01:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:06:37.931 18:01:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.931 18:01:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:06:37.931 18:01:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:37.931 18:01:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:37.931 18:01:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:37.931 18:01:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:37.932 18:01:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:37.932 18:01:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:37.932 18:01:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:37.932 18:01:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:37.932 18:01:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:37.932 18:01:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:06:37.932 18:01:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:06:37.932 18:01:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:37.932 18:01:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
00:06:37.932 18:01:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:37.932 18:01:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:37.932 18:01:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:37.932 18:01:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:37.932 18:01:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:37.932 18:01:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:37.932 18:01:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:37.932 18:01:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:37.932 18:01:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:06:37.932 18:01:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:43.194 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:43.194 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:06:43.194 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:43.194 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:43.194 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:43.194 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:43.194 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:43.194 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:06:43.194 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:43.194 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:06:43.194 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:06:43.194 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:06:43.194 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:06:43.194 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:06:43.194 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:06:43.194 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:43.194 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:43.194 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:43.194 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:43.194 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:43.194 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:43.194 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:06:43.194 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:43.194 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:43.194 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:43.194 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:43.194 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:43.194 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:43.194 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:43.194 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:43.194 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:43.194 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:43.194 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:43.194 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:43.194 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:43.194 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:43.194 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:43.194 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:43.194 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:43.194 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:43.194 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:43.194 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:43.194 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:43.194 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:43.194 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:43.194 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:43.194 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:43.194 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:43.194 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:43.194 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:43.194 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:43.194 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:43.194 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:43.194 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:43.194 18:01:36 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:43.194 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:43.194 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:43.194 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:43.194 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:43.194 Found net devices under 0000:86:00.0: cvl_0_0 00:06:43.194 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:43.194 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:43.194 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:43.194 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:43.194 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:43.194 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:43.194 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:43.194 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:43.194 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:43.194 Found net devices under 0000:86:00.1: cvl_0_1 00:06:43.194 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:43.194 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:43.194 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:06:43.194 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:43.194 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:43.194 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:43.194 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:43.195 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:43.195 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:43.195 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:43.195 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:43.195 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:43.195 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:43.195 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:43.195 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:43.195 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:43.195 
18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:43.195 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:43.195 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:43.195 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:43.195 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:43.195 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:43.195 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:43.452 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:43.452 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:43.452 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:43.452 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:43.452 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:06:43.452 00:06:43.452 --- 10.0.0.2 ping statistics --- 00:06:43.452 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:43.452 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:06:43.452 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:43.452 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:43.452 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.080 ms 00:06:43.452 00:06:43.452 --- 10.0.0.1 ping statistics --- 00:06:43.452 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:43.452 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:06:43.452 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:43.452 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:06:43.452 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:43.452 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:43.452 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:43.452 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:43.452 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:43.452 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:43.452 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:43.452 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:06:43.452 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:43.452 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:43.452 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:43.452 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # 
nvmfpid=3246775 00:06:43.452 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 3246775 00:06:43.452 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 3246775 ']' 00:06:43.452 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.452 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:43.452 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:43.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:43.452 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:43.452 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:43.452 18:01:36 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:43.452 [2024-07-24 18:01:36.407248] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 00:06:43.452 [2024-07-24 18:01:36.407289] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:43.452 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.452 [2024-07-24 18:01:36.466346] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:43.709 [2024-07-24 18:01:36.547720] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:43.709 [2024-07-24 18:01:36.547752] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:43.709 [2024-07-24 18:01:36.547759] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:43.709 [2024-07-24 18:01:36.547765] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:43.709 [2024-07-24 18:01:36.547769] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
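The target startup traced here reduces to launching nvmf_tgt inside the freshly built namespace and then polling its RPC socket before any configuration is issued; the reactor threads come up in the records just below. A minimal standalone sketch, run from an SPDK checkout (the namespace name and flags are taken from this trace; the polling loop stands in for the harness's waitforlisten helper, and rpc_get_methods is simply a cheap RPC any live target answers):

  # start the target inside the namespace created earlier in this trace
  sudo ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  # block until the target answers on its default UNIX-domain RPC socket
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
  done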
00:06:43.709 [2024-07-24 18:01:36.547803] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:43.709 [2024-07-24 18:01:36.547821] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:43.709 [2024-07-24 18:01:36.547822] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:44.275 18:01:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:44.275 18:01:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:06:44.275 18:01:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:44.275 18:01:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:44.275 18:01:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:44.275 18:01:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:44.275 18:01:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:06:44.275 18:01:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.275 18:01:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:44.275 [2024-07-24 18:01:37.247261] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:44.275 18:01:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.275 18:01:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:06:44.275 18:01:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.275 18:01:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:44.275 Malloc0 00:06:44.275 18:01:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.275 18:01:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:44.275 18:01:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.275 18:01:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:44.275 Delay0 00:06:44.275 18:01:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.275 18:01:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:44.275 18:01:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.275 18:01:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:44.275 18:01:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.275 18:01:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:06:44.275 18:01:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.275 18:01:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:44.275 18:01:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:06:44.275 18:01:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:44.275 18:01:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.275 18:01:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:44.275 [2024-07-24 18:01:37.321853] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:44.275 18:01:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.275 18:01:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:44.275 18:01:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:44.275 18:01:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:44.275 18:01:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:44.275 18:01:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:06:44.534 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.534 [2024-07-24 18:01:37.428257] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:06:47.062 Initializing NVMe Controllers 00:06:47.062 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:47.062 controller IO queue size 128 less than required 00:06:47.062 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:06:47.062 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:06:47.062 Initialization complete. Launching workers. 
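The rpc_cmd calls traced above (rpc_cmd is effectively a wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock) reduce to the standalone sequence below, with arguments copied from this trace. The delay bdev's one-second latencies keep I/O in flight long enough for the abort example to have something to cancel, which is why the statistics that follow show tens of thousands of aborts submitted:

  # TCP transport, then a 64 MiB malloc bdev wrapped in an artificially slow delay bdev
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
  ./scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0
  ./scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000
  # export Delay0 through subsystem cnode0 on the namespaced interface
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420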
00:06:47.062 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 44985 00:06:47.062 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 45046, failed to submit 62 00:06:47.062 success 44989, unsuccess 57, failed 0 00:06:47.062 18:01:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:47.062 18:01:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:47.062 18:01:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:47.062 18:01:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:47.062 18:01:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:06:47.062 18:01:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:06:47.062 18:01:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:47.062 18:01:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:06:47.062 18:01:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:47.062 18:01:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:06:47.062 18:01:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:47.062 18:01:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:47.062 rmmod nvme_tcp 00:06:47.062 rmmod nvme_fabrics 00:06:47.062 rmmod nvme_keyring 00:06:47.062 18:01:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:47.062 18:01:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:06:47.062 18:01:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:06:47.062 18:01:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 3246775 ']' 00:06:47.062 18:01:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 3246775 00:06:47.062 18:01:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 3246775 ']' 00:06:47.062 18:01:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 3246775 00:06:47.062 18:01:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:06:47.062 18:01:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:47.062 18:01:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3246775 00:06:47.062 18:01:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:06:47.062 18:01:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:06:47.062 18:01:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3246775' 00:06:47.062 killing process with pid 3246775 00:06:47.062 18:01:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 3246775 00:06:47.062 18:01:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 3246775 00:06:47.062 18:01:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:47.062 18:01:39 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:47.062 18:01:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:47.062 18:01:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:47.062 18:01:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:47.062 18:01:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:47.062 18:01:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:47.062 18:01:39 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:48.965 18:01:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:48.965 00:06:48.965 real 0m11.335s 00:06:48.965 user 0m13.407s 00:06:48.965 sys 0m5.226s 00:06:48.965 18:01:41 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:48.965 18:01:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:48.965 ************************************ 00:06:48.965 END TEST nvmf_abort 00:06:48.965 ************************************ 00:06:48.965 18:01:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:48.965 18:01:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:48.965 18:01:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:48.965 18:01:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:49.246 ************************************ 00:06:49.246 START TEST nvmf_ns_hotplug_stress 00:06:49.246 ************************************ 00:06:49.246 18:01:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:49.246 * Looking for test storage... 
00:06:49.246 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:49.246 18:01:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:49.246 18:01:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:06:49.246 18:01:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:49.246 18:01:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:49.246 18:01:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:49.246 18:01:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:49.246 18:01:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:49.246 18:01:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:49.246 18:01:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:49.246 18:01:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:49.246 18:01:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:49.246 18:01:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:49.246 18:01:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:06:49.246 18:01:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:06:49.246 18:01:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:49.246 18:01:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:49.246 18:01:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:49.246 18:01:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:49.246 18:01:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:49.246 18:01:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:49.246 18:01:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:49.246 18:01:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:49.246 18:01:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.246 18:01:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.246 18:01:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.246 18:01:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:06:49.246 18:01:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.246 18:01:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:06:49.246 18:01:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:49.246 18:01:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:49.246 18:01:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:49.246 18:01:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:49.246 18:01:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:49.246 18:01:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:49.246 18:01:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:49.246 18:01:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:49.246 18:01:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:49.246 18:01:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:06:49.246 18:01:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:49.246 18:01:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:49.246 18:01:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:49.246 18:01:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:49.246 18:01:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:49.247 18:01:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:49.247 18:01:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:49.247 18:01:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:49.247 18:01:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:49.247 18:01:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:49.247 18:01:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:06:49.247 18:01:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:54.554 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:54.554 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:06:54.554 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:54.554 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:54.554 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:54.554 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:54.554 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:54.554 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:06:54.554 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:54.554 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:06:54.554 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:06:54.554 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:06:54.554 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 
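These array declarations (the mlx bucket follows just below) drive gather_supported_nvmf_pci_devs, which sorts NICs into e810/x722/mlx buckets by PCI vendor:device ID and, for this e810 run, keeps only the e810 bucket. A condensed equivalent using lspci, as a sketch only — the real common.sh reads a pre-built pci_bus_cache map rather than shelling out:

  # bucket supported NICs by vendor:device ID, mirroring nvmf/common.sh
  e810=($(lspci -Dnn | awk '/\[8086:(1592|159b)\]/ {print $1}'))
  x722=($(lspci -Dnn | awk '/\[8086:37d2\]/ {print $1}'))
  mlx=($(lspci -Dnn  | awk '/\[15b3:(101[3579d]|1021|a2d[6c])\]/ {print $1}'))
  pci_devs=("${e810[@]}")   # the e810 run discards the other buckets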
00:06:54.554 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:06:54.554 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:06:54.554 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:54.554 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:54.554 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:54.554 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:54.554 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:54.554 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:54.554 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:54.554 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:54.554 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:54.554 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:54.554 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:54.554 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:54.554 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:54.554 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:54.554 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:54.554 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:54.554 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:54.554 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:54.554 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:54.554 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:54.554 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:54.554 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:54.554 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:54.554 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:54.554 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:54.554 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:54.554 18:01:47 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:54.554 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:54.554 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:54.554 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:54.554 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:54.554 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:54.554 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:54.554 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:54.554 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:54.554 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:54.554 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:54.554 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:54.554 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:54.554 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:54.554 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:54.554 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:54.554 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:54.554 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:54.554 Found net devices under 0000:86:00.0: cvl_0_0 00:06:54.554 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:54.554 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:54.554 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:54.554 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:54.554 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:54.554 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:54.554 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:54.554 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:54.554 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:54.554 Found net devices under 0000:86:00.1: cvl_0_1 00:06:54.554 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:06:54.555 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:54.555 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:06:54.555 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:54.555 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:54.555 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:54.555 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:54.555 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:54.555 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:54.555 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:54.555 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:54.555 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:54.555 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:54.555 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:54.555 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:54.555 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:54.555 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:54.555 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:54.555 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:54.555 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:54.555 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:54.555 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:54.555 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:54.555 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:54.555 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:54.555 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:54.555 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:54.555 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:06:54.555 00:06:54.555 --- 10.0.0.2 ping statistics --- 00:06:54.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:54.555 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:06:54.555 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:54.555 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:54.555 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:06:54.555 00:06:54.555 --- 10.0.0.1 ping statistics --- 00:06:54.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:54.555 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:06:54.555 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:54.555 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:06:54.555 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:54.555 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:54.555 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:54.555 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:54.555 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:54.555 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:54.555 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:54.555 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:06:54.555 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:54.555 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:54.555 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:54.555 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=3250783 00:06:54.555 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 3250783 00:06:54.555 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 3250783 ']' 00:06:54.555 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.555 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:54.555 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
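With the namespace topology rebuilt and verified by the two pings above, a second target (pid 3250783) is started for the hotplug test. The topology is the same split the abort test used: cvl_0_0 holds 10.0.0.2 inside cvl_0_0_ns_spdk as the target side, cvl_0_1 holds 10.0.0.1 in the root namespace as the initiator side, and an iptables rule admits TCP port 4420. It can be inspected by hand (a sketch; names as in this trace):

  ip netns exec cvl_0_0_ns_spdk ip -4 addr show cvl_0_0       # expect 10.0.0.2/24
  ip -4 addr show cvl_0_1                                     # expect 10.0.0.1/24
  iptables -C INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # exits 0 if the rule exists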
00:06:54.555 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:54.555 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:54.555 18:01:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:54.555 [2024-07-24 18:01:47.421935] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 00:06:54.555 [2024-07-24 18:01:47.421977] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:54.555 EAL: No free 2048 kB hugepages reported on node 1 00:06:54.555 [2024-07-24 18:01:47.479480] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:54.555 [2024-07-24 18:01:47.558027] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:54.555 [2024-07-24 18:01:47.558062] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:54.555 [2024-07-24 18:01:47.558069] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:54.555 [2024-07-24 18:01:47.558075] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:54.555 [2024-07-24 18:01:47.558079] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:54.555 [2024-07-24 18:01:47.558114] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:54.555 [2024-07-24 18:01:47.558199] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:54.555 [2024-07-24 18:01:47.558201] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:55.487 18:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:55.487 18:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:06:55.487 18:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:55.487 18:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:55.487 18:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:55.487 18:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:55.487 18:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:06:55.487 18:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:55.487 [2024-07-24 18:01:48.422657] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:55.487 18:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:55.745 18:01:48 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:55.745 [2024-07-24 18:01:48.808644] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:56.003 18:01:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:56.003 18:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:06:56.260 Malloc0 00:06:56.261 18:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:56.261 Delay0 00:06:56.518 18:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:56.518 18:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:06:56.777 NULL1 00:06:56.777 18:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:06:57.035 18:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3251238 00:06:57.035 18:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:06:57.035 18:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3251238 00:06:57.035 18:01:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:57.035 EAL: No free 2048 kB hugepages reported on node 1 00:06:57.035 18:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:57.293 18:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:06:57.293 18:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:06:57.550 true 00:06:57.550 18:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3251238 00:06:57.550 18:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:06:57.808 18:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:57.808 18:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:06:57.808 18:01:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:06:58.065 true 00:06:58.065 18:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3251238 00:06:58.065 18:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:58.323 18:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:58.581 18:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:06:58.581 18:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:06:58.581 true 00:06:58.839 18:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3251238 00:06:58.839 18:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:58.839 18:01:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:59.096 18:01:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:06:59.096 18:01:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:06:59.354 true 00:06:59.354 18:01:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3251238 00:06:59.354 18:01:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:59.354 18:01:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:59.612 18:01:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:06:59.612 18:01:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:06:59.869 true 00:06:59.869 18:01:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3251238 00:06:59.869 18:01:52 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:00.127 18:01:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:00.383 18:01:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:07:00.383 18:01:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:07:00.383 true 00:07:00.383 18:01:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3251238 00:07:00.383 18:01:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:00.641 18:01:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:00.899 18:01:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:07:00.899 18:01:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:07:00.899 true 00:07:00.899 18:01:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3251238 00:07:00.899 18:01:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:01.157 18:01:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:01.414 18:01:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:07:01.414 18:01:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:07:01.672 true 00:07:01.672 18:01:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3251238 00:07:01.672 18:01:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:01.930 18:01:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:01.930 18:01:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:07:01.930 18:01:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize 
NULL1 1009 00:07:02.187 true 00:07:02.187 18:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3251238 00:07:02.188 18:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:02.445 18:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:02.702 18:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:07:02.702 18:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:07:02.702 true 00:07:02.702 18:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3251238 00:07:02.702 18:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:02.960 18:01:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:03.217 18:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:07:03.217 18:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:07:03.217 true 00:07:03.475 18:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3251238 00:07:03.475 18:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:03.475 18:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:03.732 18:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:07:03.732 18:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:07:03.990 true 00:07:03.990 18:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3251238 00:07:03.990 18:01:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:04.247 18:01:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:04.247 18:01:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:07:04.248 
18:01:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:07:04.505 true 00:07:04.505 18:01:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3251238 00:07:04.505 18:01:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:04.763 18:01:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:05.020 18:01:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:07:05.020 18:01:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:07:05.020 true 00:07:05.020 18:01:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3251238 00:07:05.020 18:01:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:05.278 18:01:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:05.535 18:01:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:07:05.535 18:01:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:07:05.535 true 00:07:05.793 18:01:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3251238 00:07:05.793 18:01:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:05.793 18:01:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:06.051 18:01:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:07:06.051 18:01:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:07:06.309 true 00:07:06.309 18:01:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3251238 00:07:06.309 18:01:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:06.567 18:01:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:06.567 18:01:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:07:06.567 18:01:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:07:06.824 true 00:07:06.824 18:01:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3251238 00:07:06.824 18:01:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:07.083 18:01:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:07.383 18:02:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:07:07.383 18:02:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:07:07.383 true 00:07:07.383 18:02:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3251238 00:07:07.383 18:02:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:07.671 18:02:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:07.929 18:02:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:07:07.929 18:02:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:07:07.929 true 00:07:07.929 18:02:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3251238 00:07:07.929 18:02:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:08.186 18:02:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:08.444 18:02:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:07:08.444 18:02:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:07:08.444 true 00:07:08.444 18:02:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3251238 00:07:08.444 18:02:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
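Note: every cycle in the records above is one iteration of the hot-plug stress loop in test/nvmf/target/ns_hotplug_stress.sh; the @44-@50 suffixes are bash xtrace line markers from that script. A minimal sketch of the loop, reconstructed only from these markers (the perf_pid variable name and the while-loop form are assumptions, not the verbatim script; 3251238 is the I/O generator's PID in this run):

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    null_size=1000
    while kill -0 "$perf_pid"; do                                           # @44: I/O generator still alive?
        "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1     # @45: hot-remove NSID 1
        "$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # @46: hot-add it back (Delay0 bdev)
        null_size=$((null_size + 1))                                        # @49: next size (1006, 1007, ... in this log)
        "$rpc_py" bdev_null_resize NULL1 $null_size                         # @50: resize NULL1 while I/O runs
    done
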
00:07:08.701 18:02:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:08.962 18:02:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:07:08.962 18:02:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:07:09.290 true 00:07:09.290 18:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3251238 00:07:09.290 18:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:09.290 18:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:09.547 18:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:07:09.547 18:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:07:09.804 true 00:07:09.804 18:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3251238 00:07:09.804 18:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:09.804 18:02:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:10.062 18:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:07:10.062 18:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:07:10.321 true 00:07:10.321 18:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3251238 00:07:10.321 18:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:10.577 18:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:10.834 18:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:07:10.834 18:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:07:10.834 true 00:07:10.834 18:02:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3251238 00:07:10.834 18:02:03 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:11.091 18:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:11.349 18:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:07:11.349 18:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:07:11.606 true 00:07:11.606 18:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3251238 00:07:11.606 18:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:11.606 18:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:11.864 18:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:07:11.864 18:02:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:07:12.121 true 00:07:12.121 18:02:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3251238 00:07:12.121 18:02:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:12.379 18:02:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:12.636 18:02:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:07:12.636 18:02:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:07:12.636 true 00:07:12.636 18:02:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3251238 00:07:12.636 18:02:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:12.894 18:02:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:13.152 18:02:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:07:13.152 18:02:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize 
NULL1 1028 00:07:13.409 true 00:07:13.409 18:02:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3251238 00:07:13.409 18:02:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:13.409 18:02:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:13.667 18:02:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:07:13.667 18:02:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:07:13.924 true 00:07:13.924 18:02:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3251238 00:07:13.924 18:02:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:14.181 18:02:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:14.181 18:02:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:07:14.181 18:02:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:07:14.438 true 00:07:14.438 18:02:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3251238 00:07:14.438 18:02:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:14.696 18:02:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:14.954 18:02:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:07:14.954 18:02:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:07:14.954 true 00:07:15.211 18:02:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3251238 00:07:15.211 18:02:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:15.211 18:02:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:15.469 18:02:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:07:15.469 
18:02:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:07:15.727 true 00:07:15.727 18:02:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3251238 00:07:15.727 18:02:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:15.984 18:02:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:16.242 18:02:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:07:16.242 18:02:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:07:16.242 true 00:07:16.242 18:02:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3251238 00:07:16.242 18:02:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:16.500 18:02:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:16.758 18:02:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:07:16.758 18:02:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:07:16.758 true 00:07:17.015 18:02:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3251238 00:07:17.015 18:02:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:17.015 18:02:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:17.273 18:02:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:07:17.273 18:02:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:07:17.531 true 00:07:17.531 18:02:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3251238 00:07:17.531 18:02:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:17.788 18:02:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:17.788 18:02:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:07:17.788 18:02:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:07:18.045 true 00:07:18.045 18:02:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3251238 00:07:18.045 18:02:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:18.303 18:02:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:18.559 18:02:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:07:18.559 18:02:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:07:18.559 true 00:07:18.559 18:02:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3251238 00:07:18.559 18:02:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:18.817 18:02:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:19.075 18:02:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:07:19.075 18:02:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:07:19.332 true 00:07:19.332 18:02:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3251238 00:07:19.332 18:02:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:19.590 18:02:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:19.590 18:02:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:07:19.590 18:02:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:07:19.848 true 00:07:19.848 18:02:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3251238 00:07:19.848 18:02:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
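Note: kill -0 at @44 sends no signal; it only reports, via its exit status, whether the target PID still exists, so the loop keeps hot-plugging for exactly as long as the I/O generator runs. A hypothetical standalone probe of this run's PID:

    if kill -0 3251238 2>/dev/null; then
        echo "I/O generator still running; keep hot-plugging"
    else
        echo "I/O generator exited"   # cf. the 'kill: (3251238) - No such process' record further down
    fi
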
00:07:20.106 18:02:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:20.373 18:02:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:07:20.373 18:02:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:07:20.373 true 00:07:20.373 18:02:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3251238 00:07:20.373 18:02:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:20.634 18:02:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:20.892 18:02:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:07:20.892 18:02:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:07:21.149 true 00:07:21.149 18:02:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3251238 00:07:21.149 18:02:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:21.149 18:02:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:21.407 18:02:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:07:21.407 18:02:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:07:21.664 true 00:07:21.664 18:02:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3251238 00:07:21.664 18:02:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:21.923 18:02:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:22.180 18:02:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:07:22.180 18:02:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:07:22.180 true 00:07:22.180 18:02:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3251238 00:07:22.180 18:02:15 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:22.437 18:02:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:22.695 18:02:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:07:22.695 18:02:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:07:22.952 true 00:07:22.952 18:02:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3251238 00:07:22.952 18:02:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:22.952 18:02:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:23.251 18:02:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:07:23.251 18:02:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:07:23.534 true 00:07:23.534 18:02:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3251238 00:07:23.534 18:02:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:23.534 18:02:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:23.791 18:02:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:07:23.791 18:02:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:07:24.049 true 00:07:24.049 18:02:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3251238 00:07:24.049 18:02:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:24.306 18:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:24.564 18:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:07:24.564 18:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize 
NULL1 1047 00:07:24.564 true 00:07:24.564 18:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3251238 00:07:24.564 18:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:24.822 18:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:25.080 18:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:07:25.080 18:02:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:07:25.080 true 00:07:25.338 18:02:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3251238 00:07:25.338 18:02:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:25.338 18:02:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:25.595 18:02:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:07:25.595 18:02:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:07:25.852 true 00:07:25.852 18:02:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3251238 00:07:25.852 18:02:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:26.110 18:02:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:26.367 18:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:07:26.367 18:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:07:26.367 true 00:07:26.367 18:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3251238 00:07:26.367 18:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:26.625 18:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:26.883 18:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:07:26.883 
18:02:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051
00:07:27.141 true
00:07:27.141 18:02:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3251238
00:07:27.141 18:02:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:27.141 Initializing NVMe Controllers
00:07:27.141 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:07:27.141 Controller SPDK bdev Controller (SPDK00000000000001 ): Skipping inactive NS 1
00:07:27.141 Controller IO queue size 128, less than required.
00:07:27.141 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:27.141 WARNING: Some requested NVMe devices were skipped
00:07:27.141 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:07:27.141 Initialization complete. Launching workers.
00:07:27.141 ========================================================
00:07:27.141 Latency(us)
00:07:27.141 Device Information : IOPS MiB/s Average min max
00:07:27.141 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 28154.01 13.75 4546.39 2304.35 9170.15
00:07:27.141 ========================================================
00:07:27.141 Total : 28154.01 13.75 4546.39 2304.35 9170.15
00:07:27.141
00:07:27.141 18:02:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:27.399 18:02:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052
00:07:27.399 18:02:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052
00:07:27.656 true
00:07:27.656 18:02:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3251238
00:07:27.656 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3251238) - No such process
00:07:27.656 18:02:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3251238
00:07:27.656 18:02:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:27.656 18:02:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:27.914 18:02:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:07:27.914 18:02:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:07:27.914 18:02:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:07:27.914 18:02:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:27.914
18:02:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:07:28.172 null0 00:07:28.172 18:02:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:28.172 18:02:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:28.172 18:02:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:07:28.172 null1 00:07:28.430 18:02:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:28.430 18:02:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:28.430 18:02:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:07:28.430 null2 00:07:28.430 18:02:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:28.430 18:02:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:28.430 18:02:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:07:28.688 null3 00:07:28.688 18:02:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:28.688 18:02:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:28.688 18:02:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:07:28.946 null4 00:07:28.946 18:02:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:28.946 18:02:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:28.946 18:02:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:07:28.946 null5 00:07:28.946 18:02:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:28.946 18:02:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:28.946 18:02:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:07:29.204 null6 00:07:29.204 18:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:29.204 18:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:29.204 18:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:07:29.463 null7 00:07:29.463 
18:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:29.463 18:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:29.463 18:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:07:29.463 18:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:29.463 18:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:29.463 18:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:29.463 18:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:07:29.463 18:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:29.463 18:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:07:29.463 18:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:29.463 18:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.463 18:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:29.463 18:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:29.463 18:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:29.463 18:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:07:29.463 18:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:29.463 18:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:07:29.463 18:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:29.463 18:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.463 18:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:29.463 18:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
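Note: from the @58-@64 markers onward the test switches to a multi-threaded phase: eight null bdevs (null0-null7, created at @60 above) each get a background add_remove worker. A sketch of the two launch loops as the markers suggest (the exact loop syntax is an assumption; $rpc_py abbreviates the rpc.py path shown in the records):

    nthreads=8                                           # @58
    pids=()                                              # @58
    for ((i = 0; i < nthreads; i++)); do                 # @59
        "$rpc_py" bdev_null_create "null$i" 100 4096     # @60: 100 MB null bdev, 4096-byte blocks
    done
    for ((i = 0; i < nthreads; i++)); do                 # @62
        add_remove $((i + 1)) "null$i" &                 # @63: one worker per namespace ID
        pids+=($!)                                       # @64: remember the worker's PID
    done
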
00:07:29.463 18:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:07:29.463 18:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:29.463 18:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:29.463 18:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:07:29.463 18:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:29.463 18:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.463 18:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:29.463 18:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:29.463 18:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:29.463 18:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:07:29.463 18:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:29.463 18:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:07:29.463 18:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:29.463 18:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.464 18:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:29.464 18:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:29.464 18:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:29.464 18:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:07:29.464 18:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:29.464 18:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:07:29.464 18:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:29.464 18:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.464 18:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:29.464 18:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:29.464 18:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:29.464 18:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:07:29.464 18:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:29.464 18:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:07:29.464 18:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:29.464 18:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.464 18:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:29.464 18:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:29.464 18:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:29.464 18:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:07:29.464 18:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:29.464 18:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:07:29.464 18:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:29.464 18:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.464 18:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:29.464 18:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
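Note: the interleaved @14-@18 records above are the eight workers running concurrently; bash xtrace serializes their output, which is why the markers repeat out of order. The add_remove function they trace, reconstructed from the markers alone (function and loop syntax are assumptions, not the verbatim script):

    add_remove() {
        local nsid=$1 bdev=$2                                                             # @14
        for ((i = 0; i < 10; i++)); do                                                    # @16: ten add/remove cycles
            "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev" # @17
            "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"         # @18
        done
    }
    wait "${pids[@]}"   # @66: joins all eight workers (3256734 3256735 ... 3256747 in this run)
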
00:07:29.464 18:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:07:29.464 18:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:07:29.464 18:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7
00:07:29.464 18:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3256734 3256735 3256737 3256739 3256741 3256743 3256745 3256747
00:07:29.464 18:02:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7
[00:07:29.464 through 00:07:33.086 condensed: the eight backgrounded add_remove workers (nsid 1-8 on bdevs null0-null7) each ran ten iterations of "rpc.py nvmf_subsystem_add_ns -n <nsid> nqn.2016-06.io.spdk:cnode1 <bdev>" followed by "rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 <nsid>"; the interleaved @16/@17/@18 xtrace lines for those calls and the final (( ++i ))/(( i < 10 )) loop-exit checks are omitted here]
00:07:33.086 18:02:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:07:33.086 18:02:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini
00:07:33.086 18:02:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup
00:07:33.086 18:02:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync
00:07:33.086 18:02:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:07:33.344 18:02:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e
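The @14-@18 and @62-@66 trace lines above come from ns_hotplug_stress.sh's add_remove helper and its worker-dispatch loop. The following is a sketch reconstructed from the xtrace output, not the verbatim script; rpc_py stands in for the traced /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    # add_remove <nsid> <bdev>: hot-add and hot-remove one namespace ten times (@14-@18)
    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            # attach the null bdev to the subsystem as namespace <nsid> ...
            "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"
            # ... then detach it again, racing the seven other workers
            "$rpc_py" nvmf_subsystem_remove_ns "$nqn" "$nsid"
        done
    }

    # one background worker per namespace, then wait for all of them (@62-@66)
    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &
        pids+=($!)
    done
    wait "${pids[@]}"

Running the eight workers concurrently is the point of the test: namespace IDs are added and removed against the same subsystem in arbitrary interleavings, which is what exercises the hotplug paths under stress.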
00:07:33.344 18:02:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20}
00:07:33.344 18:02:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:07:33.344 rmmod nvme_tcp
00:07:33.344 rmmod nvme_fabrics
00:07:33.344 rmmod nvme_keyring
00:07:33.344 18:02:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:07:33.344 18:02:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e
00:07:33.344 18:02:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0
00:07:33.344 18:02:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 3250783 ']'
00:07:33.344 18:02:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 3250783
00:07:33.344 18:02:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 3250783 ']'
00:07:33.344 18:02:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 3250783
00:07:33.344 18:02:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname
00:07:33.344 18:02:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:33.344 18:02:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3250783
00:07:33.344 18:02:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:07:33.344 18:02:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:07:33.344 18:02:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3250783'
00:07:33.344 killing process with pid 3250783
00:07:33.344 18:02:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 3250783
00:07:33.344 18:02:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 3250783
00:07:33.621 18:02:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:07:33.621 18:02:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:07:33.621 18:02:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:07:33.621 18:02:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:07:33.621 18:02:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns
00:07:33.621 18:02:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:07:33.621 18:02:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:07:33.621 18:02:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:07:35.524 18:02:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:07:35.524
00:07:35.524 real 0m46.459s
00:07:35.524 user 3m17.330s
00:07:35.524 sys 0m16.645s
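The unload-and-kill sequence above (common.sh@117-@125 and autotest_common.sh@950-@974) is the generic teardown: sync, retry removal of the nvme-tcp and nvme-fabrics modules while connections drain, then kill and reap the nvmf_tgt reactor process. A hedged sketch of that shape; the retry and guard details beyond what the trace shows are assumptions:

    # sketch of the teardown pattern seen in the trace; not the verbatim helpers
    nvmfcleanup() {
        sync
        set +e                                # module removal may fail while refs drain
        for i in {1..20}; do
            modprobe -v -r nvme-tcp && break  # assumed retry-until-unloaded condition
        done
        modprobe -v -r nvme-fabrics
        set -e
        return 0
    }

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1              # @950: require a pid argument
        kill -0 "$pid"                         # @954: fail fast if it already exited
        echo "killing process with pid $pid"   # @968
        kill "$pid"                            # @969
        wait "$pid"                            # @974: reap it so ports and hugepages free up
    }

The wait after kill matters: the next test reuses the same listen port and hugepage pool, so the target must be fully reaped before delete_subsystem.sh starts.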
00:07:35.524 18:02:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:35.524 18:02:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:07:35.524 ************************************
00:07:35.524 END TEST nvmf_ns_hotplug_stress
00:07:35.524 ************************************
00:07:35.524 18:02:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:07:35.524 18:02:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:07:35.524 18:02:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:35.524 18:02:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:07:35.524 ************************************
00:07:35.524 START TEST nvmf_delete_subsystem
00:07:35.524 ************************************
00:07:35.524 18:02:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:07:35.782 * Looking for test storage...
00:07:35.783 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:07:35.783 18:02:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:07:35.783 18:02:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s
00:07:35.783 18:02:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:07:35.783 18:02:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:07:35.783 18:02:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:07:35.783 18:02:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:07:35.783 18:02:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:07:35.783 18:02:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:07:35.783 18:02:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:07:35.783 18:02:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:07:35.783 18:02:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:07:35.783 18:02:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:07:35.783 18:02:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562
00:07:35.783 18:02:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562
00:07:35.783 18:02:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:07:35.783 18:02:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:07:35.783 18:02:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy
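run_test at nvmf_target_core.sh@23/autotest_common.sh@1125 is the banner-and-timing wrapper that produced the END/START blocks and the real/user/sys summary above. A minimal sketch of the behavior visible in the log (illustrative only, not the exact SPDK implementation, which also validates its arguments, as the '[' 3 -le 1 ']' check suggests, and records per-test timing):

    # run_test <name> <command...>: print banners, time the test, keep its exit code
    run_test() {
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"
        local rc=$?
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
        return $rc
    }

    run_test nvmf_delete_subsystem \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp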
00:07:35.783 18:02:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:07:35.783 18:02:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:07:35.783 18:02:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:07:35.783 18:02:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:07:35.783 18:02:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:07:35.783 18:02:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:[...repeated golangci/protoc/go entries condensed...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:35.783 18:02:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[...same duplicated tail...]
00:07:35.783 18:02:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[...same duplicated tail...]
00:07:35.783 18:02:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH
00:07:35.783 18:02:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[...duplicated entries condensed...]:/var/lib/snapd/snap/bin
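Each re-source of paths/export.sh@2-@4 unconditionally prepends the golangci, go, and protoc bin directories, which is why PATH above carries the same triplet many times over by this point in the run. A guard like the following would keep the prepend idempotent (an illustrative suggestion, not what the shipped export.sh does):

    # prepend_path <dir>: put <dir> at the front of PATH only if it is absent
    prepend_path() {
        case ":$PATH:" in
            *":$1:"*) ;;            # already present, leave PATH alone
            *) PATH="$1:$PATH" ;;
        esac
    }

    prepend_path /opt/golangci/1.54.2/bin
    prepend_path /opt/go/1.21.1/bin
    prepend_path /opt/protoc/21.7/bin
    export PATH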
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.783 18:02:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:07:35.783 18:02:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:35.783 18:02:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:35.783 18:02:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:35.783 18:02:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:35.783 18:02:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:35.783 18:02:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:35.783 18:02:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:35.783 18:02:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:35.783 18:02:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:07:35.783 18:02:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:35.783 18:02:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:35.783 18:02:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:35.783 18:02:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:35.783 18:02:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:35.783 18:02:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:35.783 18:02:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:35.783 18:02:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:35.783 18:02:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:35.783 18:02:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:35.783 18:02:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:07:35.783 18:02:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:41.048 18:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:41.048 18:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:07:41.048 18:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:07:41.048 18:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:41.048 18:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:41.048 18:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:41.048 18:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:41.048 18:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:07:41.048 18:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:41.048 18:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:07:41.048 18:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:07:41.048 18:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:07:41.048 18:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:07:41.048 18:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:07:41.048 18:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:07:41.048 18:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:41.048 18:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:41.048 18:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:41.048 18:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:41.048 18:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:41.048 18:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:41.048 18:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:41.048 18:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:41.048 18:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:41.048 18:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:41.048 18:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:41.048 18:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:41.048 18:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:41.048 18:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:41.048 18:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:41.048 18:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:41.048 18:02:33 
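The arrays being filled above map PCI vendor:device IDs onto NIC families: 0x8086:0x1592/0x159b are Intel E810 parts, 0x8086:0x37d2 is X722, and the 0x15b3 entries are Mellanox ConnectX devices. The harness reads them from a prebuilt pci_bus_cache; a rough standalone sketch of the same classification using plain lspci (this loop is an illustration, not the common.sh implementation):

# Sketch only: bucket NICs by vendor:device ID the way the arrays above do.
# common.sh consumes pci_bus_cache instead of calling lspci directly.
e810=(); x722=(); mlx=()
while read -r addr id; do
  case "$id" in
    8086:1592|8086:159b) e810+=("$addr") ;;  # Intel E810
    8086:37d2)           x722+=("$addr") ;;  # Intel X722
    15b3:*)              mlx+=("$addr")  ;;  # Mellanox ConnectX family
  esac
done < <(lspci -Dn | awk '{print $1, $3}')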
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:41.048 18:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:41.048 18:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:41.048 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:41.048 18:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:41.048 18:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:41.048 18:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:41.048 18:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:41.048 18:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:41.048 18:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:41.048 18:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:41.048 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:41.048 18:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:41.048 18:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:41.048 18:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:41.048 18:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:41.048 18:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:41.048 18:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:41.048 18:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:41.048 18:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:41.048 18:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:41.048 18:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:41.048 18:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:41.048 18:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:41.048 18:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:41.048 18:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:41.048 18:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:41.048 18:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:41.048 Found net devices under 0000:86:00.0: cvl_0_0 00:07:41.048 18:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:41.048 18:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:41.048 18:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:41.048 18:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:41.048 18:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:41.048 18:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:41.048 18:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:41.048 18:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:41.048 18:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:41.048 Found net devices under 0000:86:00.1: cvl_0_1 00:07:41.048 18:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:41.048 18:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:41.048 18:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:07:41.048 18:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:41.048 18:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:41.048 18:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:41.048 18:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:41.048 18:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:41.048 18:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:41.048 18:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:41.048 18:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:41.048 18:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:41.048 18:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:41.048 18:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:41.048 18:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:41.048 18:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:41.049 18:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:41.049 18:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:41.049 18:02:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:41.049 18:02:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:41.049 18:02:34 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:41.049 18:02:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:41.049 18:02:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:41.049 18:02:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:41.307 18:02:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:41.307 18:02:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:41.307 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:41.307 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.295 ms 00:07:41.307 00:07:41.307 --- 10.0.0.2 ping statistics --- 00:07:41.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:41.307 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:07:41.307 18:02:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:41.307 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:41.307 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:07:41.307 00:07:41.307 --- 10.0.0.1 ping statistics --- 00:07:41.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:41.307 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:07:41.307 18:02:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:41.307 18:02:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:07:41.307 18:02:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:41.307 18:02:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:41.307 18:02:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:41.307 18:02:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:41.307 18:02:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:41.307 18:02:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:41.307 18:02:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:41.307 18:02:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:07:41.307 18:02:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:41.307 18:02:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:41.307 18:02:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:41.307 18:02:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:07:41.307 18:02:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=3261107 00:07:41.307 
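Condensed, the network setup that just ran: one port of the dual-port E810 (cvl_0_0) is moved into a private namespace and becomes the target side at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, with an iptables rule opening TCP/4420. The commands below are lifted straight from the log:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port into the netns
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side (root ns)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                    # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator

nvmf_tgt is then launched inside that namespace with core mask 0x3, which is why two reactors come up and the perf associations later show two lcores.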
18:02:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 3261107 00:07:41.307 18:02:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 3261107 ']' 00:07:41.307 18:02:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:41.307 18:02:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:41.307 18:02:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:41.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:41.307 18:02:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:41.307 18:02:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:41.307 [2024-07-24 18:02:34.222537] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 00:07:41.307 [2024-07-24 18:02:34.222576] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:41.307 EAL: No free 2048 kB hugepages reported on node 1 00:07:41.307 [2024-07-24 18:02:34.278749] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:41.307 [2024-07-24 18:02:34.357940] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:41.307 [2024-07-24 18:02:34.357974] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:41.307 [2024-07-24 18:02:34.357981] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:41.307 [2024-07-24 18:02:34.357988] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:41.307 [2024-07-24 18:02:34.357993] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
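waitforlisten (from autotest_common.sh) blocks until the target's RPC socket answers, retrying up to the max_retries=100 logged above. Its actual body is not shown in this log; a minimal stand-in with the same behavior, using the stock rpc.py client and the default /var/tmp/spdk.sock path:

# Rough stand-in for waitforlisten: poll the RPC socket until nvmf_tgt
# responds, bailing out early if the process dies first.
pid=$1
for _ in $(seq 1 100); do
  kill -0 "$pid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
  scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && exit 0
  sleep 0.5
done
echo "timed out waiting for /var/tmp/spdk.sock" >&2; exit 1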
00:07:41.307 [2024-07-24 18:02:34.358033] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:41.307 [2024-07-24 18:02:34.358035] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.240 18:02:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:42.240 18:02:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:07:42.240 18:02:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:42.240 18:02:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:42.240 18:02:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:42.240 18:02:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:42.240 18:02:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:42.240 18:02:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.240 18:02:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:42.240 [2024-07-24 18:02:35.074318] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:42.240 18:02:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.240 18:02:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:42.240 18:02:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.240 18:02:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:42.240 18:02:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.240 18:02:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:42.240 18:02:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.240 18:02:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:42.240 [2024-07-24 18:02:35.094484] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:42.240 18:02:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.240 18:02:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:07:42.240 18:02:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.240 18:02:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:42.240 NULL1 00:07:42.240 18:02:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.240 18:02:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 
-w 1000000 -n 1000000 00:07:42.240 18:02:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.240 18:02:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:42.240 Delay0 00:07:42.240 18:02:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.240 18:02:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:42.240 18:02:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.240 18:02:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:42.240 18:02:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.240 18:02:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3261283 00:07:42.240 18:02:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:07:42.240 18:02:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:42.240 EAL: No free 2048 kB hugepages reported on node 1 00:07:42.240 [2024-07-24 18:02:35.175136] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
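Spelled out as plain rpc.py calls, this is the provisioning sequence the rpc_cmd lines above performed ($RPC is shorthand introduced here; every argument comes from the log). The delay bdev wraps the null bdev with roughly one second of latency per operation, so the queue-depth-128 perf run is guaranteed to have live I/O in flight when the subsystem is deleted:

RPC="scripts/rpc.py -s /var/tmp/spdk.sock"
$RPC nvmf_create_transport -t tcp -o -u 8192           # TCP transport, 8 KiB in-capsule data
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
     -a -s SPDK00000000000001 -m 10                    # allow any host, up to 10 namespaces
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_null_create NULL1 1000 512                   # 1000 MiB backing bdev, 512 B blocks
$RPC bdev_delay_create -b NULL1 -d Delay0 \
     -r 1000000 -t 1000000 -w 1000000 -n 1000000       # ~1 s avg/p99 read+write latency (us)
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

The wall of "completed with error (sct=0, sc=8)" lines that follows is those queued commands being aborted as their queues are torn down by nvmf_delete_subsystem, which is exactly the path this test exercises.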
00:07:44.166 18:02:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:44.166 18:02:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:44.166 18:02:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:44.424 Read completed with error (sct=0, sc=8) 00:07:44.424 Write completed with error (sct=0, sc=8) 00:07:44.424 Read completed with error (sct=0, sc=8) 00:07:44.424 starting I/O failed: -6 00:07:44.424 Read completed with error (sct=0, sc=8) 00:07:44.424 Read completed with error (sct=0, sc=8) 00:07:44.424 Read completed with error (sct=0, sc=8) 00:07:44.424 Read completed with error (sct=0, sc=8) 00:07:44.424 starting I/O failed: -6 00:07:44.424 Write completed with error (sct=0, sc=8) 00:07:44.424 Read completed with error (sct=0, sc=8) 00:07:44.424 Read completed with error (sct=0, sc=8) 00:07:44.424 Write completed with error (sct=0, sc=8) 00:07:44.424 starting I/O failed: -6 00:07:44.424 Read completed with error (sct=0, sc=8) 00:07:44.424 Read completed with error (sct=0, sc=8) 00:07:44.424 Read completed with error (sct=0, sc=8) 00:07:44.424 Write completed with error (sct=0, sc=8) 00:07:44.424 starting I/O failed: -6 00:07:44.424 Read completed with error (sct=0, sc=8) 00:07:44.424 Read completed with error (sct=0, sc=8) 00:07:44.424 Read completed with error (sct=0, sc=8) 00:07:44.424 Read completed with error (sct=0, sc=8) 00:07:44.424 starting I/O failed: -6 00:07:44.424 Read completed with error (sct=0, sc=8) 00:07:44.424 Read completed with error (sct=0, sc=8) 00:07:44.424 Write completed with error (sct=0, sc=8) 00:07:44.424 Read completed with error (sct=0, sc=8) 00:07:44.424 starting I/O failed: -6 00:07:44.424 Read completed with error (sct=0, sc=8) 00:07:44.424 Write completed with error (sct=0, sc=8) 00:07:44.424 Read completed with error (sct=0, sc=8) 00:07:44.424 Read completed with error (sct=0, sc=8) 00:07:44.424 starting I/O failed: -6 00:07:44.424 Read completed with error (sct=0, sc=8) 00:07:44.424 Read completed with error (sct=0, sc=8) 00:07:44.424 Write completed with error (sct=0, sc=8) 00:07:44.424 Write completed with error (sct=0, sc=8) 00:07:44.424 starting I/O failed: -6 00:07:44.424 Write completed with error (sct=0, sc=8) 00:07:44.424 Write completed with error (sct=0, sc=8) 00:07:44.424 Read completed with error (sct=0, sc=8) 00:07:44.424 Read completed with error (sct=0, sc=8) 00:07:44.424 starting I/O failed: -6 00:07:44.424 Read completed with error (sct=0, sc=8) 00:07:44.424 Read completed with error (sct=0, sc=8) 00:07:44.425 Write completed with error (sct=0, sc=8) 00:07:44.425 Write completed with error (sct=0, sc=8) 00:07:44.425 starting I/O failed: -6 00:07:44.425 Read completed with error (sct=0, sc=8) 00:07:44.425 starting I/O failed: -6 00:07:44.425 Read completed with error (sct=0, sc=8) 00:07:44.425 Read completed with error (sct=0, sc=8) 00:07:44.425 Write completed with error (sct=0, sc=8) 00:07:44.425 starting I/O failed: -6 00:07:44.425 Read completed with error (sct=0, sc=8) 00:07:44.425 starting I/O failed: -6 00:07:44.425 Read completed with error (sct=0, sc=8) 00:07:44.425 Write completed with error (sct=0, sc=8) 00:07:44.425 Write completed with error (sct=0, sc=8) 00:07:44.425 starting I/O failed: -6 00:07:44.425 Write completed with error (sct=0, sc=8) 00:07:44.425 Read completed with error (sct=0, sc=8) 00:07:44.425 starting 
I/O failed: -6 00:07:44.425 starting I/O failed: -6 00:07:44.425 Read completed with error (sct=0, sc=8) 00:07:44.425 Read completed with error (sct=0, sc=8) 00:07:44.425 Read completed with error (sct=0, sc=8) 00:07:44.425 Read completed with error (sct=0, sc=8) 00:07:44.425 Read completed with error (sct=0, sc=8) 00:07:44.425 Read completed with error (sct=0, sc=8) 00:07:44.425 starting I/O failed: -6 00:07:44.425 Read completed with error (sct=0, sc=8) 00:07:44.425 Read completed with error (sct=0, sc=8) 00:07:44.425 starting I/O failed: -6 00:07:44.425 starting I/O failed: -6 00:07:44.425 Write completed with error (sct=0, sc=8) 00:07:44.425 Read completed with error (sct=0, sc=8) 00:07:44.425 Read completed with error (sct=0, sc=8) 00:07:44.425 Read completed with error (sct=0, sc=8) 00:07:44.425 Read completed with error (sct=0, sc=8) 00:07:44.425 Write completed with error (sct=0, sc=8) 00:07:44.425 starting I/O failed: -6 00:07:44.425 Read completed with error (sct=0, sc=8) 00:07:44.425 starting I/O failed: -6 00:07:44.425 Read completed with error (sct=0, sc=8) 00:07:44.425 Write completed with error (sct=0, sc=8) 00:07:44.425 starting I/O failed: -6 00:07:44.425 Write completed with error (sct=0, sc=8) 00:07:44.425 Read completed with error (sct=0, sc=8) 00:07:44.425 Read completed with error (sct=0, sc=8) 00:07:44.425 Read completed with error (sct=0, sc=8) 00:07:44.425 starting I/O failed: -6 00:07:44.425 Read completed with error (sct=0, sc=8) 00:07:44.425 Read completed with error (sct=0, sc=8) 00:07:44.425 starting I/O failed: -6 00:07:44.425 Read completed with error (sct=0, sc=8) 00:07:44.425 Write completed with error (sct=0, sc=8) 00:07:44.425 starting I/O failed: -6 00:07:44.425 Read completed with error (sct=0, sc=8) 00:07:44.425 Read completed with error (sct=0, sc=8) 00:07:44.425 Write completed with error (sct=0, sc=8) 00:07:44.425 Read completed with error (sct=0, sc=8) 00:07:44.425 Read completed with error (sct=0, sc=8) 00:07:44.425 starting I/O failed: -6 00:07:44.425 Write completed with error (sct=0, sc=8) 00:07:44.425 Read completed with error (sct=0, sc=8) 00:07:44.425 starting I/O failed: -6 00:07:44.425 starting I/O failed: -6 00:07:44.425 Read completed with error (sct=0, sc=8) 00:07:44.425 Write completed with error (sct=0, sc=8) 00:07:44.425 Read completed with error (sct=0, sc=8) 00:07:44.425 Read completed with error (sct=0, sc=8) 00:07:44.425 Write completed with error (sct=0, sc=8) 00:07:44.425 starting I/O failed: -6 00:07:44.425 Write completed with error (sct=0, sc=8) 00:07:44.425 Read completed with error (sct=0, sc=8) 00:07:44.425 starting I/O failed: -6 00:07:44.425 Read completed with error (sct=0, sc=8) 00:07:44.425 Read completed with error (sct=0, sc=8) 00:07:44.425 starting I/O failed: -6 00:07:44.425 Read completed with error (sct=0, sc=8) 00:07:44.425 Write completed with error (sct=0, sc=8) 00:07:44.425 Write completed with error (sct=0, sc=8) 00:07:44.425 Read completed with error (sct=0, sc=8) 00:07:44.425 starting I/O failed: -6 00:07:44.425 Read completed with error (sct=0, sc=8) 00:07:44.425 Read completed with error (sct=0, sc=8) 00:07:44.425 starting I/O failed: -6 00:07:44.425 Read completed with error (sct=0, sc=8) 00:07:44.425 Write completed with error (sct=0, sc=8) 00:07:44.425 starting I/O failed: -6 00:07:44.425 Read completed with error (sct=0, sc=8) 00:07:44.425 Read completed with error (sct=0, sc=8) 00:07:44.425 Read completed with error (sct=0, sc=8) 00:07:44.425 Read completed with error (sct=0, sc=8) 00:07:44.425 
starting I/O failed: -6 00:07:44.425 Read completed with error (sct=0, sc=8) 00:07:44.425 Write completed with error (sct=0, sc=8) 00:07:44.425 starting I/O failed: -6 00:07:44.425 Read completed with error (sct=0, sc=8) 00:07:44.425 Write completed with error (sct=0, sc=8) 00:07:44.425 starting I/O failed: -6 00:07:44.425 Write completed with error (sct=0, sc=8) 00:07:44.425 Read completed with error (sct=0, sc=8) 00:07:44.425 Write completed with error (sct=0, sc=8) 00:07:44.425 Read completed with error (sct=0, sc=8) 00:07:44.425 Read completed with error (sct=0, sc=8) 00:07:44.425 starting I/O failed: -6 00:07:44.425 Read completed with error (sct=0, sc=8) 00:07:44.425 Read completed with error (sct=0, sc=8) 00:07:44.425 starting I/O failed: -6 00:07:44.425 starting I/O failed: -6 00:07:44.425 Read completed with error (sct=0, sc=8) 00:07:44.425 Write completed with error (sct=0, sc=8) 00:07:44.425 Read completed with error (sct=0, sc=8) 00:07:44.425 Write completed with error (sct=0, sc=8) 00:07:44.425 Read completed with error (sct=0, sc=8) 00:07:44.425 starting I/O failed: -6 00:07:44.425 Read completed with error (sct=0, sc=8) 00:07:44.425 Read completed with error (sct=0, sc=8) 00:07:44.425 starting I/O failed: -6 00:07:44.425 Write completed with error (sct=0, sc=8) 00:07:44.425 Read completed with error (sct=0, sc=8) 00:07:44.425 starting I/O failed: -6 00:07:44.425 Read completed with error (sct=0, sc=8) 00:07:44.425 Read completed with error (sct=0, sc=8) 00:07:44.425 Read completed with error (sct=0, sc=8) 00:07:44.425 Read completed with error (sct=0, sc=8) 00:07:44.425 starting I/O failed: -6 00:07:44.425 Write completed with error (sct=0, sc=8) 00:07:44.425 Read completed with error (sct=0, sc=8) 00:07:44.425 starting I/O failed: -6 00:07:44.425 Write completed with error (sct=0, sc=8) 00:07:44.425 Read completed with error (sct=0, sc=8) 00:07:44.425 Read completed with error (sct=0, sc=8) 00:07:44.425 Write completed with error (sct=0, sc=8) 00:07:44.425 starting I/O failed: -6 00:07:44.425 Read completed with error (sct=0, sc=8) 00:07:44.425 Read completed with error (sct=0, sc=8) 00:07:44.425 Read completed with error (sct=0, sc=8) 00:07:44.425 Read completed with error (sct=0, sc=8) 00:07:44.425 starting I/O failed: -6 00:07:44.425 Read completed with error (sct=0, sc=8) 00:07:44.425 [2024-07-24 18:02:37.264298] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91f710 is same with the state(5) to be set 00:07:44.425 Read completed with error (sct=0, sc=8) 00:07:44.425 starting I/O failed: -6 00:07:44.425 Write completed with error (sct=0, sc=8) 00:07:44.425 starting I/O failed: -6 00:07:44.425 Write completed with error (sct=0, sc=8) 00:07:44.425 starting I/O failed: -6 00:07:44.425 Read completed with error (sct=0, sc=8) 00:07:44.425 Write completed with error (sct=0, sc=8) 00:07:44.425 starting I/O failed: -6 00:07:44.425 Read completed with error (sct=0, sc=8) 00:07:44.425 starting I/O failed: -6 00:07:44.425 Read completed with error (sct=0, sc=8) 00:07:44.425 starting I/O failed: -6 00:07:44.425 Write completed with error (sct=0, sc=8) 00:07:44.425 Read completed with error (sct=0, sc=8) 00:07:44.425 starting I/O failed: -6 00:07:44.425 Write completed with error (sct=0, sc=8) 00:07:44.425 starting I/O failed: -6 00:07:44.425 Read completed with error (sct=0, sc=8) 00:07:44.425 starting I/O failed: -6 00:07:44.425 Read completed with error (sct=0, sc=8) 00:07:44.425 Write completed with error (sct=0, sc=8) 00:07:44.425 starting I/O 
failed: -6 00:07:44.425 Read completed with error (sct=0, sc=8) 00:07:44.425 starting I/O failed: -6 00:07:44.425 Read completed with error (sct=0, sc=8) 00:07:44.425 starting I/O failed: -6 00:07:44.425 Read completed with error (sct=0, sc=8) 00:07:44.425 Read completed with error (sct=0, sc=8) 00:07:44.425 starting I/O failed: -6 00:07:44.425 Read completed with error (sct=0, sc=8) 00:07:44.425 starting I/O failed: -6 00:07:44.425 Read completed with error (sct=0, sc=8) 00:07:44.425 starting I/O failed: -6 00:07:44.425 Write completed with error (sct=0, sc=8) 00:07:44.425 Read completed with error (sct=0, sc=8) 00:07:44.425 starting I/O failed: -6 00:07:44.425 Read completed with error (sct=0, sc=8) 00:07:44.425 starting I/O failed: -6 00:07:44.425 Write completed with error (sct=0, sc=8) 00:07:44.425 starting I/O failed: -6 00:07:44.425 Read completed with error (sct=0, sc=8) 00:07:44.425 Read completed with error (sct=0, sc=8) 00:07:44.425 starting I/O failed: -6 00:07:44.425 Write completed with error (sct=0, sc=8) 00:07:44.425 starting I/O failed: -6 00:07:44.425 Read completed with error (sct=0, sc=8) 00:07:44.425 starting I/O failed: -6 00:07:44.425 Read completed with error (sct=0, sc=8) 00:07:44.425 Read completed with error (sct=0, sc=8) 00:07:44.425 starting I/O failed: -6 00:07:44.425 Read completed with error (sct=0, sc=8) 00:07:44.425 starting I/O failed: -6 00:07:44.425 Read completed with error (sct=0, sc=8) 00:07:44.425 starting I/O failed: -6 00:07:44.425 Write completed with error (sct=0, sc=8) 00:07:44.425 Read completed with error (sct=0, sc=8) 00:07:44.425 starting I/O failed: -6 00:07:44.425 Read completed with error (sct=0, sc=8) 00:07:44.425 starting I/O failed: -6 00:07:44.425 Read completed with error (sct=0, sc=8) 00:07:44.425 starting I/O failed: -6 00:07:44.425 Read completed with error (sct=0, sc=8) 00:07:44.425 Read completed with error (sct=0, sc=8) 00:07:44.425 starting I/O failed: -6 00:07:44.425 Write completed with error (sct=0, sc=8) 00:07:44.425 starting I/O failed: -6 00:07:44.425 Read completed with error (sct=0, sc=8) 00:07:44.425 starting I/O failed: -6 00:07:44.425 Read completed with error (sct=0, sc=8) 00:07:44.425 Read completed with error (sct=0, sc=8) 00:07:44.425 starting I/O failed: -6 00:07:44.426 Read completed with error (sct=0, sc=8) 00:07:44.426 starting I/O failed: -6 00:07:44.426 Read completed with error (sct=0, sc=8) 00:07:44.426 starting I/O failed: -6 00:07:44.426 Read completed with error (sct=0, sc=8) 00:07:44.426 Write completed with error (sct=0, sc=8) 00:07:44.426 starting I/O failed: -6 00:07:44.426 Read completed with error (sct=0, sc=8) 00:07:44.426 starting I/O failed: -6 00:07:44.426 Write completed with error (sct=0, sc=8) 00:07:44.426 starting I/O failed: -6 00:07:44.426 Read completed with error (sct=0, sc=8) 00:07:44.426 Write completed with error (sct=0, sc=8) 00:07:44.426 starting I/O failed: -6 00:07:44.426 Read completed with error (sct=0, sc=8) 00:07:44.426 starting I/O failed: -6 00:07:44.426 Read completed with error (sct=0, sc=8) 00:07:44.426 starting I/O failed: -6 00:07:44.426 Write completed with error (sct=0, sc=8) 00:07:44.426 Read completed with error (sct=0, sc=8) 00:07:44.426 starting I/O failed: -6 00:07:44.426 Read completed with error (sct=0, sc=8) 00:07:44.426 starting I/O failed: -6 00:07:44.426 Read completed with error (sct=0, sc=8) 00:07:44.426 starting I/O failed: -6 00:07:44.426 Read completed with error (sct=0, sc=8) 00:07:44.426 Write completed with error (sct=0, sc=8) 00:07:44.426 
starting I/O failed: -6 00:07:44.426 Write completed with error (sct=0, sc=8) 00:07:44.426 starting I/O failed: -6 00:07:45.359 [2024-07-24 18:02:38.228643] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x920ac0 is same with the state(5) to be set 00:07:45.359 Read completed with error (sct=0, sc=8) 00:07:45.359 Read completed with error (sct=0, sc=8) 00:07:45.359 Read completed with error (sct=0, sc=8) 00:07:45.359 Write completed with error (sct=0, sc=8) 00:07:45.359 Read completed with error (sct=0, sc=8) 00:07:45.359 Read completed with error (sct=0, sc=8) 00:07:45.359 Read completed with error (sct=0, sc=8) 00:07:45.359 Read completed with error (sct=0, sc=8) 00:07:45.359 Read completed with error (sct=0, sc=8) 00:07:45.359 Read completed with error (sct=0, sc=8) 00:07:45.359 Write completed with error (sct=0, sc=8) 00:07:45.359 Write completed with error (sct=0, sc=8) 00:07:45.359 Write completed with error (sct=0, sc=8) 00:07:45.359 Write completed with error (sct=0, sc=8) 00:07:45.359 Write completed with error (sct=0, sc=8) 00:07:45.359 Read completed with error (sct=0, sc=8) 00:07:45.359 Read completed with error (sct=0, sc=8) 00:07:45.359 Write completed with error (sct=0, sc=8) 00:07:45.359 Read completed with error (sct=0, sc=8) 00:07:45.359 Write completed with error (sct=0, sc=8) 00:07:45.359 Write completed with error (sct=0, sc=8) 00:07:45.359 Read completed with error (sct=0, sc=8) 00:07:45.359 Read completed with error (sct=0, sc=8) 00:07:45.359 Write completed with error (sct=0, sc=8) 00:07:45.359 Read completed with error (sct=0, sc=8) 00:07:45.359 Read completed with error (sct=0, sc=8) 00:07:45.359 Read completed with error (sct=0, sc=8) 00:07:45.359 Read completed with error (sct=0, sc=8) 00:07:45.359 Read completed with error (sct=0, sc=8) 00:07:45.359 Write completed with error (sct=0, sc=8) 00:07:45.359 Write completed with error (sct=0, sc=8) 00:07:45.359 Write completed with error (sct=0, sc=8) 00:07:45.359 Read completed with error (sct=0, sc=8) 00:07:45.359 Write completed with error (sct=0, sc=8) 00:07:45.359 Read completed with error (sct=0, sc=8) 00:07:45.359 Read completed with error (sct=0, sc=8) 00:07:45.359 Write completed with error (sct=0, sc=8) 00:07:45.359 Read completed with error (sct=0, sc=8) 00:07:45.359 Read completed with error (sct=0, sc=8) 00:07:45.359 Write completed with error (sct=0, sc=8) 00:07:45.359 Read completed with error (sct=0, sc=8) 00:07:45.359 Read completed with error (sct=0, sc=8) 00:07:45.359 Read completed with error (sct=0, sc=8) 00:07:45.359 Read completed with error (sct=0, sc=8) 00:07:45.359 Read completed with error (sct=0, sc=8) 00:07:45.359 Read completed with error (sct=0, sc=8) 00:07:45.359 Read completed with error (sct=0, sc=8) 00:07:45.359 Read completed with error (sct=0, sc=8) 00:07:45.359 Read completed with error (sct=0, sc=8) 00:07:45.359 [2024-07-24 18:02:38.265913] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ff95c00d330 is same with the state(5) to be set 00:07:45.359 Read completed with error (sct=0, sc=8) 00:07:45.359 Read completed with error (sct=0, sc=8) 00:07:45.359 Read completed with error (sct=0, sc=8) 00:07:45.359 Read completed with error (sct=0, sc=8) 00:07:45.359 Read completed with error (sct=0, sc=8) 00:07:45.359 Read completed with error (sct=0, sc=8) 00:07:45.359 Read completed with error (sct=0, sc=8) 00:07:45.359 Read completed with error (sct=0, sc=8) 00:07:45.359 Write completed with error (sct=0, sc=8) 
00:07:45.359 Read completed with error (sct=0, sc=8) 00:07:45.359 Read completed with error (sct=0, sc=8) 00:07:45.359 Read completed with error (sct=0, sc=8) 00:07:45.359 Read completed with error (sct=0, sc=8) 00:07:45.359 Write completed with error (sct=0, sc=8) 00:07:45.359 Read completed with error (sct=0, sc=8) 00:07:45.359 Read completed with error (sct=0, sc=8) 00:07:45.359 Read completed with error (sct=0, sc=8) 00:07:45.359 Write completed with error (sct=0, sc=8) 00:07:45.359 Read completed with error (sct=0, sc=8) 00:07:45.359 Write completed with error (sct=0, sc=8) 00:07:45.359 Read completed with error (sct=0, sc=8) 00:07:45.359 Read completed with error (sct=0, sc=8) 00:07:45.359 Read completed with error (sct=0, sc=8) 00:07:45.359 Read completed with error (sct=0, sc=8) 00:07:45.359 Read completed with error (sct=0, sc=8) 00:07:45.359 Read completed with error (sct=0, sc=8) 00:07:45.359 Write completed with error (sct=0, sc=8) 00:07:45.359 Read completed with error (sct=0, sc=8) 00:07:45.359 Read completed with error (sct=0, sc=8) 00:07:45.359 Read completed with error (sct=0, sc=8) 00:07:45.359 Write completed with error (sct=0, sc=8) 00:07:45.359 Write completed with error (sct=0, sc=8) 00:07:45.359 Write completed with error (sct=0, sc=8) 00:07:45.359 Read completed with error (sct=0, sc=8) 00:07:45.359 [2024-07-24 18:02:38.266648] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91fa40 is same with the state(5) to be set 00:07:45.359 Read completed with error (sct=0, sc=8) 00:07:45.359 Write completed with error (sct=0, sc=8) 00:07:45.359 Write completed with error (sct=0, sc=8) 00:07:45.359 Read completed with error (sct=0, sc=8) 00:07:45.359 Read completed with error (sct=0, sc=8) 00:07:45.359 Write completed with error (sct=0, sc=8) 00:07:45.359 Read completed with error (sct=0, sc=8) 00:07:45.359 Read completed with error (sct=0, sc=8) 00:07:45.359 Read completed with error (sct=0, sc=8) 00:07:45.359 Write completed with error (sct=0, sc=8) 00:07:45.359 Write completed with error (sct=0, sc=8) 00:07:45.359 Read completed with error (sct=0, sc=8) 00:07:45.359 Write completed with error (sct=0, sc=8) 00:07:45.359 Read completed with error (sct=0, sc=8) 00:07:45.359 Read completed with error (sct=0, sc=8) 00:07:45.359 Read completed with error (sct=0, sc=8) 00:07:45.359 Read completed with error (sct=0, sc=8) 00:07:45.360 Write completed with error (sct=0, sc=8) 00:07:45.360 Write completed with error (sct=0, sc=8) 00:07:45.360 Write completed with error (sct=0, sc=8) 00:07:45.360 Read completed with error (sct=0, sc=8) 00:07:45.360 Write completed with error (sct=0, sc=8) 00:07:45.360 Read completed with error (sct=0, sc=8) 00:07:45.360 Read completed with error (sct=0, sc=8) 00:07:45.360 Write completed with error (sct=0, sc=8) 00:07:45.360 Read completed with error (sct=0, sc=8) 00:07:45.360 Read completed with error (sct=0, sc=8) 00:07:45.360 Write completed with error (sct=0, sc=8) 00:07:45.360 Read completed with error (sct=0, sc=8) 00:07:45.360 Read completed with error (sct=0, sc=8) 00:07:45.360 Write completed with error (sct=0, sc=8) 00:07:45.360 Read completed with error (sct=0, sc=8) 00:07:45.360 Read completed with error (sct=0, sc=8) 00:07:45.360 Read completed with error (sct=0, sc=8) 00:07:45.360 Read completed with error (sct=0, sc=8) 00:07:45.360 [2024-07-24 18:02:38.266812] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91f3e0 is same with the state(5) to be set 00:07:45.360 Read 
completed with error (sct=0, sc=8) 00:07:45.360 Read completed with error (sct=0, sc=8) 00:07:45.360 Read completed with error (sct=0, sc=8) 00:07:45.360 Read completed with error (sct=0, sc=8) 00:07:45.360 Write completed with error (sct=0, sc=8) 00:07:45.360 Read completed with error (sct=0, sc=8) 00:07:45.360 Write completed with error (sct=0, sc=8) 00:07:45.360 Write completed with error (sct=0, sc=8) 00:07:45.360 Read completed with error (sct=0, sc=8) 00:07:45.360 Read completed with error (sct=0, sc=8) 00:07:45.360 Read completed with error (sct=0, sc=8) 00:07:45.360 Read completed with error (sct=0, sc=8) 00:07:45.360 Read completed with error (sct=0, sc=8) 00:07:45.360 Read completed with error (sct=0, sc=8) 00:07:45.360 Write completed with error (sct=0, sc=8) 00:07:45.360 Write completed with error (sct=0, sc=8) 00:07:45.360 Read completed with error (sct=0, sc=8) 00:07:45.360 Read completed with error (sct=0, sc=8) 00:07:45.360 Write completed with error (sct=0, sc=8) 00:07:45.360 Read completed with error (sct=0, sc=8) 00:07:45.360 Read completed with error (sct=0, sc=8) 00:07:45.360 Write completed with error (sct=0, sc=8) 00:07:45.360 Write completed with error (sct=0, sc=8) 00:07:45.360 Read completed with error (sct=0, sc=8) 00:07:45.360 Read completed with error (sct=0, sc=8) 00:07:45.360 Read completed with error (sct=0, sc=8) 00:07:45.360 Read completed with error (sct=0, sc=8) 00:07:45.360 Read completed with error (sct=0, sc=8) 00:07:45.360 Read completed with error (sct=0, sc=8) 00:07:45.360 Write completed with error (sct=0, sc=8) 00:07:45.360 Read completed with error (sct=0, sc=8) 00:07:45.360 Write completed with error (sct=0, sc=8) 00:07:45.360 Write completed with error (sct=0, sc=8) 00:07:45.360 Read completed with error (sct=0, sc=8) 00:07:45.360 Read completed with error (sct=0, sc=8) 00:07:45.360 [2024-07-24 18:02:38.266968] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x91f000 is same with the state(5) to be set 00:07:45.360 Initializing NVMe Controllers 00:07:45.360 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:45.360 Controller IO queue size 128, less than required. 00:07:45.360 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:45.360 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:45.360 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:45.360 Initialization complete. Launching workers. 
00:07:45.360 ======================================================== 00:07:45.360 Latency(us) 00:07:45.360 Device Information : IOPS MiB/s Average min max 00:07:45.360 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 177.84 0.09 1010254.49 921.36 2001920.39 00:07:45.360 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 173.86 0.08 876041.54 319.67 2000918.54 00:07:45.360 ======================================================== 00:07:45.360 Total : 351.70 0.17 943906.28 319.67 2001920.39 00:07:45.360 00:07:45.360 [2024-07-24 18:02:38.267557] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x920ac0 (9): Bad file descriptor 00:07:45.360 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:07:45.360 18:02:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.360 18:02:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:07:45.360 18:02:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3261283 00:07:45.360 18:02:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:07:45.925 18:02:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:07:45.925 18:02:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3261283 00:07:45.925 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3261283) - No such process 00:07:45.925 18:02:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3261283 00:07:45.925 18:02:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:07:45.925 18:02:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 3261283 00:07:45.925 18:02:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:07:45.926 18:02:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:45.926 18:02:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:07:45.926 18:02:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:45.926 18:02:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 3261283 00:07:45.926 18:02:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:07:45.926 18:02:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:45.926 18:02:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:45.926 18:02:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:45.926 18:02:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:45.926 18:02:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.926 18:02:38 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:45.926 18:02:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.926 18:02:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:45.926 18:02:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.926 18:02:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:45.926 [2024-07-24 18:02:38.798247] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:45.926 18:02:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.926 18:02:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:45.926 18:02:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.926 18:02:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:45.926 18:02:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.926 18:02:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3261832 00:07:45.926 18:02:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:07:45.926 18:02:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:45.926 18:02:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3261832 00:07:45.926 18:02:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:45.926 EAL: No free 2048 kB hugepages reported on node 1 00:07:45.926 [2024-07-24 18:02:38.857807] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
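The delay/kill -0/sleep lines that follow are the script's wait loop: it polls the second perf run (pid 3261832) every 0.5 s until the process exits, giving up after ~20 iterations. Reconstructed from the logged statements (the exact delete_subsystem.sh body is not shown in the log):

# Shape of the wait loop in delete_subsystem.sh, per the logged statements.
delay=0
while kill -0 "$perf_pid" 2>/dev/null; do
  if (( delay++ > 20 )); then
    echo "perf (pid $perf_pid) did not exit" >&2
    exit 1
  fi
  sleep 0.5
done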
00:07:46.489 18:02:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:46.489 18:02:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3261832 00:07:46.489 18:02:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:46.745 18:02:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:46.745 18:02:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3261832 00:07:46.745 18:02:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:47.308 18:02:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:47.308 18:02:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3261832 00:07:47.308 18:02:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:47.873 18:02:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:47.873 18:02:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3261832 00:07:47.873 18:02:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:48.436 18:02:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:48.436 18:02:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3261832 00:07:48.437 18:02:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:49.000 18:02:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:49.000 18:02:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3261832 00:07:49.000 18:02:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:49.000 Initializing NVMe Controllers 00:07:49.000 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:49.000 Controller IO queue size 128, less than required. 00:07:49.000 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:49.000 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:49.000 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:49.000 Initialization complete. Launching workers. 
00:07:49.000 ========================================================
00:07:49.000                                                                             Latency(us)
00:07:49.000 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:07:49.000 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     128.00       0.06 1003075.80 1000145.23 1040831.02
00:07:49.000 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     128.00       0.06 1004508.10 1000131.97 1011711.80
00:07:49.000 ========================================================
00:07:49.000 Total                                                                    :     256.00       0.12 1003791.95 1000131.97 1040831.02
00:07:49.000
00:07:49.566 18:02:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:49.566 18:02:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3261832
00:07:49.566 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3261832) - No such process
00:07:49.566 18:02:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3261832
00:07:49.566 18:02:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:07:49.566 18:02:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:07:49.566 18:02:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup
00:07:49.566 18:02:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync
00:07:49.566 18:02:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:07:49.566 18:02:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e
00:07:49.566 18:02:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20}
00:07:49.566 18:02:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:07:49.566 rmmod nvme_tcp
00:07:49.566 rmmod nvme_fabrics
00:07:49.566 rmmod nvme_keyring
00:07:49.566 18:02:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:07:49.566 18:02:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e
00:07:49.566 18:02:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0
00:07:49.566 18:02:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 3261107 ']'
00:07:49.566 18:02:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 3261107
00:07:49.566 18:02:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 3261107 ']'
00:07:49.566 18:02:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 3261107
00:07:49.566 18:02:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname
00:07:49.566 18:02:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:49.566 18:02:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3261107
00:07:49.566 18:02:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:07:49.566 18:02:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '['
reactor_0 = sudo ']' 00:07:49.566 18:02:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3261107' 00:07:49.566 killing process with pid 3261107 00:07:49.566 18:02:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 3261107 00:07:49.566 18:02:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 3261107 00:07:49.566 18:02:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:49.566 18:02:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:49.566 18:02:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:49.566 18:02:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:49.566 18:02:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:49.566 18:02:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:49.566 18:02:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:49.566 18:02:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:52.094 18:02:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:52.094 00:07:52.094 real 0m16.116s 00:07:52.094 user 0m30.161s 00:07:52.094 sys 0m4.901s 00:07:52.094 18:02:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:52.094 18:02:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:52.095 ************************************ 00:07:52.095 END TEST nvmf_delete_subsystem 00:07:52.095 ************************************ 00:07:52.095 18:02:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:52.095 18:02:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:52.095 18:02:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:52.095 18:02:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:52.095 ************************************ 00:07:52.095 START TEST nvmf_host_management 00:07:52.095 ************************************ 00:07:52.095 18:02:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:52.095 * Looking for test storage... 
00:07:52.095 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:52.095 18:02:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:52.095 18:02:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:52.095 18:02:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:52.095 18:02:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:52.095 18:02:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:52.095 18:02:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:52.095 18:02:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:52.095 18:02:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:52.095 18:02:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:52.095 18:02:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:52.095 18:02:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:52.095 18:02:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:52.095 18:02:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:07:52.095 18:02:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:07:52.095 18:02:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:52.095 18:02:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:52.095 18:02:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:52.095 18:02:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:52.095 18:02:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:52.095 18:02:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:52.095 18:02:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:52.095 18:02:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:52.095 18:02:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.095 18:02:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.095 18:02:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.095 18:02:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:52.095 18:02:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:52.095 18:02:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:07:52.095 18:02:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:52.095 18:02:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:52.095 18:02:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:52.095 18:02:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:52.095 18:02:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:52.095 18:02:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:07:52.095 18:02:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:52.095 18:02:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:52.095 18:02:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:52.095 18:02:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:52.095 18:02:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:52.095 18:02:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:52.095 18:02:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:52.095 18:02:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:52.095 18:02:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:52.095 18:02:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:52.095 18:02:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:52.095 18:02:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:52.095 18:02:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:52.095 18:02:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:52.095 18:02:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:52.095 18:02:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:07:52.095 18:02:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:56.278 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:56.278 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:07:56.278 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:56.278 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:56.278 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:56.278 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:56.278 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:56.278 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:07:56.278 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:56.278 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:07:56.278 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:07:56.278 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:07:56.278 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:07:56.278 
18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:07:56.278 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:07:56.278 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:56.278 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:56.278 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:56.278 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:56.537 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:56.537 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:56.537 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:56.537 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:56.537 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:56.537 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:56.537 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:56.537 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:56.537 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:56.537 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:56.537 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:56.537 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:56.537 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:56.537 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:56.537 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:56.537 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:56.537 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:56.537 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:56.537 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:56.537 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:56.537 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:56.537 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:56.537 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 
-- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:56.537 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:56.537 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:56.537 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:56.537 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:56.537 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:56.537 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:56.537 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:56.537 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:56.537 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:56.537 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:56.537 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:56.537 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:56.537 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:56.537 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:56.537 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:56.537 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:56.537 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:56.537 Found net devices under 0000:86:00.0: cvl_0_0 00:07:56.537 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:56.537 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:56.537 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:56.538 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:56.538 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:56.538 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:56.538 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:56.538 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:56.538 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:56.538 Found net devices under 0000:86:00.1: cvl_0_1 00:07:56.538 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:56.538 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 
0 )) 00:07:56.538 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:07:56.538 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:56.538 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:56.538 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:56.538 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:56.538 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:56.538 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:56.538 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:56.538 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:56.538 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:56.538 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:56.538 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:56.538 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:56.538 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:56.538 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:56.538 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:56.538 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:56.538 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:56.538 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:56.538 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:56.538 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:56.538 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:56.538 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:56.538 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:56.538 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:56.538 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.158 ms 00:07:56.538 00:07:56.538 --- 10.0.0.2 ping statistics --- 00:07:56.538 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:56.538 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:07:56.538 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:56.538 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:56.538 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:07:56.538 00:07:56.538 --- 10.0.0.1 ping statistics --- 00:07:56.538 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:56.538 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:07:56.538 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:56.538 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:07:56.538 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:56.538 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:56.538 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:56.538 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:56.538 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:56.538 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:56.538 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:56.538 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:56.538 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:56.538 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:56.538 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:56.538 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:56.538 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:56.797 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=3265827 00:07:56.797 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 3265827 00:07:56.797 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 3265827 ']' 00:07:56.797 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:56.797 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:56.797 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:56.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
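Both directions ping cleanly, so the fabric is up before the target starts. Condensed from the nvmf_tcp_init trace above: the harness moves one port of the e810 pair into a private network namespace for the target and keeps the other in the root namespace for the initiator, so NVMe/TCP traffic really crosses the physical link. All interface names and addresses below are taken from the trace:

    ip netns add cvl_0_0_ns_spdk                                        # @248
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # @251: target port leaves the root namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # @254: initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # @255: target address
    ip link set cvl_0_1 up                                              # @258
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up                # @260
    ip netns exec cvl_0_0_ns_spdk ip link set lo up                     # @261
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # @264: admit the NVMe/TCP listener port
    ping -c 1 10.0.0.2                                                  # @267: initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # @268: target -> initiator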
00:07:56.797 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:56.797 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:56.797 18:02:49 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:56.797 [2024-07-24 18:02:49.668090] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 00:07:56.797 [2024-07-24 18:02:49.668133] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:56.797 EAL: No free 2048 kB hugepages reported on node 1 00:07:56.797 [2024-07-24 18:02:49.725156] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:56.797 [2024-07-24 18:02:49.804842] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:56.797 [2024-07-24 18:02:49.804877] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:56.797 [2024-07-24 18:02:49.804884] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:56.797 [2024-07-24 18:02:49.804890] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:56.797 [2024-07-24 18:02:49.804895] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:56.797 [2024-07-24 18:02:49.804993] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:56.797 [2024-07-24 18:02:49.805012] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:56.797 [2024-07-24 18:02:49.805123] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:56.797 [2024-07-24 18:02:49.805124] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:07:57.730 18:02:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:57.730 18:02:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:07:57.730 18:02:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:57.730 18:02:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:57.730 18:02:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:57.730 18:02:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:57.730 18:02:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:57.730 18:02:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.730 18:02:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:57.730 [2024-07-24 18:02:50.525892] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:57.730 18:02:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.730 18:02:50 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:57.730 18:02:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:57.730 18:02:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:57.730 18:02:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:57.730 18:02:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:57.730 18:02:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:57.730 18:02:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.730 18:02:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:57.730 Malloc0 00:07:57.730 [2024-07-24 18:02:50.585601] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:57.730 18:02:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.730 18:02:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:57.730 18:02:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:57.730 18:02:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:57.730 18:02:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3266093 00:07:57.731 18:02:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3266093 /var/tmp/bdevperf.sock 00:07:57.731 18:02:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 3266093 ']' 00:07:57.731 18:02:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:57.731 18:02:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:57.731 18:02:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:57.731 18:02:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:57.731 18:02:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:57.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
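The target was assembled above from one traced RPC plus a batch piped through rpc_cmd at host_management.sh@30. The batch file is not echoed into the log, so the middle commands below are a reconstruction from the traced defaults (MALLOC_BDEV_SIZE=64 and MALLOC_BLOCK_SIZE=512 at @11-12, the "Malloc0" output, and the cnode0 subnqn that bdevperf attaches to below); the -a flag and the serial number are assumptions:

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192                      # traced at @18
    rpc_cmd bdev_malloc_create 64 512 -b Malloc0                         # reconstructed
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 # reconstructed; -a and -s assumed
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0     # reconstructed
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420   # yields the listener notice above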
00:07:57.731 18:02:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:07:57.731 18:02:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:57.731 18:02:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:07:57.731 18:02:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:57.731 18:02:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:57.731 18:02:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:57.731 { 00:07:57.731 "params": { 00:07:57.731 "name": "Nvme$subsystem", 00:07:57.731 "trtype": "$TEST_TRANSPORT", 00:07:57.731 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:57.731 "adrfam": "ipv4", 00:07:57.731 "trsvcid": "$NVMF_PORT", 00:07:57.731 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:57.731 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:57.731 "hdgst": ${hdgst:-false}, 00:07:57.731 "ddgst": ${ddgst:-false} 00:07:57.731 }, 00:07:57.731 "method": "bdev_nvme_attach_controller" 00:07:57.731 } 00:07:57.731 EOF 00:07:57.731 )") 00:07:57.731 18:02:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:07:57.731 18:02:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:07:57.731 18:02:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:07:57.731 18:02:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:57.731 "params": { 00:07:57.731 "name": "Nvme0", 00:07:57.731 "trtype": "tcp", 00:07:57.731 "traddr": "10.0.0.2", 00:07:57.731 "adrfam": "ipv4", 00:07:57.731 "trsvcid": "4420", 00:07:57.731 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:57.731 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:57.731 "hdgst": false, 00:07:57.731 "ddgst": false 00:07:57.731 }, 00:07:57.731 "method": "bdev_nvme_attach_controller" 00:07:57.731 }' 00:07:57.731 [2024-07-24 18:02:50.678165] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 00:07:57.731 [2024-07-24 18:02:50.678209] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3266093 ] 00:07:57.731 EAL: No free 2048 kB hugepages reported on node 1 00:07:57.731 [2024-07-24 18:02:50.734023] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.731 [2024-07-24 18:02:50.806677] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.297 Running I/O for 10 seconds... 
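For readability, this is the config fragment that gen_nvmf_target_json printed above and bdevperf consumed on /dev/fd/63 via process substitution (pretty-printed only; gen_nvmf_target_json wraps it into the full JSON document that bdevperf actually reads):

    {
      "params": {
        "name": "Nvme0",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode0",
        "hostnqn": "nqn.2016-06.io.spdk:host0",
        "hdgst": false,
        "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }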
00:07:58.556 18:02:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:58.556 18:02:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:07:58.556 18:02:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:58.556 18:02:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.556 18:02:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:58.556 18:02:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.556 18:02:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:58.556 18:02:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:58.556 18:02:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:58.557 18:02:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:58.557 18:02:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:58.557 18:02:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:58.557 18:02:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:58.557 18:02:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:58.557 18:02:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:58.557 18:02:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:58.557 18:02:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.557 18:02:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:58.557 18:02:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.557 18:02:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=771 00:07:58.557 18:02:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 771 -ge 100 ']' 00:07:58.557 18:02:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:58.557 18:02:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:58.557 18:02:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:58.557 18:02:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:58.557 18:02:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.557 18:02:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:58.557 [2024-07-24 
18:02:51.562162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:58.557 [2024-07-24 18:02:51.562198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.557 [2024-07-24 18:02:51.562208] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:07:58.557 [2024-07-24 18:02:51.562215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.557 [2024-07-24 18:02:51.562222] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:07:58.557 [2024-07-24 18:02:51.562229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.557 [2024-07-24 18:02:51.562236] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:07:58.557 [2024-07-24 18:02:51.562242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.557 [2024-07-24 18:02:51.562249] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x718980 is same with the state(5) to be set 00:07:58.557 [2024-07-24 18:02:51.563482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:114560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.557 [2024-07-24 18:02:51.563498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.557 [2024-07-24 18:02:51.563511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:106496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.557 [2024-07-24 18:02:51.563518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.557 [2024-07-24 18:02:51.563527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:106624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.557 [2024-07-24 18:02:51.563534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.557 [2024-07-24 18:02:51.563542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:106752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.557 [2024-07-24 18:02:51.563548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.557 [2024-07-24 18:02:51.563561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:106880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.557 [2024-07-24 18:02:51.563567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.557 [2024-07-24 18:02:51.563575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:107008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.557 [2024-07-24 18:02:51.563582] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.557 [2024-07-24 18:02:51.563591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:107136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.557 [2024-07-24 18:02:51.563597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.557 [2024-07-24 18:02:51.563605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:107264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.557 [2024-07-24 18:02:51.563611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.557 [2024-07-24 18:02:51.563619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:107392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.557 [2024-07-24 18:02:51.563625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.557 [2024-07-24 18:02:51.563633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:107520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.557 [2024-07-24 18:02:51.563640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.557 [2024-07-24 18:02:51.563648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:107648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.557 [2024-07-24 18:02:51.563654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.557 [2024-07-24 18:02:51.563662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:107776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.557 [2024-07-24 18:02:51.563668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.557 [2024-07-24 18:02:51.563676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:107904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.557 [2024-07-24 18:02:51.563682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.557 [2024-07-24 18:02:51.563690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:108032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.557 [2024-07-24 18:02:51.563697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.557 [2024-07-24 18:02:51.563705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:108160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.557 [2024-07-24 18:02:51.563710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.557 [2024-07-24 18:02:51.563718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:108288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.557 [2024-07-24 18:02:51.563724] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.557 [2024-07-24 18:02:51.563732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:108416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.557 [2024-07-24 18:02:51.563743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the identical READ / ABORTED - SQ DELETION notice pair repeats for every remaining queued command: cid 16 through cid 62, lba 108544 through 114432 in len:128 steps, timestamps 18:02:51.563752 through 18:02:51.564424 ...]
00:07:58.558 [2024-07-24 18:02:51.564431] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb4a660 is same with the state(5) to be set 00:07:58.558 [2024-07-24 18:02:51.564480] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xb4a660 was disconnected and freed. reset controller.
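For triage: each READ/ABORTED pair above is one in-flight command discarded when the target deleted the submission queue, so the size of the storm reflects the I/O depth in flight at the moment of teardown. A minimal shell sketch for sizing such a storm from a saved console log (build.log is a hypothetical path for the captured output):

grep -o 'ABORTED - SQ DELETION' build.log | wc -l
# recover the aborted LBA span from the matching READ notices
grep -oE 'READ sqid:1 cid:[0-9]+ nsid:1 lba:[0-9]+' build.log \
  | awk -F'lba:' 'NR==1{min=$2} {max=$2} END{print min " .. " max}'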
00:07:58.558 [2024-07-24 18:02:51.565403] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:07:58.558 18:02:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.558 18:02:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:58.559 18:02:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.559 18:02:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:58.559 task offset: 114560 on job bdev=Nvme0n1 fails 00:07:58.559 00:07:58.559 Latency(us) 00:07:58.559 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:58.559 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:58.559 Job: Nvme0n1 ended in about 0.44 seconds with error 00:07:58.559 Verification LBA range: start 0x0 length 0x400 00:07:58.559 Nvme0n1 : 0.44 1870.65 116.92 143.90 0.00 30991.01 4400.27 27088.21 00:07:58.559 =================================================================================================================== 00:07:58.559 Total : 1870.65 116.92 143.90 0.00 30991.01 4400.27 27088.21 00:07:58.559 [2024-07-24 18:02:51.567010] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:58.559 [2024-07-24 18:02:51.567024] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x718980 (9): Bad file descriptor 00:07:58.559 [2024-07-24 18:02:51.568419] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:07:58.559 [2024-07-24 18:02:51.568501] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:07:58.559 [2024-07-24 18:02:51.568523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.559 [2024-07-24 18:02:51.568537] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:07:58.559 [2024-07-24 18:02:51.568544] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:07:58.559 [2024-07-24 18:02:51.568551] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:07:58.559 [2024-07-24 18:02:51.568558] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x718980 00:07:58.559 [2024-07-24 18:02:51.568576] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x718980 (9): Bad file descriptor 00:07:58.559 [2024-07-24 18:02:51.568595] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:07:58.559 [2024-07-24 18:02:51.568602] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:07:58.559 [2024-07-24 18:02:51.568610] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:07:58.559 [2024-07-24 18:02:51.568621] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
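The rejected connects above are this host_management step working as intended: the target refuses 'nqn.2016-06.io.spdk:host0' until the rpc_cmd nvmf_subsystem_add_host call admits it, and the host-side reset loop fails in the meantime. A hedged sketch of the same allow-list flow driven by hand (rpc.py path relative to an SPDK checkout; the allow_any_host toggle is assumed from SPDK's RPC surface and does not appear in this excerpt):

# pin the subsystem to an explicit allow list, then admit one host
scripts/rpc.py nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode0
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0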
00:07:58.559 18:02:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.559 18:02:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:59.933 18:02:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3266093 00:07:59.933 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3266093) - No such process 00:07:59.933 18:02:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:59.933 18:02:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:59.933 18:02:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:59.933 18:02:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:59.933 18:02:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:07:59.933 18:02:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:07:59.933 18:02:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:59.933 18:02:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:59.933 { 00:07:59.933 "params": { 00:07:59.933 "name": "Nvme$subsystem", 00:07:59.933 "trtype": "$TEST_TRANSPORT", 00:07:59.933 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:59.933 "adrfam": "ipv4", 00:07:59.933 "trsvcid": "$NVMF_PORT", 00:07:59.933 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:59.933 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:59.933 "hdgst": ${hdgst:-false}, 00:07:59.933 "ddgst": ${ddgst:-false} 00:07:59.933 }, 00:07:59.933 "method": "bdev_nvme_attach_controller" 00:07:59.933 } 00:07:59.933 EOF 00:07:59.933 )") 00:07:59.933 18:02:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:07:59.933 18:02:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:07:59.933 18:02:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:07:59.933 18:02:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:59.933 "params": { 00:07:59.933 "name": "Nvme0", 00:07:59.933 "trtype": "tcp", 00:07:59.933 "traddr": "10.0.0.2", 00:07:59.933 "adrfam": "ipv4", 00:07:59.933 "trsvcid": "4420", 00:07:59.933 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:59.933 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:59.933 "hdgst": false, 00:07:59.933 "ddgst": false 00:07:59.933 }, 00:07:59.933 "method": "bdev_nvme_attach_controller" 00:07:59.933 }' 00:07:59.933 [2024-07-24 18:02:52.630782] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 
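The gen_nvmf_target_json helper above only fabricates a one-controller JSON config and hands it to bdevperf on fd 62. The same run, sketched standalone with the values this log printed (the /tmp path is illustrative, and the outer subsystems/bdev envelope is assumed from SPDK's JSON config format rather than shown in this excerpt):

cat > /tmp/nvme0.json <<'EOF'
{"subsystems": [{"subsystem": "bdev", "config": [{
  "method": "bdev_nvme_attach_controller",
  "params": {"name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
             "adrfam": "ipv4", "trsvcid": "4420",
             "subnqn": "nqn.2016-06.io.spdk:cnode0",
             "hostnqn": "nqn.2016-06.io.spdk:host0",
             "hdgst": false, "ddgst": false}}]}]}
EOF
build/examples/bdevperf --json /tmp/nvme0.json -q 64 -o 65536 -w verify -t 1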
00:07:59.933 [2024-07-24 18:02:52.630830] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3266343 ] 00:07:59.933 EAL: No free 2048 kB hugepages reported on node 1 00:07:59.933 [2024-07-24 18:02:52.685841] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.933 [2024-07-24 18:02:52.757178] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.933 Running I/O for 1 seconds... 00:08:01.321 00:08:01.321 Latency(us) 00:08:01.321 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:01.321 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:01.321 Verification LBA range: start 0x0 length 0x400 00:08:01.321 Nvme0n1 : 1.02 2006.74 125.42 0.00 0.00 31404.20 5523.75 26838.55 00:08:01.321 =================================================================================================================== 00:08:01.321 Total : 2006.74 125.42 0.00 0.00 31404.20 5523.75 26838.55 00:08:01.321 18:02:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:08:01.321 18:02:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:01.321 18:02:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:08:01.321 18:02:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:01.321 18:02:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:08:01.321 18:02:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:01.321 18:02:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:08:01.321 18:02:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:01.321 18:02:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:08:01.321 18:02:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:01.321 18:02:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:01.321 rmmod nvme_tcp 00:08:01.321 rmmod nvme_fabrics 00:08:01.321 rmmod nvme_keyring 00:08:01.321 18:02:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:01.321 18:02:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:08:01.321 18:02:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:08:01.321 18:02:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 3265827 ']' 00:08:01.321 18:02:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 3265827 00:08:01.321 18:02:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 3265827 ']' 00:08:01.321 18:02:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 3265827 00:08:01.321 18:02:54 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:08:01.321 18:02:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:01.321 18:02:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3265827 00:08:01.321 18:02:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:01.321 18:02:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:01.321 18:02:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3265827' 00:08:01.321 killing process with pid 3265827 00:08:01.321 18:02:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 3265827 00:08:01.321 18:02:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 3265827 00:08:01.580 [2024-07-24 18:02:54.459248] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:01.580 18:02:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:01.580 18:02:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:01.580 18:02:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:01.580 18:02:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:01.580 18:02:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:01.580 18:02:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:01.580 18:02:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:01.580 18:02:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:03.481 18:02:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:03.481 18:02:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:03.481 00:08:03.481 real 0m11.776s 00:08:03.481 user 0m22.311s 00:08:03.481 sys 0m4.591s 00:08:03.481 18:02:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:03.481 18:02:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:03.481 ************************************ 00:08:03.481 END TEST nvmf_host_management 00:08:03.481 ************************************ 00:08:03.740 18:02:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:03.740 18:02:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:03.740 18:02:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:03.740 18:02:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:03.740 ************************************ 00:08:03.740 START TEST nvmf_lvol 00:08:03.740 ************************************ 00:08:03.740 
18:02:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:03.740 * Looking for test storage... 00:08:03.740 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:03.740 18:02:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:03.740 18:02:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:03.740 18:02:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:03.740 18:02:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:03.740 18:02:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:03.740 18:02:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:03.740 18:02:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:03.740 18:02:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:03.740 18:02:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:03.740 18:02:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:03.740 18:02:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:03.740 18:02:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:03.740 18:02:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:08:03.740 18:02:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:08:03.740 18:02:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:03.740 18:02:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:03.740 18:02:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:03.740 18:02:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:03.740 18:02:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:03.740 18:02:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:03.740 18:02:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:03.740 18:02:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:03.740 18:02:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... the same three toolchain dirs repeated by each re-sourcing of export.sh ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.740 18:02:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... same repeated toolchain dirs ...]:/var/lib/snapd/snap/bin 00:08:03.740 18:02:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... same repeated toolchain dirs ...]:/var/lib/snapd/snap/bin 00:08:03.740 18:02:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:03.740 18:02:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[... same repeated toolchain dirs ...]:/var/lib/snapd/snap/bin 00:08:03.740 18:02:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:08:03.740 18:02:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:03.740 18:02:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:03.740 18:02:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:03.740 18:02:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:03.740 18:02:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:03.740 18:02:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:03.740 18:02:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:08:03.740 18:02:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:03.740 18:02:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:03.740 18:02:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:03.740 18:02:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:08:03.740 18:02:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:03.740 18:02:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:03.740 18:02:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:03.740 18:02:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:03.740 18:02:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:03.740 18:02:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:03.740 18:02:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:03.740 18:02:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:03.740 18:02:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:03.740 18:02:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:03.740 18:02:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:03.740 18:02:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:03.740 18:02:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:03.740 18:02:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:08:03.740 18:02:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:10.403 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:10.403 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:08:10.403 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:10.403 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:10.403 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:10.403 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:10.403 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:10.403 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:08:10.403 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:10.403 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:08:10.403 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:08:10.403 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:08:10.403 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:08:10.403 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 
00:08:10.403 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:08:10.403 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:10.403 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:10.403 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:10.403 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:10.403 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:10.403 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:10.403 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:10.403 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:10.403 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:10.403 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:10.403 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:10.403 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:10.403 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:10.403 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:10.403 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:10.403 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:10.403 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:10.403 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:10.403 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:10.403 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:10.403 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:10.403 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:10.403 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:10.403 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:10.403 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:10.403 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:10.403 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:10.403 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:10.403 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:10.403 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:10.403 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:10.403 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:10.403 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:10.403 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:10.403 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:10.403 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:10.403 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:10.403 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:10.403 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:10.403 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:10.403 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:10.403 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:10.403 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:10.403 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:10.403 Found net devices under 0000:86:00.0: cvl_0_0 00:08:10.403 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:10.403 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:10.403 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:10.403 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:10.403 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:10.403 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:10.403 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:10.403 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:10.403 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:10.403 Found net devices under 0000:86:00.1: cvl_0_1 00:08:10.403 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:10.403 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:10.403 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:08:10.403 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:10.403 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:10.403 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:10.403 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:10.403 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:10.403 18:03:02 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:10.403 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:10.403 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:10.403 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:10.403 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:10.403 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:10.403 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:10.403 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:10.403 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:10.403 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:10.403 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:10.403 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:10.403 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:10.404 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:10.404 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:10.404 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:10.404 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:10.404 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:10.404 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:10.404 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:08:10.404 00:08:10.404 --- 10.0.0.2 ping statistics --- 00:08:10.404 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:10.404 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:08:10.404 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:10.404 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:10.404 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.083 ms 00:08:10.404 00:08:10.404 --- 10.0.0.1 ping statistics --- 00:08:10.404 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:10.404 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:08:10.404 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:10.404 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:08:10.404 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:10.404 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:10.404 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:10.404 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:10.404 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:10.404 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:10.404 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:10.404 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:10.404 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:10.404 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:10.404 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:10.404 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=3270275 00:08:10.404 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:10.404 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 3270275 00:08:10.404 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 3270275 ']' 00:08:10.404 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:10.404 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:10.404 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:10.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:10.404 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:10.404 18:03:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:10.404 [2024-07-24 18:03:02.565800] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 
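Everything the two-way ping check above relies on was set up a few messages earlier with stock iproute2: the target-facing port moves into its own network namespace and the initiator pings across. Condensed from the commands this log just ran (cvl_0_0/cvl_0_1 are this rig's E810 netdevs):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target side lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                 # initiator -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target namespace -> initiator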
00:08:10.404 [2024-07-24 18:03:02.565844] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:10.404 EAL: No free 2048 kB hugepages reported on node 1 00:08:10.404 [2024-07-24 18:03:02.618786] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:10.404 [2024-07-24 18:03:02.703543] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:10.404 [2024-07-24 18:03:02.703575] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:10.404 [2024-07-24 18:03:02.703582] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:10.404 [2024-07-24 18:03:02.703587] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:10.404 [2024-07-24 18:03:02.703592] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:10.404 [2024-07-24 18:03:02.703634] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:10.404 [2024-07-24 18:03:02.703731] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.404 [2024-07-24 18:03:02.703732] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:10.404 18:03:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:10.404 18:03:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:08:10.404 18:03:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:10.404 18:03:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:10.404 18:03:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:10.404 18:03:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:10.404 18:03:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:10.667 [2024-07-24 18:03:03.569256] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:10.667 18:03:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:10.925 18:03:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:10.925 18:03:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:10.925 18:03:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:10.925 18:03:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:11.182 18:03:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:11.440 18:03:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=f88dba59-e399-4210-b4ba-2a3563218ec8 
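Under the xtrace noise, the storage stack just assembled is four RPCs deep: two malloc bdevs striped into a raid0, with a logical-volume store on top. Sketched as one would type it against a running target (each run mints a fresh lvstore UUID; f88dba59-... is this run's):

scripts/rpc.py bdev_malloc_create 64 512                    # -> Malloc0
scripts/rpc.py bdev_malloc_create 64 512                    # -> Malloc1
scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$(scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs)    # prints the lvstore UUID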
00:08:11.440 18:03:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f88dba59-e399-4210-b4ba-2a3563218ec8 lvol 20 00:08:11.698 18:03:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=6f82634a-676f-4339-ba45-a5197594f43e 00:08:11.698 18:03:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:11.698 18:03:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6f82634a-676f-4339-ba45-a5197594f43e 00:08:11.955 18:03:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:12.213 [2024-07-24 18:03:05.039170] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:12.213 18:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:12.213 18:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3270749 00:08:12.214 18:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:12.214 18:03:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:12.214 EAL: No free 2048 kB hugepages reported on node 1 00:08:13.582 18:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 6f82634a-676f-4339-ba45-a5197594f43e MY_SNAPSHOT 00:08:13.582 18:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=2183d701-403c-432f-89eb-44150b308cd5 00:08:13.582 18:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 6f82634a-676f-4339-ba45-a5197594f43e 30 00:08:13.839 18:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 2183d701-403c-432f-89eb-44150b308cd5 MY_CLONE 00:08:14.096 18:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=408146e2-04e4-48d2-9080-267c34c10496 00:08:14.096 18:03:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 408146e2-04e4-48d2-9080-267c34c10496 00:08:14.660 18:03:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3270749 00:08:22.748 Initializing NVMe Controllers 00:08:22.748 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:22.748 Controller IO queue size 128, less than required. 00:08:22.748 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
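Stripped of the xtrace, here is the lvol lifecycle the test drove between launching and reaping spdk_nvme_perf; the UUIDs are the ones this run produced, and a rerun mints new ones:

scripts/rpc.py bdev_lvol_snapshot 6f82634a-676f-4339-ba45-a5197594f43e MY_SNAPSHOT
scripts/rpc.py bdev_lvol_resize  6f82634a-676f-4339-ba45-a5197594f43e 30    # grow 20 -> 30
scripts/rpc.py bdev_lvol_clone   2183d701-403c-432f-89eb-44150b308cd5 MY_CLONE
scripts/rpc.py bdev_lvol_inflate 408146e2-04e4-48d2-9080-267c34c10496       # detach the clone from its snapshot

All four land while perf is hammering the exported namespace, which is what this step exercises.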
00:08:22.748 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:22.748 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:22.748 Initialization complete. Launching workers. 00:08:22.748 ======================================================== 00:08:22.748 Latency(us) 00:08:22.748 Device Information : IOPS MiB/s Average min max 00:08:22.748 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12413.23 48.49 10320.14 1496.97 103413.05 00:08:22.748 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12253.33 47.86 10451.89 3238.13 42537.55 00:08:22.748 ======================================================== 00:08:22.748 Total : 24666.56 96.35 10385.59 1496.97 103413.05 00:08:22.748 00:08:22.748 18:03:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:22.748 18:03:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 6f82634a-676f-4339-ba45-a5197594f43e 00:08:23.005 18:03:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f88dba59-e399-4210-b4ba-2a3563218ec8 00:08:23.263 18:03:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:23.263 18:03:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:23.263 18:03:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:23.263 18:03:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:23.263 18:03:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:08:23.263 18:03:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:23.263 18:03:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:08:23.263 18:03:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:23.263 18:03:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:23.263 rmmod nvme_tcp 00:08:23.263 rmmod nvme_fabrics 00:08:23.263 rmmod nvme_keyring 00:08:23.263 18:03:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:23.263 18:03:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:08:23.263 18:03:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:08:23.263 18:03:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 3270275 ']' 00:08:23.263 18:03:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 3270275 00:08:23.263 18:03:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 3270275 ']' 00:08:23.263 18:03:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 3270275 00:08:23.263 18:03:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:08:23.264 18:03:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:23.264 18:03:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3270275 00:08:23.264 18:03:16 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:23.264 18:03:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:23.264 18:03:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3270275' 00:08:23.264 killing process with pid 3270275 00:08:23.264 18:03:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 3270275 00:08:23.264 18:03:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 3270275 00:08:23.522 18:03:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:23.522 18:03:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:23.522 18:03:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:23.522 18:03:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:23.522 18:03:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:23.522 18:03:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:23.522 18:03:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:23.522 18:03:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:25.425 18:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:25.425 00:08:25.425 real 0m21.881s 00:08:25.425 user 1m3.823s 00:08:25.425 sys 0m6.874s 00:08:25.425 18:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:25.425 18:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:25.425 ************************************ 00:08:25.425 END TEST nvmf_lvol 00:08:25.425 ************************************ 00:08:25.683 18:03:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:25.683 18:03:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:25.683 18:03:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:25.683 18:03:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:25.683 ************************************ 00:08:25.683 START TEST nvmf_lvs_grow 00:08:25.683 ************************************ 00:08:25.683 18:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:25.683 * Looking for test storage... 
00:08:25.683 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:25.683 18:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:25.683 18:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:25.683 18:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:25.683 18:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:25.683 18:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:25.683 18:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:25.683 18:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:25.683 18:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:25.683 18:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:25.683 18:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:25.683 18:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:25.683 18:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:25.683 18:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:08:25.683 18:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:08:25.683 18:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:25.683 18:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:25.683 18:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:25.683 18:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:25.683 18:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:25.683 18:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:25.683 18:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:25.683 18:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:25.683 18:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.683 18:03:18 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.683 18:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.683 18:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:25.683 18:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.684 18:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:08:25.684 18:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:25.684 18:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:25.684 18:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:25.684 18:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:25.684 18:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:25.684 18:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:25.684 18:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:25.684 18:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:25.684 18:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:25.684 18:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:25.684 18:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:25.684 18:03:18 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:25.684 18:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:25.684 18:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:25.684 18:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:25.684 18:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:25.684 18:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:25.684 18:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:25.684 18:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:25.684 18:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:25.684 18:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:25.684 18:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:08:25.684 18:03:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:30.946 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:30.946 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:08:30.946 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:30.946 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:30.946 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:30.946 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:30.946 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:30.946 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:08:30.946 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:30.946 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:08:30.946 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:08:30.946 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:08:30.946 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:08:30.946 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:08:30.946 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:08:30.946 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:30.946 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:30.947 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:30.947 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:30.947 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:30.947 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:30.947 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:30.947 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:30.947 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:30.947 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:30.947 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:30.947 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:30.947 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:30.947 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:30.947 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:30.947 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:30.947 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:30.947 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:30.947 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:30.947 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:30.947 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:30.947 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:30.947 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:30.947 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:30.947 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:30.947 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:30.947 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:30.947 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:30.947 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:30.947 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:30.947 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:30.947 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:30.947 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:30.947 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:30.947 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:30.947 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:30.947 
18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:30.947 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:30.947 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:30.947 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:30.947 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:30.947 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:30.947 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:30.947 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:30.947 Found net devices under 0000:86:00.0: cvl_0_0 00:08:30.947 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:30.947 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:30.947 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:30.947 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:30.947 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:30.947 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:30.947 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:30.947 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:30.947 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:30.947 Found net devices under 0000:86:00.1: cvl_0_1 00:08:30.947 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:30.947 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:30.947 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:08:30.947 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:30.947 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:30.947 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:30.947 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:30.947 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:30.947 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:30.947 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:30.947 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:30.947 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:30.947 18:03:23 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:30.947 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:30.947 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:30.947 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:30.947 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:30.947 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:30.947 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:30.947 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:30.947 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:30.947 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:30.947 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:30.947 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:30.947 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:30.947 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:30.947 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:30.947 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.266 ms 00:08:30.947 00:08:30.947 --- 10.0.0.2 ping statistics --- 00:08:30.947 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:30.947 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:08:30.947 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:30.947 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:30.947 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:08:30.947 00:08:30.947 --- 10.0.0.1 ping statistics --- 00:08:30.947 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:30.947 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:08:30.947 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:30.947 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:08:30.947 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:30.947 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:30.947 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:30.947 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:30.947 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:30.947 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:30.947 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:30.947 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:30.947 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:30.947 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:30.947 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:30.947 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=3276471 00:08:30.947 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 3276471 00:08:30.947 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 3276471 ']' 00:08:30.947 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:30.947 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:30.947 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:30.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:30.947 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:30.947 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:30.948 18:03:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:30.948 [2024-07-24 18:03:23.752457] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 
00:08:30.948 [2024-07-24 18:03:23.752512] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:30.948 EAL: No free 2048 kB hugepages reported on node 1 00:08:30.948 [2024-07-24 18:03:23.808968] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.948 [2024-07-24 18:03:23.888926] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:30.948 [2024-07-24 18:03:23.888961] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:30.948 [2024-07-24 18:03:23.888968] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:30.948 [2024-07-24 18:03:23.888973] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:30.948 [2024-07-24 18:03:23.888978] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:30.948 [2024-07-24 18:03:23.888996] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.515 18:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:31.515 18:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:08:31.515 18:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:31.515 18:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:31.515 18:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:31.515 18:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:31.515 18:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:31.773 [2024-07-24 18:03:24.731108] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:31.773 18:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:31.773 18:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:31.773 18:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:31.773 18:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:31.773 ************************************ 00:08:31.773 START TEST lvs_grow_clean 00:08:31.773 ************************************ 00:08:31.773 18:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:08:31.773 18:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:31.773 18:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:31.773 18:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:31.773 18:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local 
aio_init_size_mb=200 00:08:31.773 18:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:31.773 18:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:31.773 18:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:31.773 18:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:31.773 18:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:32.031 18:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:32.031 18:03:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:32.289 18:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=7e10a9be-7f60-4d16-8a81-80e7c8d71691 00:08:32.289 18:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7e10a9be-7f60-4d16-8a81-80e7c8d71691 00:08:32.289 18:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:32.289 18:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:32.290 18:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:32.290 18:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 7e10a9be-7f60-4d16-8a81-80e7c8d71691 lvol 150 00:08:32.548 18:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=8d393334-7783-4f41-9d0f-56915fa6d104 00:08:32.548 18:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:32.548 18:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:32.806 [2024-07-24 18:03:25.657179] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:32.806 [2024-07-24 18:03:25.657226] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:32.806 true 00:08:32.806 18:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7e10a9be-7f60-4d16-8a81-80e7c8d71691 00:08:32.806 18:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:32.806 18:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:32.806 18:03:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:33.064 18:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8d393334-7783-4f41-9d0f-56915fa6d104 00:08:33.321 18:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:33.321 [2024-07-24 18:03:26.299123] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:33.322 18:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:33.580 18:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:33.580 18:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3276982 00:08:33.580 18:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:33.580 18:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3276982 /var/tmp/bdevperf.sock 00:08:33.580 18:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 3276982 ']' 00:08:33.580 18:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:33.580 18:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:33.580 18:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:33.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:33.580 18:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:33.580 18:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:33.580 [2024-07-24 18:03:26.496544] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 
00:08:33.580 [2024-07-24 18:03:26.496589] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3276982 ] 00:08:33.580 EAL: No free 2048 kB hugepages reported on node 1 00:08:33.580 [2024-07-24 18:03:26.550268] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.580 [2024-07-24 18:03:26.628393] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:33.868 18:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:33.868 18:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:08:33.868 18:03:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:34.161 Nvme0n1 00:08:34.161 18:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:34.419 [ 00:08:34.419 { 00:08:34.419 "name": "Nvme0n1", 00:08:34.419 "aliases": [ 00:08:34.419 "8d393334-7783-4f41-9d0f-56915fa6d104" 00:08:34.419 ], 00:08:34.419 "product_name": "NVMe disk", 00:08:34.419 "block_size": 4096, 00:08:34.419 "num_blocks": 38912, 00:08:34.419 "uuid": "8d393334-7783-4f41-9d0f-56915fa6d104", 00:08:34.419 "assigned_rate_limits": { 00:08:34.419 "rw_ios_per_sec": 0, 00:08:34.419 "rw_mbytes_per_sec": 0, 00:08:34.419 "r_mbytes_per_sec": 0, 00:08:34.419 "w_mbytes_per_sec": 0 00:08:34.419 }, 00:08:34.419 "claimed": false, 00:08:34.419 "zoned": false, 00:08:34.419 "supported_io_types": { 00:08:34.420 "read": true, 00:08:34.420 "write": true, 00:08:34.420 "unmap": true, 00:08:34.420 "flush": true, 00:08:34.420 "reset": true, 00:08:34.420 "nvme_admin": true, 00:08:34.420 "nvme_io": true, 00:08:34.420 "nvme_io_md": false, 00:08:34.420 "write_zeroes": true, 00:08:34.420 "zcopy": false, 00:08:34.420 "get_zone_info": false, 00:08:34.420 "zone_management": false, 00:08:34.420 "zone_append": false, 00:08:34.420 "compare": true, 00:08:34.420 "compare_and_write": true, 00:08:34.420 "abort": true, 00:08:34.420 "seek_hole": false, 00:08:34.420 "seek_data": false, 00:08:34.420 "copy": true, 00:08:34.420 "nvme_iov_md": false 00:08:34.420 }, 00:08:34.420 "memory_domains": [ 00:08:34.420 { 00:08:34.420 "dma_device_id": "system", 00:08:34.420 "dma_device_type": 1 00:08:34.420 } 00:08:34.420 ], 00:08:34.420 "driver_specific": { 00:08:34.420 "nvme": [ 00:08:34.420 { 00:08:34.420 "trid": { 00:08:34.420 "trtype": "TCP", 00:08:34.420 "adrfam": "IPv4", 00:08:34.420 "traddr": "10.0.0.2", 00:08:34.420 "trsvcid": "4420", 00:08:34.420 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:34.420 }, 00:08:34.420 "ctrlr_data": { 00:08:34.420 "cntlid": 1, 00:08:34.420 "vendor_id": "0x8086", 00:08:34.420 "model_number": "SPDK bdev Controller", 00:08:34.420 "serial_number": "SPDK0", 00:08:34.420 "firmware_revision": "24.09", 00:08:34.420 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:34.420 "oacs": { 00:08:34.420 "security": 0, 00:08:34.420 "format": 0, 00:08:34.420 "firmware": 0, 00:08:34.420 "ns_manage": 0 00:08:34.420 }, 00:08:34.420 
"multi_ctrlr": true, 00:08:34.420 "ana_reporting": false 00:08:34.420 }, 00:08:34.420 "vs": { 00:08:34.420 "nvme_version": "1.3" 00:08:34.420 }, 00:08:34.420 "ns_data": { 00:08:34.420 "id": 1, 00:08:34.420 "can_share": true 00:08:34.420 } 00:08:34.420 } 00:08:34.420 ], 00:08:34.420 "mp_policy": "active_passive" 00:08:34.420 } 00:08:34.420 } 00:08:34.420 ] 00:08:34.420 18:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3276999 00:08:34.420 18:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:34.420 18:03:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:34.420 Running I/O for 10 seconds... 00:08:35.355 Latency(us) 00:08:35.355 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:35.355 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:35.355 Nvme0n1 : 1.00 23456.00 91.62 0.00 0.00 0.00 0.00 0.00 00:08:35.355 =================================================================================================================== 00:08:35.355 Total : 23456.00 91.62 0.00 0.00 0.00 0.00 0.00 00:08:35.355 00:08:36.289 18:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 7e10a9be-7f60-4d16-8a81-80e7c8d71691 00:08:36.547 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:36.547 Nvme0n1 : 2.00 23586.50 92.13 0.00 0.00 0.00 0.00 0.00 00:08:36.547 =================================================================================================================== 00:08:36.547 Total : 23586.50 92.13 0.00 0.00 0.00 0.00 0.00 00:08:36.547 00:08:36.547 true 00:08:36.547 18:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7e10a9be-7f60-4d16-8a81-80e7c8d71691 00:08:36.547 18:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:36.805 18:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:36.805 18:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:36.805 18:03:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3276999 00:08:37.370 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:37.370 Nvme0n1 : 3.00 23585.33 92.13 0.00 0.00 0.00 0.00 0.00 00:08:37.370 =================================================================================================================== 00:08:37.370 Total : 23585.33 92.13 0.00 0.00 0.00 0.00 0.00 00:08:37.370 00:08:38.304 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:38.304 Nvme0n1 : 4.00 23581.50 92.12 0.00 0.00 0.00 0.00 0.00 00:08:38.304 =================================================================================================================== 00:08:38.304 Total : 23581.50 92.12 0.00 0.00 0.00 0.00 0.00 00:08:38.304 00:08:39.677 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, 
IO size: 4096) 00:08:39.677 Nvme0n1 : 5.00 23645.00 92.36 0.00 0.00 0.00 0.00 0.00 00:08:39.677 =================================================================================================================== 00:08:39.677 Total : 23645.00 92.36 0.00 0.00 0.00 0.00 0.00 00:08:39.677 00:08:40.609 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:40.609 Nvme0n1 : 6.00 23707.50 92.61 0.00 0.00 0.00 0.00 0.00 00:08:40.609 =================================================================================================================== 00:08:40.609 Total : 23707.50 92.61 0.00 0.00 0.00 0.00 0.00 00:08:40.609 00:08:41.542 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:41.542 Nvme0n1 : 7.00 23753.71 92.79 0.00 0.00 0.00 0.00 0.00 00:08:41.542 =================================================================================================================== 00:08:41.542 Total : 23753.71 92.79 0.00 0.00 0.00 0.00 0.00 00:08:41.542 00:08:42.476 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:42.476 Nvme0n1 : 8.00 23782.38 92.90 0.00 0.00 0.00 0.00 0.00 00:08:42.476 =================================================================================================================== 00:08:42.476 Total : 23782.38 92.90 0.00 0.00 0.00 0.00 0.00 00:08:42.476 00:08:43.410 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:43.410 Nvme0n1 : 9.00 23797.11 92.96 0.00 0.00 0.00 0.00 0.00 00:08:43.410 =================================================================================================================== 00:08:43.410 Total : 23797.11 92.96 0.00 0.00 0.00 0.00 0.00 00:08:43.410 00:08:44.344 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:44.345 Nvme0n1 : 10.00 23809.00 93.00 0.00 0.00 0.00 0.00 0.00 00:08:44.345 =================================================================================================================== 00:08:44.345 Total : 23809.00 93.00 0.00 0.00 0.00 0.00 0.00 00:08:44.345 00:08:44.345 00:08:44.345 Latency(us) 00:08:44.345 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:44.345 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:44.345 Nvme0n1 : 10.00 23808.02 93.00 0.00 0.00 5372.93 2699.46 11421.99 00:08:44.345 =================================================================================================================== 00:08:44.345 Total : 23808.02 93.00 0.00 0.00 5372.93 2699.46 11421.99 00:08:44.345 0 00:08:44.345 18:03:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3276982 00:08:44.345 18:03:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 3276982 ']' 00:08:44.345 18:03:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 3276982 00:08:44.345 18:03:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:08:44.345 18:03:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:44.603 18:03:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3276982 00:08:44.603 18:03:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:44.603 
18:03:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:44.603 18:03:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3276982' 00:08:44.603 killing process with pid 3276982 00:08:44.603 18:03:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 3276982 00:08:44.603 Received shutdown signal, test time was about 10.000000 seconds 00:08:44.603 00:08:44.603 Latency(us) 00:08:44.603 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:44.603 =================================================================================================================== 00:08:44.603 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:44.603 18:03:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 3276982 00:08:44.603 18:03:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:44.861 18:03:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:45.119 18:03:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7e10a9be-7f60-4d16-8a81-80e7c8d71691 00:08:45.119 18:03:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:45.119 18:03:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:45.119 18:03:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:45.119 18:03:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:45.377 [2024-07-24 18:03:38.322026] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:45.377 18:03:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7e10a9be-7f60-4d16-8a81-80e7c8d71691 00:08:45.377 18:03:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:08:45.377 18:03:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7e10a9be-7f60-4d16-8a81-80e7c8d71691 00:08:45.377 18:03:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:45.377 18:03:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:45.377 18:03:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:45.377 18:03:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:45.377 18:03:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:45.377 18:03:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:45.377 18:03:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:45.377 18:03:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:45.377 18:03:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7e10a9be-7f60-4d16-8a81-80e7c8d71691 00:08:45.635 request: 00:08:45.635 { 00:08:45.635 "uuid": "7e10a9be-7f60-4d16-8a81-80e7c8d71691", 00:08:45.635 "method": "bdev_lvol_get_lvstores", 00:08:45.635 "req_id": 1 00:08:45.635 } 00:08:45.635 Got JSON-RPC error response 00:08:45.635 response: 00:08:45.635 { 00:08:45.635 "code": -19, 00:08:45.635 "message": "No such device" 00:08:45.635 } 00:08:45.635 18:03:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:08:45.635 18:03:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:45.635 18:03:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:45.635 18:03:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:45.635 18:03:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:45.635 aio_bdev 00:08:45.635 18:03:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 8d393334-7783-4f41-9d0f-56915fa6d104 00:08:45.635 18:03:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=8d393334-7783-4f41-9d0f-56915fa6d104 00:08:45.635 18:03:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:45.635 18:03:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:08:45.635 18:03:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:45.635 18:03:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:45.635 18:03:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:45.893 18:03:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_get_bdevs -b 8d393334-7783-4f41-9d0f-56915fa6d104 -t 2000 00:08:46.152 [ 00:08:46.152 { 00:08:46.152 "name": "8d393334-7783-4f41-9d0f-56915fa6d104", 00:08:46.152 "aliases": [ 00:08:46.152 "lvs/lvol" 00:08:46.152 ], 00:08:46.152 "product_name": "Logical Volume", 00:08:46.152 "block_size": 4096, 00:08:46.152 "num_blocks": 38912, 00:08:46.152 "uuid": "8d393334-7783-4f41-9d0f-56915fa6d104", 00:08:46.152 "assigned_rate_limits": { 00:08:46.152 "rw_ios_per_sec": 0, 00:08:46.152 "rw_mbytes_per_sec": 0, 00:08:46.152 "r_mbytes_per_sec": 0, 00:08:46.152 "w_mbytes_per_sec": 0 00:08:46.152 }, 00:08:46.152 "claimed": false, 00:08:46.152 "zoned": false, 00:08:46.152 "supported_io_types": { 00:08:46.152 "read": true, 00:08:46.152 "write": true, 00:08:46.152 "unmap": true, 00:08:46.152 "flush": false, 00:08:46.152 "reset": true, 00:08:46.152 "nvme_admin": false, 00:08:46.152 "nvme_io": false, 00:08:46.152 "nvme_io_md": false, 00:08:46.152 "write_zeroes": true, 00:08:46.152 "zcopy": false, 00:08:46.152 "get_zone_info": false, 00:08:46.152 "zone_management": false, 00:08:46.152 "zone_append": false, 00:08:46.152 "compare": false, 00:08:46.152 "compare_and_write": false, 00:08:46.152 "abort": false, 00:08:46.152 "seek_hole": true, 00:08:46.152 "seek_data": true, 00:08:46.152 "copy": false, 00:08:46.152 "nvme_iov_md": false 00:08:46.152 }, 00:08:46.152 "driver_specific": { 00:08:46.152 "lvol": { 00:08:46.152 "lvol_store_uuid": "7e10a9be-7f60-4d16-8a81-80e7c8d71691", 00:08:46.152 "base_bdev": "aio_bdev", 00:08:46.152 "thin_provision": false, 00:08:46.152 "num_allocated_clusters": 38, 00:08:46.152 "snapshot": false, 00:08:46.152 "clone": false, 00:08:46.152 "esnap_clone": false 00:08:46.152 } 00:08:46.152 } 00:08:46.152 } 00:08:46.152 ] 00:08:46.152 18:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:08:46.152 18:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:46.152 18:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7e10a9be-7f60-4d16-8a81-80e7c8d71691 00:08:46.152 18:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:46.152 18:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 7e10a9be-7f60-4d16-8a81-80e7c8d71691 00:08:46.152 18:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:46.410 18:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:46.410 18:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 8d393334-7783-4f41-9d0f-56915fa6d104 00:08:46.668 18:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7e10a9be-7f60-4d16-8a81-80e7c8d71691 00:08:46.668 18:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:46.927 18:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:46.927 00:08:46.927 real 0m15.123s 00:08:46.927 user 0m14.732s 00:08:46.927 sys 0m1.352s 00:08:46.927 18:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:46.927 18:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:46.927 ************************************ 00:08:46.927 END TEST lvs_grow_clean 00:08:46.927 ************************************ 00:08:46.927 18:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:46.927 18:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:46.927 18:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:46.927 18:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:46.927 ************************************ 00:08:46.927 START TEST lvs_grow_dirty 00:08:46.927 ************************************ 00:08:46.927 18:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:08:46.927 18:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:46.927 18:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:46.927 18:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:46.927 18:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:46.927 18:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:46.927 18:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:46.927 18:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:46.927 18:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:46.927 18:03:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:47.186 18:03:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:47.186 18:03:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:47.444 18:03:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
lvs=4f878a87-bbee-46ee-8e4f-9890a0838e4e 00:08:47.444 18:03:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4f878a87-bbee-46ee-8e4f-9890a0838e4e 00:08:47.444 18:03:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:47.444 18:03:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:47.444 18:03:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:47.444 18:03:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 4f878a87-bbee-46ee-8e4f-9890a0838e4e lvol 150 00:08:47.702 18:03:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=254e4edb-7bff-4af6-a42e-9e09264a6b5a 00:08:47.702 18:03:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:47.702 18:03:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:47.961 [2024-07-24 18:03:40.841177] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:47.961 [2024-07-24 18:03:40.841227] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:47.961 true 00:08:47.961 18:03:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4f878a87-bbee-46ee-8e4f-9890a0838e4e 00:08:47.961 18:03:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:47.961 18:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:47.961 18:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:48.219 18:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 254e4edb-7bff-4af6-a42e-9e09264a6b5a 00:08:48.477 18:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:48.477 [2024-07-24 18:03:41.539284] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:48.477 18:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 
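Annotation — the lvs_grow_dirty setup traced above reduces to the sequence sketched below (placeholders: $rpc for this workspace's scripts/rpc.py invocation, aio.img for the aio_bdev backing file; every flag is copied from the trace). With 4096-byte blocks the 200 MiB file is 51200 blocks, and with 4 MiB clusters it holds 50 clusters, of which 49 remain usable as data clusters once the blobstore reserves metadata space — hence the (( data_clusters == 49 )) check above. Doubling the file and rescanning grows the AIO bdev to 102400 blocks, but the lvstore keeps reporting 49 clusters until bdev_lvol_grow_lvstore runs later in the test.

    # Back an AIO bdev with a 200 MiB file; 4096-byte blocks -> 51200 blocks.
    rm -f aio.img && truncate -s 200M aio.img
    $rpc bdev_aio_create aio.img aio_bdev 4096

    # 4 MiB clusters: 200 MiB / 4 MiB = 50 clusters, 49 of them left as
    # data clusters after blobstore metadata is reserved.
    lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
          --md-pages-per-cluster-ratio 300 aio_bdev lvs)

    # Carve a 150 MiB lvol, then double the file and rescan: the AIO bdev
    # grows in place from 51200 to 102400 blocks (200 MiB -> 400 MiB).
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)
    truncate -s 400M aio.img
    $rpc bdev_aio_rescan aio_bdev

    # Export the lvol over NVMe/TCP so bdevperf can drive I/O against it.
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420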
00:08:48.735 18:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3279580 00:08:48.735 18:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:48.735 18:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:48.735 18:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3279580 /var/tmp/bdevperf.sock 00:08:48.735 18:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 3279580 ']' 00:08:48.735 18:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:48.735 18:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:48.735 18:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:48.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:48.735 18:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:48.735 18:03:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:48.735 [2024-07-24 18:03:41.765664] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 
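Annotation — bdevperf is launched with -z, so it comes up idle with no bdevs and only opens its RPC socket (/var/tmp/bdevperf.sock); the test then waits on that socket before wiring the NVMe-oF controller in. A simplified stand-in for the waitforlisten helper is sketched below, assuming the same socket path; rpc_get_methods is used here only as a cheap liveness probe, not what common.sh actually polls.

    # Core mask 0x2 -> one reactor on core 1; 4 KiB random writes at queue
    # depth 128 for 10 seconds, matching the job lines reported below.
    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
    bdevperf_pid=$!

    # Poll the RPC socket until bdevperf answers, then attach the target.
    until ./scripts/rpc.py -s /var/tmp/bdevperf.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0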
00:08:48.735 [2024-07-24 18:03:41.765711] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3279580 ] 00:08:48.735 EAL: No free 2048 kB hugepages reported on node 1 00:08:48.993 [2024-07-24 18:03:41.819840] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.993 [2024-07-24 18:03:41.892473] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:49.560 18:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:49.560 18:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:08:49.560 18:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:49.818 Nvme0n1 00:08:49.818 18:03:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:50.076 [ 00:08:50.076 { 00:08:50.076 "name": "Nvme0n1", 00:08:50.076 "aliases": [ 00:08:50.076 "254e4edb-7bff-4af6-a42e-9e09264a6b5a" 00:08:50.076 ], 00:08:50.076 "product_name": "NVMe disk", 00:08:50.076 "block_size": 4096, 00:08:50.076 "num_blocks": 38912, 00:08:50.076 "uuid": "254e4edb-7bff-4af6-a42e-9e09264a6b5a", 00:08:50.076 "assigned_rate_limits": { 00:08:50.076 "rw_ios_per_sec": 0, 00:08:50.076 "rw_mbytes_per_sec": 0, 00:08:50.076 "r_mbytes_per_sec": 0, 00:08:50.076 "w_mbytes_per_sec": 0 00:08:50.076 }, 00:08:50.076 "claimed": false, 00:08:50.076 "zoned": false, 00:08:50.076 "supported_io_types": { 00:08:50.076 "read": true, 00:08:50.076 "write": true, 00:08:50.076 "unmap": true, 00:08:50.076 "flush": true, 00:08:50.076 "reset": true, 00:08:50.076 "nvme_admin": true, 00:08:50.076 "nvme_io": true, 00:08:50.076 "nvme_io_md": false, 00:08:50.076 "write_zeroes": true, 00:08:50.076 "zcopy": false, 00:08:50.076 "get_zone_info": false, 00:08:50.076 "zone_management": false, 00:08:50.076 "zone_append": false, 00:08:50.076 "compare": true, 00:08:50.076 "compare_and_write": true, 00:08:50.076 "abort": true, 00:08:50.076 "seek_hole": false, 00:08:50.076 "seek_data": false, 00:08:50.076 "copy": true, 00:08:50.076 "nvme_iov_md": false 00:08:50.076 }, 00:08:50.076 "memory_domains": [ 00:08:50.076 { 00:08:50.076 "dma_device_id": "system", 00:08:50.076 "dma_device_type": 1 00:08:50.076 } 00:08:50.076 ], 00:08:50.076 "driver_specific": { 00:08:50.076 "nvme": [ 00:08:50.076 { 00:08:50.076 "trid": { 00:08:50.076 "trtype": "TCP", 00:08:50.076 "adrfam": "IPv4", 00:08:50.076 "traddr": "10.0.0.2", 00:08:50.076 "trsvcid": "4420", 00:08:50.076 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:50.076 }, 00:08:50.076 "ctrlr_data": { 00:08:50.076 "cntlid": 1, 00:08:50.076 "vendor_id": "0x8086", 00:08:50.076 "model_number": "SPDK bdev Controller", 00:08:50.076 "serial_number": "SPDK0", 00:08:50.077 "firmware_revision": "24.09", 00:08:50.077 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:50.077 "oacs": { 00:08:50.077 "security": 0, 00:08:50.077 "format": 0, 00:08:50.077 "firmware": 0, 00:08:50.077 "ns_manage": 0 00:08:50.077 }, 00:08:50.077 
"multi_ctrlr": true, 00:08:50.077 "ana_reporting": false 00:08:50.077 }, 00:08:50.077 "vs": { 00:08:50.077 "nvme_version": "1.3" 00:08:50.077 }, 00:08:50.077 "ns_data": { 00:08:50.077 "id": 1, 00:08:50.077 "can_share": true 00:08:50.077 } 00:08:50.077 } 00:08:50.077 ], 00:08:50.077 "mp_policy": "active_passive" 00:08:50.077 } 00:08:50.077 } 00:08:50.077 ] 00:08:50.077 18:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3279812 00:08:50.077 18:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:50.077 18:03:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:50.077 Running I/O for 10 seconds... 00:08:51.452 Latency(us) 00:08:51.452 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:51.452 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:51.452 Nvme0n1 : 1.00 23639.00 92.34 0.00 0.00 0.00 0.00 0.00 00:08:51.452 =================================================================================================================== 00:08:51.452 Total : 23639.00 92.34 0.00 0.00 0.00 0.00 0.00 00:08:51.452 00:08:52.035 18:03:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 4f878a87-bbee-46ee-8e4f-9890a0838e4e 00:08:52.334 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:52.334 Nvme0n1 : 2.00 23731.00 92.70 0.00 0.00 0.00 0.00 0.00 00:08:52.334 =================================================================================================================== 00:08:52.334 Total : 23731.00 92.70 0.00 0.00 0.00 0.00 0.00 00:08:52.334 00:08:52.334 true 00:08:52.334 18:03:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4f878a87-bbee-46ee-8e4f-9890a0838e4e 00:08:52.334 18:03:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:52.592 18:03:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:52.592 18:03:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:52.592 18:03:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3279812 00:08:53.158 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:53.158 Nvme0n1 : 3.00 23769.67 92.85 0.00 0.00 0.00 0.00 0.00 00:08:53.158 =================================================================================================================== 00:08:53.158 Total : 23769.67 92.85 0.00 0.00 0.00 0.00 0.00 00:08:53.158 00:08:54.091 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:54.091 Nvme0n1 : 4.00 23818.00 93.04 0.00 0.00 0.00 0.00 0.00 00:08:54.091 =================================================================================================================== 00:08:54.091 Total : 23818.00 93.04 0.00 0.00 0.00 0.00 0.00 00:08:54.091 00:08:55.464 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, 
IO size: 4096) 00:08:55.464 Nvme0n1 : 5.00 23853.20 93.18 0.00 0.00 0.00 0.00 0.00 00:08:55.464 =================================================================================================================== 00:08:55.464 Total : 23853.20 93.18 0.00 0.00 0.00 0.00 0.00 00:08:55.464 00:08:56.399 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:56.399 Nvme0n1 : 6.00 23881.33 93.29 0.00 0.00 0.00 0.00 0.00 00:08:56.399 =================================================================================================================== 00:08:56.399 Total : 23881.33 93.29 0.00 0.00 0.00 0.00 0.00 00:08:56.399 00:08:57.334 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:57.334 Nvme0n1 : 7.00 23878.00 93.27 0.00 0.00 0.00 0.00 0.00 00:08:57.334 =================================================================================================================== 00:08:57.334 Total : 23878.00 93.27 0.00 0.00 0.00 0.00 0.00 00:08:57.334 00:08:58.265 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:58.265 Nvme0n1 : 8.00 23865.25 93.22 0.00 0.00 0.00 0.00 0.00 00:08:58.265 =================================================================================================================== 00:08:58.265 Total : 23865.25 93.22 0.00 0.00 0.00 0.00 0.00 00:08:58.265 00:08:59.198 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:59.198 Nvme0n1 : 9.00 23898.78 93.35 0.00 0.00 0.00 0.00 0.00 00:08:59.198 =================================================================================================================== 00:08:59.198 Total : 23898.78 93.35 0.00 0.00 0.00 0.00 0.00 00:08:59.198 00:09:00.129 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:00.129 Nvme0n1 : 10.00 23910.40 93.40 0.00 0.00 0.00 0.00 0.00 00:09:00.129 =================================================================================================================== 00:09:00.129 Total : 23910.40 93.40 0.00 0.00 0.00 0.00 0.00 00:09:00.129 00:09:00.129 00:09:00.129 Latency(us) 00:09:00.129 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:00.129 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:00.129 Nvme0n1 : 10.00 23914.52 93.42 0.00 0.00 5349.53 2184.53 10673.01 00:09:00.129 =================================================================================================================== 00:09:00.129 Total : 23914.52 93.42 0.00 0.00 5349.53 2184.53 10673.01 00:09:00.129 0 00:09:00.130 18:03:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3279580 00:09:00.130 18:03:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 3279580 ']' 00:09:00.130 18:03:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 3279580 00:09:00.130 18:03:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:09:00.130 18:03:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:00.130 18:03:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3279580 00:09:00.388 18:03:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:00.388 
18:03:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:00.388 18:03:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3279580' 00:09:00.388 killing process with pid 3279580 00:09:00.388 18:03:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 3279580 00:09:00.388 Received shutdown signal, test time was about 10.000000 seconds 00:09:00.388 00:09:00.388 Latency(us) 00:09:00.388 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:00.388 =================================================================================================================== 00:09:00.388 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:00.388 18:03:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 3279580 00:09:00.388 18:03:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:00.645 18:03:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:00.903 18:03:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4f878a87-bbee-46ee-8e4f-9890a0838e4e 00:09:00.903 18:03:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:00.903 18:03:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:00.903 18:03:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:00.903 18:03:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3276471 00:09:00.903 18:03:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3276471 00:09:01.161 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3276471 Killed "${NVMF_APP[@]}" "$@" 00:09:01.161 18:03:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:01.161 18:03:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:01.161 18:03:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:01.161 18:03:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:01.161 18:03:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:01.161 18:03:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=3281665 00:09:01.161 18:03:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 3281665 00:09:01.161 18:03:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 3281665 ']' 00:09:01.161 18:03:54 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:01.161 18:03:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:01.162 18:03:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:01.162 18:03:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:01.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:01.162 18:03:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:01.162 18:03:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:01.162 [2024-07-24 18:03:54.054704] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 00:09:01.162 [2024-07-24 18:03:54.054752] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:01.162 EAL: No free 2048 kB hugepages reported on node 1 00:09:01.162 [2024-07-24 18:03:54.111612] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.162 [2024-07-24 18:03:54.189530] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:01.162 [2024-07-24 18:03:54.189565] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:01.162 [2024-07-24 18:03:54.189572] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:01.162 [2024-07-24 18:03:54.189577] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:01.162 [2024-07-24 18:03:54.189582] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
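Annotation — this is the "dirty" half of the test: the previous target (pid 3276471) was removed with kill -9, so the lvstore was never cleanly unloaded. When the replacement target re-creates aio_bdev a few lines below, the blobstore detects the unclean shutdown and replays its metadata — the "Performing recovery on blobstore" / "Recover: blob 0x0" notices that follow. A minimal reproduction of the restart, with the netns name, socket and flags taken from this trace ($nvmfpid and $aio_file are placeholders):

    # Hard-kill the old target: the blobstore superblock stays marked dirty.
    kill -9 "$nvmfpid"

    # Bring up a fresh nvmf_tgt in the same netns and wait for its default
    # RPC socket (/var/tmp/spdk.sock) to answer.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done

    # Re-attaching the backing file triggers blobstore recovery; the grown
    # lvstore and its lvol reappear without any explicit import step.
    ./scripts/rpc.py bdev_aio_create "$aio_file" aio_bdev 4096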
00:09:01.162 [2024-07-24 18:03:54.189601] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.096 18:03:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:02.096 18:03:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:09:02.096 18:03:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:02.096 18:03:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:02.096 18:03:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:02.096 18:03:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:02.096 18:03:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:02.096 [2024-07-24 18:03:55.035348] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:02.096 [2024-07-24 18:03:55.035444] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:02.096 [2024-07-24 18:03:55.035468] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:02.096 18:03:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:02.096 18:03:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 254e4edb-7bff-4af6-a42e-9e09264a6b5a 00:09:02.096 18:03:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=254e4edb-7bff-4af6-a42e-9e09264a6b5a 00:09:02.096 18:03:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:02.096 18:03:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:09:02.096 18:03:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:02.096 18:03:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:02.096 18:03:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:02.354 18:03:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 254e4edb-7bff-4af6-a42e-9e09264a6b5a -t 2000 00:09:02.354 [ 00:09:02.354 { 00:09:02.354 "name": "254e4edb-7bff-4af6-a42e-9e09264a6b5a", 00:09:02.354 "aliases": [ 00:09:02.354 "lvs/lvol" 00:09:02.354 ], 00:09:02.354 "product_name": "Logical Volume", 00:09:02.354 "block_size": 4096, 00:09:02.354 "num_blocks": 38912, 00:09:02.354 "uuid": "254e4edb-7bff-4af6-a42e-9e09264a6b5a", 00:09:02.354 "assigned_rate_limits": { 00:09:02.354 "rw_ios_per_sec": 0, 00:09:02.354 "rw_mbytes_per_sec": 0, 00:09:02.354 "r_mbytes_per_sec": 0, 00:09:02.354 "w_mbytes_per_sec": 0 00:09:02.354 }, 00:09:02.354 "claimed": false, 00:09:02.354 "zoned": false, 
00:09:02.354 "supported_io_types": { 00:09:02.354 "read": true, 00:09:02.354 "write": true, 00:09:02.354 "unmap": true, 00:09:02.354 "flush": false, 00:09:02.354 "reset": true, 00:09:02.354 "nvme_admin": false, 00:09:02.354 "nvme_io": false, 00:09:02.354 "nvme_io_md": false, 00:09:02.354 "write_zeroes": true, 00:09:02.354 "zcopy": false, 00:09:02.354 "get_zone_info": false, 00:09:02.354 "zone_management": false, 00:09:02.354 "zone_append": false, 00:09:02.354 "compare": false, 00:09:02.354 "compare_and_write": false, 00:09:02.354 "abort": false, 00:09:02.354 "seek_hole": true, 00:09:02.354 "seek_data": true, 00:09:02.354 "copy": false, 00:09:02.354 "nvme_iov_md": false 00:09:02.354 }, 00:09:02.354 "driver_specific": { 00:09:02.354 "lvol": { 00:09:02.354 "lvol_store_uuid": "4f878a87-bbee-46ee-8e4f-9890a0838e4e", 00:09:02.354 "base_bdev": "aio_bdev", 00:09:02.354 "thin_provision": false, 00:09:02.354 "num_allocated_clusters": 38, 00:09:02.354 "snapshot": false, 00:09:02.354 "clone": false, 00:09:02.354 "esnap_clone": false 00:09:02.354 } 00:09:02.354 } 00:09:02.354 } 00:09:02.354 ] 00:09:02.354 18:03:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:09:02.354 18:03:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4f878a87-bbee-46ee-8e4f-9890a0838e4e 00:09:02.354 18:03:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:02.612 18:03:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:02.612 18:03:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4f878a87-bbee-46ee-8e4f-9890a0838e4e 00:09:02.612 18:03:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:02.870 18:03:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:02.870 18:03:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:02.870 [2024-07-24 18:03:55.871730] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:02.870 18:03:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4f878a87-bbee-46ee-8e4f-9890a0838e4e 00:09:02.870 18:03:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:09:02.870 18:03:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4f878a87-bbee-46ee-8e4f-9890a0838e4e 00:09:02.870 18:03:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:02.870 18:03:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t 
"$arg")" in 00:09:02.870 18:03:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:02.870 18:03:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:02.870 18:03:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:02.870 18:03:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:02.870 18:03:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:02.870 18:03:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:02.870 18:03:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4f878a87-bbee-46ee-8e4f-9890a0838e4e 00:09:03.127 request: 00:09:03.127 { 00:09:03.127 "uuid": "4f878a87-bbee-46ee-8e4f-9890a0838e4e", 00:09:03.127 "method": "bdev_lvol_get_lvstores", 00:09:03.127 "req_id": 1 00:09:03.127 } 00:09:03.127 Got JSON-RPC error response 00:09:03.127 response: 00:09:03.127 { 00:09:03.127 "code": -19, 00:09:03.127 "message": "No such device" 00:09:03.127 } 00:09:03.127 18:03:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:09:03.127 18:03:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:03.127 18:03:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:03.127 18:03:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:03.127 18:03:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:03.385 aio_bdev 00:09:03.385 18:03:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 254e4edb-7bff-4af6-a42e-9e09264a6b5a 00:09:03.385 18:03:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=254e4edb-7bff-4af6-a42e-9e09264a6b5a 00:09:03.385 18:03:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:03.385 18:03:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:09:03.385 18:03:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:03.385 18:03:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:03.385 18:03:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:03.385 18:03:56 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 254e4edb-7bff-4af6-a42e-9e09264a6b5a -t 2000 00:09:03.643 [ 00:09:03.643 { 00:09:03.643 "name": "254e4edb-7bff-4af6-a42e-9e09264a6b5a", 00:09:03.643 "aliases": [ 00:09:03.643 "lvs/lvol" 00:09:03.643 ], 00:09:03.643 "product_name": "Logical Volume", 00:09:03.643 "block_size": 4096, 00:09:03.643 "num_blocks": 38912, 00:09:03.643 "uuid": "254e4edb-7bff-4af6-a42e-9e09264a6b5a", 00:09:03.643 "assigned_rate_limits": { 00:09:03.643 "rw_ios_per_sec": 0, 00:09:03.643 "rw_mbytes_per_sec": 0, 00:09:03.643 "r_mbytes_per_sec": 0, 00:09:03.643 "w_mbytes_per_sec": 0 00:09:03.643 }, 00:09:03.643 "claimed": false, 00:09:03.643 "zoned": false, 00:09:03.643 "supported_io_types": { 00:09:03.643 "read": true, 00:09:03.643 "write": true, 00:09:03.643 "unmap": true, 00:09:03.643 "flush": false, 00:09:03.643 "reset": true, 00:09:03.644 "nvme_admin": false, 00:09:03.644 "nvme_io": false, 00:09:03.644 "nvme_io_md": false, 00:09:03.644 "write_zeroes": true, 00:09:03.644 "zcopy": false, 00:09:03.644 "get_zone_info": false, 00:09:03.644 "zone_management": false, 00:09:03.644 "zone_append": false, 00:09:03.644 "compare": false, 00:09:03.644 "compare_and_write": false, 00:09:03.644 "abort": false, 00:09:03.644 "seek_hole": true, 00:09:03.644 "seek_data": true, 00:09:03.644 "copy": false, 00:09:03.644 "nvme_iov_md": false 00:09:03.644 }, 00:09:03.644 "driver_specific": { 00:09:03.644 "lvol": { 00:09:03.644 "lvol_store_uuid": "4f878a87-bbee-46ee-8e4f-9890a0838e4e", 00:09:03.644 "base_bdev": "aio_bdev", 00:09:03.644 "thin_provision": false, 00:09:03.644 "num_allocated_clusters": 38, 00:09:03.644 "snapshot": false, 00:09:03.644 "clone": false, 00:09:03.644 "esnap_clone": false 00:09:03.644 } 00:09:03.644 } 00:09:03.644 } 00:09:03.644 ] 00:09:03.644 18:03:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:09:03.644 18:03:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4f878a87-bbee-46ee-8e4f-9890a0838e4e 00:09:03.644 18:03:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:03.902 18:03:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:03.902 18:03:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4f878a87-bbee-46ee-8e4f-9890a0838e4e 00:09:03.902 18:03:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:03.902 18:03:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:03.902 18:03:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 254e4edb-7bff-4af6-a42e-9e09264a6b5a 00:09:04.160 18:03:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4f878a87-bbee-46ee-8e4f-9890a0838e4e 
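Annotation — the closing assertions above are the point of the test: after the hot-remove and recovery, the lvstore must still show the grown geometry. The numbers check out: 400 MiB / 4 MiB = 100 clusters, 99 of them data clusters after metadata; the 150 MiB lvol occupies 38 clusters (num_allocated_clusters: 38 in the dump above, 150/4 rounded up), leaving 99 - 38 = 61 free. A sketch of the check using the same jq filters as the script ($rpc, $lvs and aio.img as before):

    # The lookup must fail while the base bdev is gone (error -19 above)...
    if $rpc bdev_lvol_get_lvstores -u "$lvs" 2>/dev/null; then
        echo "lvstore unexpectedly reachable" >&2; exit 1
    fi

    # ...and succeed again once aio_bdev is re-created and recovered.
    $rpc bdev_aio_create aio.img aio_bdev 4096
    free=$($rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters')
    total=$($rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters')
    (( free == 61 && total == 99 )) || exit 1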
00:09:04.418 18:03:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:04.418 18:03:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:04.418 00:09:04.418 real 0m17.532s 00:09:04.418 user 0m44.432s 00:09:04.418 sys 0m3.736s 00:09:04.418 18:03:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:04.418 18:03:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:04.418 ************************************ 00:09:04.418 END TEST lvs_grow_dirty 00:09:04.418 ************************************ 00:09:04.675 18:03:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:04.675 18:03:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:09:04.675 18:03:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:09:04.675 18:03:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:09:04.675 18:03:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:04.675 18:03:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:09:04.675 18:03:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:09:04.675 18:03:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:09:04.675 18:03:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:04.675 nvmf_trace.0 00:09:04.676 18:03:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:09:04.676 18:03:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:04.676 18:03:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:04.676 18:03:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:09:04.676 18:03:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:04.676 18:03:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:09:04.676 18:03:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:04.676 18:03:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:04.676 rmmod nvme_tcp 00:09:04.676 rmmod nvme_fabrics 00:09:04.676 rmmod nvme_keyring 00:09:04.676 18:03:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:04.676 18:03:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:09:04.676 18:03:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:09:04.676 18:03:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 3281665 ']' 00:09:04.676 18:03:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 3281665 00:09:04.676 
18:03:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 3281665 ']' 00:09:04.676 18:03:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 3281665 00:09:04.676 18:03:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:09:04.676 18:03:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:04.676 18:03:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3281665 00:09:04.676 18:03:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:04.676 18:03:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:04.676 18:03:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3281665' 00:09:04.676 killing process with pid 3281665 00:09:04.676 18:03:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 3281665 00:09:04.676 18:03:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 3281665 00:09:04.933 18:03:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:04.933 18:03:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:04.934 18:03:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:04.934 18:03:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:04.934 18:03:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:04.934 18:03:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:04.934 18:03:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:04.934 18:03:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:06.838 18:03:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:07.096 00:09:07.096 real 0m41.354s 00:09:07.096 user 1m4.703s 00:09:07.096 sys 0m9.201s 00:09:07.096 18:03:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:07.096 18:03:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:07.096 ************************************ 00:09:07.096 END TEST nvmf_lvs_grow 00:09:07.096 ************************************ 00:09:07.096 18:03:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:07.096 18:03:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:07.096 18:03:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:07.096 18:03:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:07.096 ************************************ 00:09:07.096 START TEST nvmf_bdev_io_wait 00:09:07.096 ************************************ 00:09:07.096 18:03:59 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:07.096 * Looking for test storage... 00:09:07.096 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:07.096 18:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:07.096 18:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:07.096 18:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:07.096 18:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:07.096 18:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:07.096 18:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:07.096 18:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:07.096 18:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:07.096 18:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:07.096 18:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:07.096 18:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:07.096 18:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:07.096 18:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:09:07.096 18:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:09:07.096 18:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:07.096 18:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:07.096 18:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:07.096 18:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:07.096 18:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:07.096 18:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:07.096 18:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:07.096 18:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:07.097 18:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.097 18:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.097 18:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.097 18:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:07.097 18:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.097 18:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:09:07.097 18:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:07.097 18:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:07.097 18:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:07.097 18:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:07.097 18:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:07.097 18:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:07.097 
18:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:07.097 18:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:07.097 18:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:07.097 18:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:07.097 18:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:07.097 18:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:07.097 18:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:07.097 18:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:07.097 18:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:07.097 18:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:07.097 18:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:07.097 18:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:07.097 18:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:07.097 18:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:07.097 18:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:07.097 18:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:09:07.097 18:04:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:12.362 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:12.362 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:09:12.362 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:12.362 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:12.362 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:12.362 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:12.362 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:12.362 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:09:12.362 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:12.362 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:09:12.362 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:09:12.362 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:09:12.362 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:09:12.362 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:09:12.362 18:04:05 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:09:12.362 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:12.362 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:12.362 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:12.362 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:12.362 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:12.362 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:12.362 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:12.362 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:12.362 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:12.362 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:12.362 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:12.362 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:12.362 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:12.362 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:12.362 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:12.362 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:12.362 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:12.362 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:12.362 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:12.362 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:12.362 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:12.362 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:12.362 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:12.362 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:12.362 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:12.362 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:12.362 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:12.362 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:12.362 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # 
[[ ice == unknown ]] 00:09:12.362 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:12.362 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:12.362 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:12.362 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:12.362 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:12.362 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:12.362 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:12.362 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:12.362 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:12.362 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:12.362 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:12.362 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:12.362 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:12.362 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:12.362 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:12.362 Found net devices under 0000:86:00.0: cvl_0_0 00:09:12.362 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:12.362 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:12.362 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:12.362 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:12.362 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:12.362 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:12.362 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:12.362 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:12.362 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:12.362 Found net devices under 0000:86:00.1: cvl_0_1 00:09:12.362 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:12.362 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:12.362 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:09:12.362 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:12.362 18:04:05 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:12.362 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:12.362 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:12.362 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:12.362 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:12.362 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:12.362 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:12.362 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:12.363 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:12.363 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:12.363 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:12.363 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:12.363 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:12.363 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:12.363 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:12.363 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:12.363 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:12.363 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:12.363 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:12.621 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:12.621 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:12.621 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:12.621 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:12.621 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.276 ms 00:09:12.621 00:09:12.621 --- 10.0.0.2 ping statistics --- 00:09:12.621 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:12.621 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:09:12.621 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:12.621 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:12.621 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:09:12.621 00:09:12.621 --- 10.0.0.1 ping statistics --- 00:09:12.621 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:12.621 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:09:12.621 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:12.621 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:09:12.621 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:12.621 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:12.621 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:12.621 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:12.621 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:12.621 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:12.621 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:12.621 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:12.621 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:12.621 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:12.621 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:12.621 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=3285713 00:09:12.621 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 3285713 00:09:12.621 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:12.621 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 3285713 ']' 00:09:12.621 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:12.621 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:12.621 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:12.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:12.621 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:12.621 18:04:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:12.621 [2024-07-24 18:04:05.626401] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 
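The nvmf_tcp_init trace above is the whole network bring-up for this run: one port of the dual-port E810 (cvl_0_0) is moved into a private network namespace to act as the NVMe/TCP target, the other port (cvl_0_1) stays in the root namespace as the initiator, and traffic is verified with a ping in each direction before any NVMe work starts. Condensed into a standalone sketch, using the interface and namespace names from this run (every command below appears verbatim in the trace):

    # target port lives in its own netns so target and initiator can share one host
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # NVMe/TCP port
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator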
00:09:12.621 [2024-07-24 18:04:05.626443] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:12.621 EAL: No free 2048 kB hugepages reported on node 1 00:09:12.621 [2024-07-24 18:04:05.683578] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:12.879 [2024-07-24 18:04:05.766116] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:12.879 [2024-07-24 18:04:05.766152] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:12.879 [2024-07-24 18:04:05.766158] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:12.879 [2024-07-24 18:04:05.766164] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:12.879 [2024-07-24 18:04:05.766168] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:12.879 [2024-07-24 18:04:05.766227] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:12.879 [2024-07-24 18:04:05.766318] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:12.879 [2024-07-24 18:04:05.766404] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:12.879 [2024-07-24 18:04:05.766404] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.449 18:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:13.449 18:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:09:13.449 18:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:13.449 18:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:13.449 18:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:13.449 18:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:13.449 18:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:13.449 18:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.449 18:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:13.449 18:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.449 18:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:13.449 18:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.449 18:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:13.734 18:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.734 18:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:13.734 18:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.734 18:04:06 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:13.734 [2024-07-24 18:04:06.540743] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:13.734 18:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.734 18:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:13.734 18:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.734 18:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:13.734 Malloc0 00:09:13.734 18:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.734 18:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:13.734 18:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.734 18:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:13.734 18:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.734 18:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:13.734 18:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.734 18:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:13.734 18:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.734 18:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:13.734 18:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.734 18:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:13.734 [2024-07-24 18:04:06.602149] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:13.734 18:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.734 18:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3285964 00:09:13.734 18:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:13.734 18:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:13.734 18:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3285966 00:09:13.734 18:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:13.734 18:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:13.734 18:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:13.734 18:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:13.734 { 00:09:13.734 "params": { 00:09:13.734 "name": "Nvme$subsystem", 00:09:13.734 "trtype": "$TEST_TRANSPORT", 00:09:13.734 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:13.734 "adrfam": "ipv4", 00:09:13.734 "trsvcid": "$NVMF_PORT", 00:09:13.734 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:13.734 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:13.734 "hdgst": ${hdgst:-false}, 00:09:13.734 "ddgst": ${ddgst:-false} 00:09:13.734 }, 00:09:13.734 "method": "bdev_nvme_attach_controller" 00:09:13.734 } 00:09:13.734 EOF 00:09:13.734 )") 00:09:13.734 18:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:13.734 18:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3285968 00:09:13.734 18:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:13.734 18:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:13.734 18:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:13.734 18:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:13.734 18:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:13.734 { 00:09:13.734 "params": { 00:09:13.734 "name": "Nvme$subsystem", 00:09:13.734 "trtype": "$TEST_TRANSPORT", 00:09:13.734 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:13.734 "adrfam": "ipv4", 00:09:13.734 "trsvcid": "$NVMF_PORT", 00:09:13.734 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:13.734 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:13.734 "hdgst": ${hdgst:-false}, 00:09:13.734 "ddgst": ${ddgst:-false} 00:09:13.734 }, 00:09:13.734 "method": "bdev_nvme_attach_controller" 00:09:13.734 } 00:09:13.734 EOF 00:09:13.734 )") 00:09:13.734 18:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:13.734 18:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3285971 00:09:13.734 18:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:13.734 18:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:13.734 18:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:13.734 18:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:13.734 18:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:13.734 18:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:13.734 18:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:13.734 18:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:13.734 18:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:13.734 { 00:09:13.734 "params": { 00:09:13.734 "name": "Nvme$subsystem", 00:09:13.734 "trtype": "$TEST_TRANSPORT", 00:09:13.734 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:13.734 "adrfam": "ipv4", 00:09:13.734 "trsvcid": "$NVMF_PORT", 00:09:13.734 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:13.734 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:13.734 "hdgst": ${hdgst:-false}, 00:09:13.734 "ddgst": ${ddgst:-false} 00:09:13.734 }, 00:09:13.734 "method": "bdev_nvme_attach_controller" 00:09:13.734 } 00:09:13.734 EOF 00:09:13.734 )") 00:09:13.734 18:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:13.734 18:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:13.734 18:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:13.734 18:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:13.734 18:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:13.734 { 00:09:13.734 "params": { 00:09:13.734 "name": "Nvme$subsystem", 00:09:13.734 "trtype": "$TEST_TRANSPORT", 00:09:13.734 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:13.734 "adrfam": "ipv4", 00:09:13.734 "trsvcid": "$NVMF_PORT", 00:09:13.734 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:13.734 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:13.734 "hdgst": ${hdgst:-false}, 00:09:13.734 "ddgst": ${ddgst:-false} 00:09:13.734 }, 00:09:13.734 "method": "bdev_nvme_attach_controller" 00:09:13.734 } 00:09:13.734 EOF 00:09:13.734 )") 00:09:13.734 18:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:13.734 18:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3285964 00:09:13.734 18:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:13.734 18:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:09:13.734 18:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:09:13.734 18:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:09:13.734 18:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:13.734 18:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:13.734 "params": { 00:09:13.734 "name": "Nvme1", 00:09:13.734 "trtype": "tcp", 00:09:13.734 "traddr": "10.0.0.2", 00:09:13.734 "adrfam": "ipv4", 00:09:13.734 "trsvcid": "4420", 00:09:13.734 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:13.734 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:13.734 "hdgst": false, 00:09:13.734 "ddgst": false 00:09:13.734 }, 00:09:13.734 "method": "bdev_nvme_attach_controller" 00:09:13.734 }' 00:09:13.734 18:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
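Before those bdevperf clients can attach, bdev_io_wait.sh has already provisioned the target over its RPC socket; the rpc_cmd calls traced above boil down to the following sequence (a minimal sketch using scripts/rpc.py in place of the harness's rpc_cmd wrapper; the argument strings are the ones from the trace). The tiny bdev_io pool is deliberate: it makes I/O allocation fail under load so the io_wait retry path this test is named after actually gets exercised.

    # target was started with --wait-for-rpc, so bdev options can still be changed
    ./scripts/rpc.py bdev_set_options -p 5 -c 1          # bdev_io pool of 5, cache of 1
    ./scripts/rpc.py framework_start_init
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0    # 64 MiB bdev, 512 B blocks
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420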
00:09:13.735 18:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:13.735 18:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:13.735 "params": { 00:09:13.735 "name": "Nvme1", 00:09:13.735 "trtype": "tcp", 00:09:13.735 "traddr": "10.0.0.2", 00:09:13.735 "adrfam": "ipv4", 00:09:13.735 "trsvcid": "4420", 00:09:13.735 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:13.735 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:13.735 "hdgst": false, 00:09:13.735 "ddgst": false 00:09:13.735 }, 00:09:13.735 "method": "bdev_nvme_attach_controller" 00:09:13.735 }' 18:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 18:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:13.735 "params": { 00:09:13.735 "name": "Nvme1", 00:09:13.735 "trtype": "tcp", 00:09:13.735 "traddr": "10.0.0.2", 00:09:13.735 "adrfam": "ipv4", 00:09:13.735 "trsvcid": "4420", 00:09:13.735 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:13.735 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:13.735 "hdgst": false, 00:09:13.735 "ddgst": false 00:09:13.735 }, 00:09:13.735 "method": "bdev_nvme_attach_controller" 00:09:13.735 }' 18:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 18:04:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:13.735 "params": { 00:09:13.735 "name": "Nvme1", 00:09:13.735 "trtype": "tcp", 00:09:13.735 "traddr": "10.0.0.2", 00:09:13.735 "adrfam": "ipv4", 00:09:13.735 "trsvcid": "4420", 00:09:13.735 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:13.735 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:13.735 "hdgst": false, 00:09:13.735 "ddgst": false 00:09:13.735 }, 00:09:13.735 "method": "bdev_nvme_attach_controller" 00:09:13.735 }' [2024-07-24 18:04:06.652222] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 00:09:13.735 [2024-07-24 18:04:06.652223] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 00:09:13.735
[2024-07-24 18:04:06.652275] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:13.735
[2024-07-24 18:04:06.652275] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:13.735
[2024-07-24 18:04:06.652847] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 00:09:13.735
[2024-07-24 18:04:06.652882] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:13.735
[2024-07-24 18:04:06.656883] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization...
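Each bdevperf instance above receives its controller definition on an anonymous file descriptor, which is why the command lines show --json /dev/fd/63: the JSON fragment printed by gen_nvmf_target_json is fed in through process substitution. Reduced to a single instance, the launch pattern is roughly as follows (a sketch; the real run pins write/read/flush/unmap to core masks 0x10/0x20/0x40/0x80 with shared-memory ids 1 through 4, and the backgrounding is assumed from the traced WRITE_PID handling):

    # one short bdevperf job: 128 I/Os in flight, 4 KiB each, 1 second, 256 MiB of memory
    ./build/examples/bdevperf -m 0x10 -i 1 --json <(gen_nvmf_target_json) \
        -q 128 -o 4096 -w write -t 1 -s 256 &
    WRITE_PID=$!

A queue depth of 128 against the 5-entry bdev_io pool configured earlier is the point of the test: most submissions cannot get a bdev_io immediately and must park on the io_wait queue instead of failing outright.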
00:09:13.735 [2024-07-24 18:04:06.656926] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:13.735 EAL: No free 2048 kB hugepages reported on node 1 00:09:13.735 EAL: No free 2048 kB hugepages reported on node 1 00:09:13.992 [2024-07-24 18:04:06.836701] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.992 EAL: No free 2048 kB hugepages reported on node 1 00:09:13.992 [2024-07-24 18:04:06.913803] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:09:13.992 [2024-07-24 18:04:06.936985] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.992 EAL: No free 2048 kB hugepages reported on node 1 00:09:13.992 [2024-07-24 18:04:06.996286] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.992 [2024-07-24 18:04:07.015864] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:09:13.993 [2024-07-24 18:04:07.068421] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:09:14.249 [2024-07-24 18:04:07.090709] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.249 [2024-07-24 18:04:07.174513] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:09:14.249 Running I/O for 1 seconds... 00:09:14.250 Running I/O for 1 seconds... 00:09:14.250 Running I/O for 1 seconds... 00:09:14.250 Running I/O for 1 seconds... 00:09:15.181 00:09:15.181 Latency(us) 00:09:15.181 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:15.181 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:15.181 Nvme1n1 : 1.01 12290.55 48.01 0.00 0.00 10376.24 6241.52 16976.94 00:09:15.181 =================================================================================================================== 00:09:15.181 Total : 12290.55 48.01 0.00 0.00 10376.24 6241.52 16976.94 00:09:15.181 00:09:15.181 Latency(us) 00:09:15.181 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:15.181 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:15.181 Nvme1n1 : 1.00 252993.76 988.26 0.00 0.00 503.90 205.78 628.05 00:09:15.181 =================================================================================================================== 00:09:15.181 Total : 252993.76 988.26 0.00 0.00 503.90 205.78 628.05 00:09:15.438 00:09:15.438 Latency(us) 00:09:15.438 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:15.438 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:15.438 Nvme1n1 : 1.01 11441.32 44.69 0.00 0.00 11153.35 5430.13 22469.49 00:09:15.438 =================================================================================================================== 00:09:15.439 Total : 11441.32 44.69 0.00 0.00 11153.35 5430.13 22469.49 00:09:15.439 00:09:15.439 Latency(us) 00:09:15.439 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:15.439 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:15.439 Nvme1n1 : 1.01 9837.43 38.43 0.00 0.00 12977.87 1872.46 19848.05 00:09:15.439 =================================================================================================================== 00:09:15.439 Total : 9837.43 38.43 0.00 0.00 12977.87 1872.46 19848.05 00:09:15.697 18:04:08 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3285966 00:09:15.697 18:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3285968 00:09:15.697 18:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3285971 00:09:15.697 18:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:15.697 18:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.697 18:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:15.697 18:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.697 18:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:15.697 18:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:15.697 18:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:15.697 18:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:09:15.697 18:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:15.697 18:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:09:15.697 18:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:15.697 18:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:15.697 rmmod nvme_tcp 00:09:15.697 rmmod nvme_fabrics 00:09:15.697 rmmod nvme_keyring 00:09:15.697 18:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:15.697 18:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:09:15.697 18:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:09:15.697 18:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 3285713 ']' 00:09:15.697 18:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 3285713 00:09:15.697 18:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 3285713 ']' 00:09:15.697 18:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 3285713 00:09:15.697 18:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:09:15.697 18:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:15.697 18:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3285713 00:09:15.697 18:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:15.697 18:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:15.697 18:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3285713' 00:09:15.697 killing process with pid 3285713 00:09:15.697 18:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 3285713 00:09:15.697 18:04:08 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 3285713 00:09:15.955 18:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:15.955 18:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:15.955 18:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:15.955 18:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:15.955 18:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:15.955 18:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:15.955 18:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:15.955 18:04:08 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:18.487 18:04:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:18.487 00:09:18.487 real 0m10.992s 00:09:18.487 user 0m19.442s 00:09:18.487 sys 0m5.804s 00:09:18.487 18:04:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:18.487 18:04:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:18.487 ************************************ 00:09:18.487 END TEST nvmf_bdev_io_wait 00:09:18.487 ************************************ 00:09:18.487 18:04:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:18.487 18:04:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:18.487 18:04:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:18.487 18:04:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:18.487 ************************************ 00:09:18.487 START TEST nvmf_queue_depth 00:09:18.487 ************************************ 00:09:18.487 18:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:18.487 * Looking for test storage... 
00:09:18.487 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:18.487 18:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:18.487 18:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:09:18.487 18:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:18.487 18:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:18.487 18:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:18.487 18:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:18.487 18:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:18.487 18:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:18.487 18:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:18.487 18:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:18.487 18:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:18.487 18:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:18.487 18:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:09:18.488 18:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:09:18.488 18:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:18.488 18:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:18.488 18:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:18.488 18:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:18.488 18:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:18.488 18:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:18.488 18:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:18.488 18:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:18.488 18:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.488 18:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.488 18:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.488 18:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:18.488 18:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:18.488 18:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:09:18.488 18:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:18.488 18:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:18.488 18:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:18.488 18:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:18.488 18:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:18.488 18:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:18.488 18:04:11 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:18.488 18:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:18.488 18:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:18.488 18:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:09:18.488 18:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:18.488 18:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:18.488 18:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:18.488 18:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:18.488 18:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:18.488 18:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:18.488 18:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:18.488 18:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:18.488 18:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:18.488 18:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:18.488 18:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:18.488 18:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:18.488 18:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:09:18.488 18:04:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:23.753 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:23.753 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:09:23.753 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:23.753 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:23.753 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:23.753 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:23.753 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:23.753 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:09:23.753 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:23.753 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:09:23.753 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:09:23.753 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:09:23.753 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:09:23.753 18:04:16 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:09:23.753 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:09:23.753 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:23.753 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:23.753 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:23.753 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:23.753 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:23.753 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:23.753 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:23.753 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:23.753 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:23.753 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:23.753 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:23.753 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:23.753 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:23.753 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:23.753 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:23.753 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:23.753 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:23.753 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:23.753 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:23.753 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:23.753 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:23.753 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:23.753 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:23.753 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:23.753 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:23.754 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:23.754 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:23.754 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:23.754 18:04:16 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:23.754 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:23.754 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:23.754 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:23.754 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:23.754 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:23.754 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:23.754 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:23.754 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:23.754 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:23.754 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:23.754 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:23.754 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:23.754 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:23.754 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:23.754 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:23.754 Found net devices under 0000:86:00.0: cvl_0_0 00:09:23.754 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:23.754 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:23.754 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:23.754 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:23.754 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:23.754 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:23.754 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:23.754 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:23.754 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:23.754 Found net devices under 0000:86:00.1: cvl_0_1 00:09:23.754 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:23.754 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:23.754 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:09:23.754 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:23.754 
18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:23.754 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:23.754 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:23.754 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:23.754 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:23.754 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:23.754 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:23.754 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:23.754 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:23.754 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:23.754 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:23.754 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:23.754 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:23.754 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:23.754 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:23.754 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:23.754 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:23.754 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:23.754 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:23.754 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:23.754 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:23.754 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:23.754 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:23.754 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.216 ms 00:09:23.754 00:09:23.754 --- 10.0.0.2 ping statistics --- 00:09:23.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:23.754 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:09:23.754 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:23.754 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:23.754 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.242 ms 00:09:23.754 00:09:23.754 --- 10.0.0.1 ping statistics --- 00:09:23.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:23.754 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:09:23.754 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:23.754 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:09:23.754 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:23.754 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:23.754 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:23.754 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:23.754 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:23.754 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:23.754 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:23.754 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:23.754 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:23.754 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:23.754 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:23.754 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=3289749 00:09:23.754 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 3289749 00:09:23.754 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:23.754 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 3289749 ']' 00:09:23.754 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:23.754 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:23.754 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:23.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:23.754 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:23.754 18:04:16 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:23.754 [2024-07-24 18:04:16.603778] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 
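Before starting the target, nvmf_tcp_init (common.sh@229-268 in the trace above) turns the two E810 ports into a point-to-point test topology: one port moves into a private network namespace for the target, the other stays in the root namespace for the initiator, and the two cross-namespace pings verify the path. Stripped of trace prefixes (the initial address flushes are elided), the sequence is:

ip netns add cvl_0_0_ns_spdk                          # private namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port leaves the root namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator address, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP from the peer
ping -c 1 10.0.0.2                                    # root namespace -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target namespace -> root namespace

The namespace split keeps the kernel from short-circuiting traffic between two local addresses, so the test I/O actually traverses the link between the ports.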
00:09:23.754 [2024-07-24 18:04:16.603823] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:23.754 EAL: No free 2048 kB hugepages reported on node 1 00:09:23.754 [2024-07-24 18:04:16.661530] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:23.754 [2024-07-24 18:04:16.739604] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:23.754 [2024-07-24 18:04:16.739640] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:23.754 [2024-07-24 18:04:16.739647] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:23.754 [2024-07-24 18:04:16.739653] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:23.754 [2024-07-24 18:04:16.739658] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:23.754 [2024-07-24 18:04:16.739691] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:24.320 18:04:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:24.320 18:04:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:09:24.320 18:04:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:24.320 18:04:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:24.320 18:04:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:24.578 18:04:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:24.578 18:04:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:24.578 18:04:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.578 18:04:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:24.578 [2024-07-24 18:04:17.438408] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:24.578 18:04:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.578 18:04:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:24.578 18:04:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.578 18:04:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:24.578 Malloc0 00:09:24.578 18:04:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.578 18:04:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:24.578 18:04:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.578 18:04:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:24.578 18:04:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.578 18:04:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:24.578 18:04:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.578 18:04:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:24.578 18:04:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.578 18:04:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:24.578 18:04:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.578 18:04:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:24.578 [2024-07-24 18:04:17.493870] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:24.578 18:04:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.578 18:04:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3289998 00:09:24.578 18:04:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:24.578 18:04:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:24.578 18:04:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3289998 /var/tmp/bdevperf.sock 00:09:24.578 18:04:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 3289998 ']' 00:09:24.578 18:04:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:24.578 18:04:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:24.578 18:04:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:24.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:24.578 18:04:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:24.578 18:04:17 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:24.578 [2024-07-24 18:04:17.536744] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 
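With the listener up on 10.0.0.2:4420 the target is fully configured, and bdevperf is launched as the initiator-side load generator. rpc_cmd is a thin wrapper around scripts/rpc.py, so an equivalent stand-alone sequence, assuming the default /var/tmp/spdk.sock for the target and the /var/tmp/bdevperf.sock used above for bdevperf, would be:

# target side: TCP transport, a 64 MiB RAM-backed bdev, and a subsystem exporting it
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0           # 64 MiB, 512-byte blocks
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# initiator side: bdevperf idles (-z) until a controller is attached over its RPC socket,
# then perform_tests drives 4 KiB verify I/O at queue depth 1024 for 10 seconds
build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests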
00:09:24.578 [2024-07-24 18:04:17.536782] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3289998 ] 00:09:24.578 EAL: No free 2048 kB hugepages reported on node 1 00:09:24.578 [2024-07-24 18:04:17.589278] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:24.836 [2024-07-24 18:04:17.661932] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:25.402 18:04:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:25.402 18:04:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:09:25.402 18:04:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:25.402 18:04:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.402 18:04:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:25.660 NVMe0n1 00:09:25.660 18:04:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.660 18:04:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:25.660 Running I/O for 10 seconds... 00:09:37.855 00:09:37.856 Latency(us) 00:09:37.856 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:37.856 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:37.856 Verification LBA range: start 0x0 length 0x4000 00:09:37.856 NVMe0n1 : 10.06 12563.63 49.08 0.00 0.00 81208.70 18974.23 54176.43 00:09:37.856 =================================================================================================================== 00:09:37.856 Total : 12563.63 49.08 0.00 0.00 81208.70 18974.23 54176.43 00:09:37.856 0 00:09:37.856 18:04:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3289998 00:09:37.856 18:04:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 3289998 ']' 00:09:37.856 18:04:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 3289998 00:09:37.856 18:04:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:09:37.856 18:04:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:37.856 18:04:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3289998 00:09:37.856 18:04:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:37.856 18:04:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:37.856 18:04:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3289998' 00:09:37.856 killing process with pid 3289998 00:09:37.856 18:04:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 3289998 00:09:37.856 Received shutdown 
signal, test time was about 10.000000 seconds 00:09:37.856 00:09:37.856 Latency(us) 00:09:37.856 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:37.856 =================================================================================================================== 00:09:37.856 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:37.856 18:04:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 3289998 00:09:37.856 18:04:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:37.856 18:04:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:37.856 18:04:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:37.856 18:04:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:09:37.856 18:04:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:37.856 18:04:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:09:37.856 18:04:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:37.856 18:04:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:37.856 rmmod nvme_tcp 00:09:37.856 rmmod nvme_fabrics 00:09:37.856 rmmod nvme_keyring 00:09:37.856 18:04:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:37.856 18:04:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:09:37.856 18:04:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:09:37.856 18:04:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 3289749 ']' 00:09:37.856 18:04:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 3289749 00:09:37.856 18:04:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 3289749 ']' 00:09:37.856 18:04:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 3289749 00:09:37.856 18:04:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:09:37.856 18:04:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:37.856 18:04:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3289749 00:09:37.856 18:04:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:37.856 18:04:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:37.856 18:04:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3289749' 00:09:37.856 killing process with pid 3289749 00:09:37.856 18:04:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 3289749 00:09:37.856 18:04:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 3289749 00:09:37.856 18:04:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:37.856 18:04:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:37.856 18:04:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:37.856 18:04:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:37.856 18:04:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:37.856 18:04:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:37.856 18:04:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:37.856 18:04:29 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:38.423 18:04:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:38.423 00:09:38.423 real 0m20.303s 00:09:38.423 user 0m24.931s 00:09:38.423 sys 0m5.618s 00:09:38.423 18:04:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:38.423 18:04:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:38.423 ************************************ 00:09:38.423 END TEST nvmf_queue_depth 00:09:38.423 ************************************ 00:09:38.423 18:04:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:38.423 18:04:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:38.423 18:04:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:38.423 18:04:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:38.423 ************************************ 00:09:38.423 START TEST nvmf_target_multipath 00:09:38.423 ************************************ 00:09:38.423 18:04:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:38.683 * Looking for test storage... 
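The queue-depth pass above completed successfully: 12563.63 IOPS of 4 KiB verify I/O at queue depth 1024, i.e. 12563.63 x 4 KiB ≈ 49.08 MiB/s, matching the MiB/s column in the bdevperf summary. The multipath test starting here needs a second target-side interface; the empty string tested at target/multipath.sh@45 further below ('[' -z ']') is consistent with a check on NVMF_SECOND_TARGET_IP, which nvmf_tcp_init left empty (common.sh@240), so the test self-skips. A sketch of that guard, with the variable name as an assumption:

if [ -z "$NVMF_SECOND_TARGET_IP" ]; then    # assumed variable; only one NIC pair on this host
    echo 'only one NIC for nvmf test'
    nvmftestfini                            # tear down the namespace and unload nvme modules
    exit 0
fi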
00:09:38.683 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:38.683 18:04:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:38.683 18:04:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:38.683 18:04:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:38.683 18:04:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:38.683 18:04:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:38.683 18:04:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:38.683 18:04:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:38.683 18:04:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:38.683 18:04:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:38.683 18:04:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:38.683 18:04:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:38.683 18:04:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:38.683 18:04:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:09:38.683 18:04:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:09:38.683 18:04:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:38.683 18:04:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:38.683 18:04:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:38.683 18:04:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:38.683 18:04:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:38.683 18:04:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:38.683 18:04:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:38.683 18:04:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:38.683 18:04:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.683 18:04:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.683 18:04:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.683 18:04:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:38.683 18:04:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:38.683 18:04:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:09:38.683 18:04:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:38.683 18:04:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:38.683 18:04:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:38.683 18:04:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:38.683 18:04:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:38.683 18:04:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:38.683 18:04:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:38.683 18:04:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:38.683 18:04:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:38.683 18:04:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:38.683 18:04:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:38.683 18:04:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:38.683 18:04:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:38.683 18:04:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:38.683 18:04:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:38.683 18:04:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:38.683 18:04:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:38.683 18:04:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:38.683 18:04:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:38.683 18:04:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:38.683 18:04:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:38.683 18:04:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:38.683 18:04:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:38.683 18:04:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:09:38.683 18:04:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:43.949 18:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:43.949 18:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:09:43.949 18:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:43.949 18:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:43.949 18:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:43.949 18:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:43.949 18:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:43.949 18:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:09:43.949 18:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:43.949 18:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 
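The e810/x722/mlx arrays initialized here classify candidate NICs by PCI vendor:device ID, and the later [[ 0x159b == ... ]] comparisons apply that classification per device. The IDs can be read straight from sysfs; for the first port on this host (0x159b is an E810-family part driven by ice, which is what SPDK_TEST_NVMF_NICS=e810 selects):

cat /sys/bus/pci/devices/0000:86:00.0/vendor   # prints 0x8086 (Intel)
cat /sys/bus/pci/devices/0000:86:00.0/device   # prints 0x159b (E810 family)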
00:09:43.949 18:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:09:43.949 18:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:09:43.949 18:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:09:43.949 18:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:09:43.949 18:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:09:43.949 18:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:43.949 18:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:43.949 18:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:43.949 18:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:43.949 18:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:43.949 18:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:43.949 18:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:43.949 18:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:43.949 18:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:43.949 18:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:43.949 18:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:43.949 18:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:43.949 18:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:43.949 18:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:43.949 18:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:43.949 18:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:43.949 18:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:43.949 18:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:43.949 18:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:43.949 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:43.949 18:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:43.949 18:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:43.949 18:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:43.949 18:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:43.949 18:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:43.949 18:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:43.949 18:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:43.949 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:43.949 18:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:43.949 18:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:43.949 18:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:43.949 18:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:43.949 18:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:43.949 18:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:43.949 18:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:43.949 18:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:43.949 18:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:43.949 18:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:43.949 18:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:43.949 18:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:43.949 18:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:43.949 18:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:43.949 18:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:43.949 18:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:43.949 Found net devices under 0000:86:00.0: cvl_0_0 00:09:43.949 18:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:43.949 18:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:43.949 18:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:43.949 18:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:43.949 18:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:43.949 18:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:43.949 18:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:43.949 18:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:43.949 18:04:36 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:43.949 Found net devices under 0000:86:00.1: cvl_0_1 00:09:43.949 18:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:43.949 18:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:43.949 18:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:09:43.949 18:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:43.949 18:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:43.949 18:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:43.949 18:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:43.949 18:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:43.949 18:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:43.950 18:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:43.950 18:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:43.950 18:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:43.950 18:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:43.950 18:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:43.950 18:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:43.950 18:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:43.950 18:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:43.950 18:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:43.950 18:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:43.950 18:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:43.950 18:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:43.950 18:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:43.950 18:04:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:43.950 18:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:44.209 18:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:44.209 18:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:44.209 PING 
10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:44.209 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms 00:09:44.209 00:09:44.209 --- 10.0.0.2 ping statistics --- 00:09:44.209 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:44.209 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:09:44.209 18:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:44.209 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:44.209 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.071 ms 00:09:44.209 00:09:44.209 --- 10.0.0.1 ping statistics --- 00:09:44.209 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:44.209 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:09:44.209 18:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:44.209 18:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:09:44.209 18:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:44.209 18:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:44.209 18:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:44.209 18:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:44.209 18:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:44.209 18:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:44.209 18:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:44.209 18:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:09:44.209 18:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:09:44.209 only one NIC for nvmf test 00:09:44.209 18:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:09:44.209 18:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:44.209 18:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:09:44.209 18:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:44.209 18:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:09:44.209 18:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:44.209 18:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:44.209 rmmod nvme_tcp 00:09:44.209 rmmod nvme_fabrics 00:09:44.209 rmmod nvme_keyring 00:09:44.209 18:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:44.209 18:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:09:44.209 18:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:09:44.209 18:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:09:44.209 18:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:44.209 18:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:44.209 18:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:44.209 18:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:44.209 18:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:44.209 18:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:44.210 18:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:44.210 18:04:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:46.116 18:04:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:46.405 18:04:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:09:46.405 18:04:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:09:46.405 18:04:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:46.405 18:04:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:09:46.405 18:04:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:46.405 18:04:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:09:46.405 18:04:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:46.405 18:04:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:46.405 18:04:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:46.405 18:04:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:09:46.405 18:04:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:09:46.405 18:04:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:09:46.405 18:04:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:46.405 18:04:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:46.405 18:04:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:46.405 18:04:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:46.405 18:04:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:46.405 18:04:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:46.406 18:04:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:46.406 18:04:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:46.406 18:04:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:46.406 00:09:46.406 real 0m7.800s 
00:09:46.406 user 0m1.634s 00:09:46.406 sys 0m4.149s 00:09:46.406 18:04:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:46.406 18:04:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:46.406 ************************************ 00:09:46.406 END TEST nvmf_target_multipath 00:09:46.406 ************************************ 00:09:46.406 18:04:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:46.406 18:04:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:46.406 18:04:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:46.406 18:04:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:46.406 ************************************ 00:09:46.406 START TEST nvmf_zcopy 00:09:46.406 ************************************ 00:09:46.406 18:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:46.406 * Looking for test storage... 00:09:46.406 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:46.406 18:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:46.406 18:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:46.406 18:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:46.406 18:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:46.406 18:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:46.406 18:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:46.406 18:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:46.406 18:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:46.406 18:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:46.406 18:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:46.406 18:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:46.406 18:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:46.406 18:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:09:46.406 18:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:09:46.406 18:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:46.406 18:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:46.406 18:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:46.406 18:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:46.406 18:04:39 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:46.406 18:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:46.406 18:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:46.406 18:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:46.406 18:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.406 18:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.406 18:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.406 18:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:46.406 18:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.406 18:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:09:46.406 18:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:46.406 18:04:39 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:46.406 18:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:46.406 18:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:46.406 18:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:46.406 18:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:46.406 18:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:46.406 18:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:46.406 18:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:46.406 18:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:46.406 18:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:46.406 18:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:46.406 18:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:46.406 18:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:46.406 18:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:46.406 18:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:46.406 18:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:46.406 18:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:46.406 18:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:46.406 18:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:09:46.406 18:04:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:51.684 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:51.684 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:09:51.684 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:51.684 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:51.684 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:51.684 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:51.684 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:51.684 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:09:51.684 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:51.684 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:09:51.684 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:09:51.684 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:09:51.684 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:09:51.684 18:04:44 
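The _remove_spdk_ns call above tears down namespaces left over from the previous test so the ports return to the root namespace before the new topology is built. Its effect is roughly the loop below; this is a sketch of the behavior, not the harness's literal implementation:

# Delete leftover SPDK test namespaces; the kernel then returns any
# physical NICs inside them to the root network namespace.
while read -r ns _; do
    [[ $ns == *_ns_spdk ]] && sudo ip netns delete "$ns"
done < <(ip netns list)
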
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:09:51.684 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:09:51.684 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:51.684 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:51.684 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:51.684 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:51.684 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:51.684 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:51.684 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:51.684 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:51.684 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:51.684 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:51.684 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:51.684 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:51.684 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:51.684 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:51.684 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:51.684 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:51.684 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:51.684 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:51.684 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:51.684 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:51.684 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:51.684 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:51.684 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:51.684 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:51.684 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:51.684 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:51.684 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:51.684 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:51.684 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:51.684 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 
-- # [[ ice == unbound ]] 00:09:51.684 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:51.684 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:51.684 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:51.684 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:51.684 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:51.684 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:51.684 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:51.684 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:51.684 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:51.684 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:51.684 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:51.684 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:51.684 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:51.684 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:51.684 Found net devices under 0000:86:00.0: cvl_0_0 00:09:51.684 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:51.684 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:51.684 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:51.684 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:51.684 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:51.684 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:51.684 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:51.684 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:51.684 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:51.684 Found net devices under 0000:86:00.1: cvl_0_1 00:09:51.684 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:51.684 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:51.684 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:09:51.684 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:51.684 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:51.684 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:51.684 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:51.684 18:04:44 
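The "Found net devices under 0000:86:00.x" lines above come from gather_supported_nvmf_pci_devs resolving each matched PCI function to its kernel netdev through the sysfs glob visible in the xtrace (pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)). The same lookup done by hand, using the first address this run reported:

# Resolve a PCI function to its net device name, mirroring the
# sysfs glob nvmf/common.sh uses above.
pci=0000:86:00.0                  # first e810 port in this log
for path in "/sys/bus/pci/devices/$pci/net/"*; do
    [[ -e $path ]] || continue    # skip if the glob matched nothing
    echo "Found net devices under $pci: ${path##*/}"   # -> cvl_0_0
done
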
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:51.684 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:51.684 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:51.684 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:51.684 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:51.684 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:51.684 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:51.684 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:51.684 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:51.684 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:51.684 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:51.684 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:51.684 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:51.684 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:51.684 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:51.684 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:51.684 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:51.684 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:51.685 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:51.685 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:51.685 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.280 ms 00:09:51.685 00:09:51.685 --- 10.0.0.2 ping statistics --- 00:09:51.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:51.685 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:09:51.685 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:51.685 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:51.685 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:09:51.685 00:09:51.685 --- 10.0.0.1 ping statistics --- 00:09:51.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:51.685 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:09:51.685 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:51.685 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:09:51.685 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:51.685 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:51.685 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:51.685 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:51.685 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:51.685 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:51.685 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:51.685 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:51.685 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:51.685 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:51.685 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:51.685 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=3298649 00:09:51.685 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 3298649 00:09:51.685 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 3298649 ']' 00:09:51.685 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:51.685 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:51.685 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:51.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:51.685 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:51.685 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:51.685 18:04:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:51.685 [2024-07-24 18:04:44.559015] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 
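Stepping back, the nvmf_tcp_init sequence above builds a self-contained two-port topology: cvl_0_0 moves into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), port 4420 is opened in the firewall, and the two pings confirm reachability in both directions. Condensed into plain commands, with addresses and names exactly as this run used them:

# Condensed sketch of the nvmf_tcp_init commands logged above.
NS=cvl_0_0_ns_spdk
sudo ip -4 addr flush cvl_0_0
sudo ip -4 addr flush cvl_0_1
sudo ip netns add "$NS"
sudo ip link set cvl_0_0 netns "$NS"              # target-side port
sudo ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side
sudo ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
sudo ip link set cvl_0_1 up
sudo ip netns exec "$NS" ip link set cvl_0_0 up
sudo ip netns exec "$NS" ip link set lo up
sudo iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                # initiator -> target
sudo ip netns exec "$NS" ping -c 1 10.0.0.1       # target -> initiator
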
00:09:51.685 [2024-07-24 18:04:44.559057] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:51.685 EAL: No free 2048 kB hugepages reported on node 1 00:09:51.685 [2024-07-24 18:04:44.616947] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:51.685 [2024-07-24 18:04:44.693999] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:51.685 [2024-07-24 18:04:44.694032] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:51.685 [2024-07-24 18:04:44.694040] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:51.685 [2024-07-24 18:04:44.694046] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:51.685 [2024-07-24 18:04:44.694051] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:51.685 [2024-07-24 18:04:44.694066] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:52.622 18:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:52.622 18:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:09:52.622 18:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:52.622 18:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:52.622 18:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:52.622 18:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:52.622 18:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:52.622 18:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:52.622 18:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.622 18:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:52.622 [2024-07-24 18:04:45.372529] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:52.622 18:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.622 18:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:52.622 18:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.622 18:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:52.622 18:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.622 18:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:52.622 18:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.622 18:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:52.622 [2024-07-24 18:04:45.388675] 
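nvmfappstart, visible above as the ip netns exec ... nvmf_tgt command plus waitforlisten, launches the target pinned to core 1 and blocks until the RPC socket answers. Roughly, outside the harness; the polling loop is a sketch of waitforlisten's behavior, and rpc.py's location is inferred from the checkout path printed throughout this log:

# Start the target inside the namespace, as nvmfappstart does above.
sudo ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -m 0x2 &        # -m 0x2: pin the reactor to core 1
nvmfpid=$!                         # kept so the app can be killed later

# /var/tmp/spdk.sock is a Unix socket, so it is reachable from the
# root namespace even though the target runs inside the netns.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
until sudo "$rpc" -t 1 rpc_get_methods &>/dev/null; do
    sleep 0.5
done
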
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:52.622 18:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.622 18:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:52.622 18:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.622 18:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:52.622 18:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.622 18:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:52.622 18:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.622 18:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:52.622 malloc0 00:09:52.622 18:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.622 18:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:52.622 18:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.622 18:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:52.622 18:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.622 18:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:52.622 18:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:52.622 18:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:09:52.622 18:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:09:52.622 18:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:52.622 18:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:52.622 { 00:09:52.622 "params": { 00:09:52.622 "name": "Nvme$subsystem", 00:09:52.622 "trtype": "$TEST_TRANSPORT", 00:09:52.622 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:52.622 "adrfam": "ipv4", 00:09:52.622 "trsvcid": "$NVMF_PORT", 00:09:52.622 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:52.622 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:52.622 "hdgst": ${hdgst:-false}, 00:09:52.622 "ddgst": ${ddgst:-false} 00:09:52.622 }, 00:09:52.622 "method": "bdev_nvme_attach_controller" 00:09:52.622 } 00:09:52.622 EOF 00:09:52.622 )") 00:09:52.622 18:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:09:52.622 18:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
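The rpc_cmd calls above are thin wrappers around scripts/rpc.py (an assumption based on SPDK's autotest_common.sh, not shown in this log). Issued by hand, the target configuration for this test is, in order and with the same arguments the log records:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# TCP transport with zero-copy enabled (the option under test here)
sudo "$rpc" nvmf_create_transport -t tcp -o -c 0 --zcopy
# Subsystem: any host allowed (-a), fixed serial, up to 10 namespaces
sudo "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
sudo "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
sudo "$rpc" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
# 32 MB RAM-backed bdev with 4096-byte blocks, exported as namespace 1
sudo "$rpc" bdev_malloc_create 32 4096 -b malloc0
sudo "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
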
00:09:52.622 18:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:09:52.622 18:04:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:52.622 "params": { 00:09:52.622 "name": "Nvme1", 00:09:52.622 "trtype": "tcp", 00:09:52.622 "traddr": "10.0.0.2", 00:09:52.622 "adrfam": "ipv4", 00:09:52.622 "trsvcid": "4420", 00:09:52.622 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:52.622 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:52.622 "hdgst": false, 00:09:52.622 "ddgst": false 00:09:52.622 }, 00:09:52.622 "method": "bdev_nvme_attach_controller" 00:09:52.622 }' 00:09:52.622 [2024-07-24 18:04:45.481348] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 00:09:52.622 [2024-07-24 18:04:45.481399] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3298896 ] 00:09:52.622 EAL: No free 2048 kB hugepages reported on node 1 00:09:52.622 [2024-07-24 18:04:45.534072] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:52.622 [2024-07-24 18:04:45.607781] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:52.881 Running I/O for 10 seconds... 00:10:02.856 00:10:02.856 Latency(us) 00:10:02.856 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:02.856 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:02.856 Verification LBA range: start 0x0 length 0x1000 00:10:02.856 Nvme1n1 : 10.01 8932.13 69.78 0.00 0.00 14289.75 1747.63 23343.30 00:10:02.856 =================================================================================================================== 00:10:02.856 Total : 8932.13 69.78 0.00 0.00 14289.75 1747.63 23343.30 00:10:03.115 18:04:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3300550 00:10:03.115 18:04:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:10:03.115 18:04:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:03.115 18:04:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:03.115 18:04:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:03.115 18:04:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:10:03.115 18:04:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:10:03.115 18:04:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:03.115 18:04:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:03.115 { 00:10:03.115 "params": { 00:10:03.115 "name": "Nvme$subsystem", 00:10:03.115 "trtype": "$TEST_TRANSPORT", 00:10:03.116 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:03.116 "adrfam": "ipv4", 00:10:03.116 "trsvcid": "$NVMF_PORT", 00:10:03.116 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:03.116 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:03.116 "hdgst": ${hdgst:-false}, 00:10:03.116 "ddgst": ${ddgst:-false} 00:10:03.116 }, 00:10:03.116 "method": "bdev_nvme_attach_controller" 00:10:03.116 } 00:10:03.116 EOF 00:10:03.116 )") 00:10:03.116 18:04:56 
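gen_nvmf_target_json above renders the heredoc into the JSON config bdevperf reads from /dev/fd/62. Only the inner bdev_nvme_attach_controller entry is printed in the log; the surrounding "subsystems" envelope below is the standard SPDK JSON-config shape, assumed here rather than quoted. A standalone sketch equivalent to the first run:

# Standalone sketch of the first bdevperf run above (-t 10 -q 128
# -w verify -o 8192). Only the "method"/"params" object appears
# verbatim in the log; the envelope is the usual SPDK config layout.
sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    -t 10 -q 128 -w verify -o 8192 --json <(cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
)

The second run (perfpid=3300550 above) reuses the same JSON on /dev/fd/63 and changes only the I/O pattern: -t 5 -q 128 -w randrw -M 50 -o 8192.
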
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat
00:10:03.116 [2024-07-24 18:04:56.026485] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:03.116 [2024-07-24 18:04:56.026521] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:03.116 18:04:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq .
00:10:03.116 18:04:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=,
00:10:03.116 18:04:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4", "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode1", "hostnqn": "nqn.2016-06.io.spdk:host1", "hdgst": false, "ddgst": false }, "method": "bdev_nvme_attach_controller" }'
00:10:03.116 [2024-07-24 18:04:56.048180] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization...
00:10:03.116 [2024-07-24 18:04:56.048219] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3300550 ]
00:10:03.116 EAL: No free 2048 kB hugepages reported on node 1
00:10:03.116 [2024-07-24 18:04:56.102608] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:03.116 [2024-07-24 18:04:56.180192] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:10:03.376 Running I/O for 5 seconds...
[the subsystem.c:2058 "Requested NSID 1 already in use" / nvmf_rpc.c:1553 "Unable to add namespace" pair above repeats with successive timestamps, interleaved with these startup notices and continuing throughout the 5-second randrw run, one pair per nvmf_subsystem_add_ns attempt; the repetitions between 18:04:56.034 and 18:04:57.360 (elapsed stamps 00:10:03.116 through 00:10:04.416) are elided here]
00:10:04.416 [2024-07-24 18:04:57.368697] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:04.416 [2024-07-24 18:04:57.368715] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:04.416 [2024-07-24 18:04:57.377801] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:04.416 [2024-07-24 18:04:57.377818] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.416 [2024-07-24 18:04:57.387010] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.416 [2024-07-24 18:04:57.387028] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.416 [2024-07-24 18:04:57.396088] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.416 [2024-07-24 18:04:57.396106] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.416 [2024-07-24 18:04:57.405033] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.416 [2024-07-24 18:04:57.405051] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.416 [2024-07-24 18:04:57.414140] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.416 [2024-07-24 18:04:57.414158] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.416 [2024-07-24 18:04:57.422602] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.416 [2024-07-24 18:04:57.422619] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.416 [2024-07-24 18:04:57.431890] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.416 [2024-07-24 18:04:57.431907] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.416 [2024-07-24 18:04:57.440798] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.416 [2024-07-24 18:04:57.440815] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.416 [2024-07-24 18:04:57.449769] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.416 [2024-07-24 18:04:57.449787] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.416 [2024-07-24 18:04:57.458802] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.416 [2024-07-24 18:04:57.458820] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.416 [2024-07-24 18:04:57.467858] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.416 [2024-07-24 18:04:57.467875] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.416 [2024-07-24 18:04:57.477024] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.416 [2024-07-24 18:04:57.477042] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.416 [2024-07-24 18:04:57.486087] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.416 [2024-07-24 18:04:57.486105] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.416 [2024-07-24 18:04:57.495317] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.416 [2024-07-24 18:04:57.495334] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.675 [2024-07-24 18:04:57.504446] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.675 [2024-07-24 18:04:57.504463] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.675 [2024-07-24 18:04:57.513501] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.675 [2024-07-24 18:04:57.513518] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.675 [2024-07-24 18:04:57.522574] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.675 [2024-07-24 18:04:57.522592] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.675 [2024-07-24 18:04:57.531122] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.675 [2024-07-24 18:04:57.531138] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.675 [2024-07-24 18:04:57.540344] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.675 [2024-07-24 18:04:57.540362] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.675 [2024-07-24 18:04:57.549350] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.675 [2024-07-24 18:04:57.549368] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.675 [2024-07-24 18:04:57.558622] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.675 [2024-07-24 18:04:57.558643] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.675 [2024-07-24 18:04:57.567737] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.675 [2024-07-24 18:04:57.567754] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.675 [2024-07-24 18:04:57.576727] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.675 [2024-07-24 18:04:57.576755] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.675 [2024-07-24 18:04:57.586332] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.675 [2024-07-24 18:04:57.586350] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.675 [2024-07-24 18:04:57.595426] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.675 [2024-07-24 18:04:57.595444] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.675 [2024-07-24 18:04:57.604390] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.675 [2024-07-24 18:04:57.604407] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.675 [2024-07-24 18:04:57.613319] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.675 [2024-07-24 18:04:57.613335] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.675 [2024-07-24 18:04:57.622627] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.675 [2024-07-24 18:04:57.622645] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.675 [2024-07-24 18:04:57.631718] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.675 [2024-07-24 18:04:57.631736] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.675 [2024-07-24 18:04:57.640834] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.675 [2024-07-24 18:04:57.640851] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.675 [2024-07-24 18:04:57.650359] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.675 [2024-07-24 18:04:57.650377] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.675 [2024-07-24 18:04:57.658856] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.675 [2024-07-24 18:04:57.658873] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.675 [2024-07-24 18:04:57.668073] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.675 [2024-07-24 18:04:57.668091] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.675 [2024-07-24 18:04:57.677883] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.676 [2024-07-24 18:04:57.677901] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.676 [2024-07-24 18:04:57.686512] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.676 [2024-07-24 18:04:57.686529] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.676 [2024-07-24 18:04:57.695179] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.676 [2024-07-24 18:04:57.695196] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.676 [2024-07-24 18:04:57.704687] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.676 [2024-07-24 18:04:57.704706] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.676 [2024-07-24 18:04:57.713194] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.676 [2024-07-24 18:04:57.713211] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.676 [2024-07-24 18:04:57.722191] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.676 [2024-07-24 18:04:57.722208] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.676 [2024-07-24 18:04:57.731228] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.676 [2024-07-24 18:04:57.731250] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.676 [2024-07-24 18:04:57.739735] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.676 [2024-07-24 18:04:57.739752] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.676 [2024-07-24 18:04:57.748883] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.676 [2024-07-24 18:04:57.748900] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.935 [2024-07-24 18:04:57.758052] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.935 [2024-07-24 18:04:57.758070] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.935 [2024-07-24 18:04:57.767476] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.935 [2024-07-24 18:04:57.767498] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.935 [2024-07-24 18:04:57.776813] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.935 [2024-07-24 18:04:57.776830] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.935 [2024-07-24 18:04:57.786173] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.935 [2024-07-24 18:04:57.786190] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.935 [2024-07-24 18:04:57.795378] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.935 [2024-07-24 18:04:57.795395] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.935 [2024-07-24 18:04:57.803831] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.935 [2024-07-24 18:04:57.803848] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.935 [2024-07-24 18:04:57.813146] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.935 [2024-07-24 18:04:57.813165] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.935 [2024-07-24 18:04:57.821677] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.935 [2024-07-24 18:04:57.821696] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.935 [2024-07-24 18:04:57.830857] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.935 [2024-07-24 18:04:57.830875] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.935 [2024-07-24 18:04:57.840067] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.935 [2024-07-24 18:04:57.840085] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.935 [2024-07-24 18:04:57.849114] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.935 [2024-07-24 18:04:57.849131] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.935 [2024-07-24 18:04:57.855922] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.935 [2024-07-24 18:04:57.855938] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.935 [2024-07-24 18:04:57.866711] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.935 [2024-07-24 18:04:57.866728] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.935 [2024-07-24 18:04:57.876014] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.935 [2024-07-24 18:04:57.876031] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.935 [2024-07-24 18:04:57.884727] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.935 [2024-07-24 18:04:57.884744] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.935 [2024-07-24 18:04:57.894243] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.935 [2024-07-24 18:04:57.894260] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.935 [2024-07-24 18:04:57.902839] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.935 [2024-07-24 18:04:57.902859] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.935 [2024-07-24 18:04:57.911911] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.935 [2024-07-24 18:04:57.911929] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.935 [2024-07-24 18:04:57.920966] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.935 [2024-07-24 18:04:57.920983] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.935 [2024-07-24 18:04:57.930024] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.935 [2024-07-24 18:04:57.930041] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.935 [2024-07-24 18:04:57.938449] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.935 [2024-07-24 18:04:57.938466] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.935 [2024-07-24 18:04:57.947298] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.935 [2024-07-24 18:04:57.947315] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.935 [2024-07-24 18:04:57.956303] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.935 [2024-07-24 18:04:57.956320] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.935 [2024-07-24 18:04:57.965813] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.935 [2024-07-24 18:04:57.965831] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.935 [2024-07-24 18:04:57.974514] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.935 [2024-07-24 18:04:57.974546] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.935 [2024-07-24 18:04:57.984089] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.935 [2024-07-24 18:04:57.984106] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.935 [2024-07-24 18:04:57.993178] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.935 [2024-07-24 18:04:57.993194] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.935 [2024-07-24 18:04:58.002084] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.935 [2024-07-24 18:04:58.002102] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.935 [2024-07-24 18:04:58.011074] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.935 [2024-07-24 18:04:58.011092] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.193 [2024-07-24 18:04:58.019538] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.193 [2024-07-24 18:04:58.019555] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.193 [2024-07-24 18:04:58.029133] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.193 [2024-07-24 18:04:58.029151] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.193 [2024-07-24 18:04:58.037621] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.193 [2024-07-24 18:04:58.037638] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.193 [2024-07-24 18:04:58.046470] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.193 [2024-07-24 18:04:58.046488] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.193 [2024-07-24 18:04:58.055834] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.193 [2024-07-24 18:04:58.055851] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.193 [2024-07-24 18:04:58.064905] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.193 [2024-07-24 18:04:58.064921] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.194 [2024-07-24 18:04:58.073989] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.194 [2024-07-24 18:04:58.074010] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.194 [2024-07-24 18:04:58.083023] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.194 [2024-07-24 18:04:58.083040] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.194 [2024-07-24 18:04:58.091905] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.194 [2024-07-24 18:04:58.091923] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.194 [2024-07-24 18:04:58.100950] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.194 [2024-07-24 18:04:58.100966] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.194 [2024-07-24 18:04:58.110016] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.194 [2024-07-24 18:04:58.110033] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.194 [2024-07-24 18:04:58.118894] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.194 [2024-07-24 18:04:58.118911] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.194 [2024-07-24 18:04:58.127814] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.194 [2024-07-24 18:04:58.127831] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.194 [2024-07-24 18:04:58.136873] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.194 [2024-07-24 18:04:58.136889] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.194 [2024-07-24 18:04:58.145474] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.194 [2024-07-24 18:04:58.145496] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.194 [2024-07-24 18:04:58.154988] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.194 [2024-07-24 18:04:58.155006] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.194 [2024-07-24 18:04:58.163448] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.194 [2024-07-24 18:04:58.163466] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.194 [2024-07-24 18:04:58.172975] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.194 [2024-07-24 18:04:58.172992] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.194 [2024-07-24 18:04:58.181470] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.194 [2024-07-24 18:04:58.181487] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.194 [2024-07-24 18:04:58.190410] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.194 [2024-07-24 18:04:58.190426] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.194 [2024-07-24 18:04:58.199206] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.194 [2024-07-24 18:04:58.199223] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.194 [2024-07-24 18:04:58.208164] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.194 [2024-07-24 18:04:58.208181] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.194 [2024-07-24 18:04:58.217116] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.194 [2024-07-24 18:04:58.217134] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.194 [2024-07-24 18:04:58.226024] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.194 [2024-07-24 18:04:58.226042] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.194 [2024-07-24 18:04:58.234939] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.194 [2024-07-24 18:04:58.234956] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.194 [2024-07-24 18:04:58.244539] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.194 [2024-07-24 18:04:58.244556] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.194 [2024-07-24 18:04:58.253638] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.194 [2024-07-24 18:04:58.253654] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.194 [2024-07-24 18:04:58.262686] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.194 [2024-07-24 18:04:58.262703] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.194 [2024-07-24 18:04:58.271682] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.194 [2024-07-24 18:04:58.271699] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.452 [2024-07-24 18:04:58.280680] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.452 [2024-07-24 18:04:58.280707] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.452 [2024-07-24 18:04:58.289916] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.452 [2024-07-24 18:04:58.289932] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.452 [2024-07-24 18:04:58.298890] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.452 [2024-07-24 18:04:58.298906] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.452 [2024-07-24 18:04:58.308036] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.452 [2024-07-24 18:04:58.308053] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.452 [2024-07-24 18:04:58.317277] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.452 [2024-07-24 18:04:58.317294] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.452 [2024-07-24 18:04:58.325725] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.452 [2024-07-24 18:04:58.325742] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.452 [2024-07-24 18:04:58.334343] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.452 [2024-07-24 18:04:58.334359] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.452 [2024-07-24 18:04:58.343360] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.452 [2024-07-24 18:04:58.343377] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.452 [2024-07-24 18:04:58.352363] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.452 [2024-07-24 18:04:58.352380] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.452 [2024-07-24 18:04:58.361419] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.452 [2024-07-24 18:04:58.361436] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.452 [2024-07-24 18:04:58.370452] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.452 [2024-07-24 18:04:58.370469] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.452 [2024-07-24 18:04:58.379584] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.452 [2024-07-24 18:04:58.379600] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.452 [2024-07-24 18:04:58.388906] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.452 [2024-07-24 18:04:58.388923] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.452 [2024-07-24 18:04:58.398196] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.452 [2024-07-24 18:04:58.398213] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.452 [2024-07-24 18:04:58.407094] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.452 [2024-07-24 18:04:58.407111] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.452 [2024-07-24 18:04:58.416060] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.452 [2024-07-24 18:04:58.416076] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.452 [2024-07-24 18:04:58.425178] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.452 [2024-07-24 18:04:58.425195] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.452 [2024-07-24 18:04:58.434326] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.452 [2024-07-24 18:04:58.434343] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.452 [2024-07-24 18:04:58.443425] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.452 [2024-07-24 18:04:58.443442] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.452 [2024-07-24 18:04:58.452484] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.452 [2024-07-24 18:04:58.452506] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.452 [2024-07-24 18:04:58.461569] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.452 [2024-07-24 18:04:58.461586] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.452 [2024-07-24 18:04:58.471219] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.452 [2024-07-24 18:04:58.471235] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.452 [2024-07-24 18:04:58.479737] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.452 [2024-07-24 18:04:58.479763] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.452 [2024-07-24 18:04:58.489019] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.452 [2024-07-24 18:04:58.489036] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.452 [2024-07-24 18:04:58.497449] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.452 [2024-07-24 18:04:58.497466] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.452 [2024-07-24 18:04:58.506396] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.452 [2024-07-24 18:04:58.506413] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.452 [2024-07-24 18:04:58.514931] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.452 [2024-07-24 18:04:58.514947] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.452 [2024-07-24 18:04:58.524255] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.452 [2024-07-24 18:04:58.524272] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.452 [2024-07-24 18:04:58.533325] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.452 [2024-07-24 18:04:58.533342] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.710 [2024-07-24 18:04:58.542390] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.710 [2024-07-24 18:04:58.542407] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.710 [2024-07-24 18:04:58.551090] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.710 [2024-07-24 18:04:58.551107] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.710 [2024-07-24 18:04:58.560050] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.711 [2024-07-24 18:04:58.560066] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.711 [2024-07-24 18:04:58.569306] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.711 [2024-07-24 18:04:58.569323] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.711 [2024-07-24 18:04:58.578316] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.711 [2024-07-24 18:04:58.578333] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.711 [2024-07-24 18:04:58.587221] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.711 [2024-07-24 18:04:58.587238] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.711 [2024-07-24 18:04:58.596971] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.711 [2024-07-24 18:04:58.596988] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.711 [2024-07-24 18:04:58.606013] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.711 [2024-07-24 18:04:58.606030] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.711 [2024-07-24 18:04:58.615023] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.711 [2024-07-24 18:04:58.615039] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.711 [2024-07-24 18:04:58.624879] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.711 [2024-07-24 18:04:58.624896] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.711 [2024-07-24 18:04:58.633360] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.711 [2024-07-24 18:04:58.633377] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.711 [2024-07-24 18:04:58.642671] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.711 [2024-07-24 18:04:58.642688] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.711 [2024-07-24 18:04:58.651629] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.711 [2024-07-24 18:04:58.651646] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.711 [2024-07-24 18:04:58.660236] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.711 [2024-07-24 18:04:58.660253] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.711 [2024-07-24 18:04:58.669178] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.711 [2024-07-24 18:04:58.669196] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.711 [2024-07-24 18:04:58.678266] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.711 [2024-07-24 18:04:58.678283] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.711 [2024-07-24 18:04:58.687749] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.711 [2024-07-24 18:04:58.687765] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.711 [2024-07-24 18:04:58.696228] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.711 [2024-07-24 18:04:58.696245] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.711 [2024-07-24 18:04:58.705276] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.711 [2024-07-24 18:04:58.705292] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.711 [2024-07-24 18:04:58.714385] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.711 [2024-07-24 18:04:58.714403] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.711 [2024-07-24 18:04:58.723267] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.711 [2024-07-24 18:04:58.723284] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.711 [2024-07-24 18:04:58.732135] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.711 [2024-07-24 18:04:58.732154] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.711 [2024-07-24 18:04:58.740462] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.711 [2024-07-24 18:04:58.740479] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.711 [2024-07-24 18:04:58.749320] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.711 [2024-07-24 18:04:58.749337] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.711 [2024-07-24 18:04:58.758503] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.711 [2024-07-24 18:04:58.758521] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.711 [2024-07-24 18:04:58.767400] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.711 [2024-07-24 18:04:58.767418] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.711 [2024-07-24 18:04:58.776513] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.711 [2024-07-24 18:04:58.776531] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.711 [2024-07-24 18:04:58.785095] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.711 [2024-07-24 18:04:58.785112] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.970 [2024-07-24 18:04:58.794786] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.970 [2024-07-24 18:04:58.794803] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.970 [2024-07-24 18:04:58.803469] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.970 [2024-07-24 18:04:58.803487] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.970 [2024-07-24 18:04:58.811941] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.970 [2024-07-24 18:04:58.811959] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.970 [2024-07-24 18:04:58.821083] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.970 [2024-07-24 18:04:58.821101] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.970 [2024-07-24 18:04:58.830012] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.970 [2024-07-24 18:04:58.830030] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.970 [2024-07-24 18:04:58.839605] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.970 [2024-07-24 18:04:58.839623] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.970 [2024-07-24 18:04:58.848885] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.970 [2024-07-24 18:04:58.848902] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.970 [2024-07-24 18:04:58.857908] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.970 [2024-07-24 18:04:58.857926] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.970 [2024-07-24 18:04:58.867086] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.970 [2024-07-24 18:04:58.867103] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.970 [2024-07-24 18:04:58.875804] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.970 [2024-07-24 18:04:58.875822] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.970 [2024-07-24 18:04:58.884873] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.970 [2024-07-24 18:04:58.884891] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.970 [2024-07-24 18:04:58.893879] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.970 [2024-07-24 18:04:58.893897] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.970 [2024-07-24 18:04:58.902620] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.970 [2024-07-24 18:04:58.902637] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.970 [2024-07-24 18:04:58.911712] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.970 [2024-07-24 18:04:58.911729] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.970 [2024-07-24 18:04:58.920208] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.970 [2024-07-24 18:04:58.920229] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.970 [2024-07-24 18:04:58.929174] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.970 [2024-07-24 18:04:58.929192] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.970 [2024-07-24 18:04:58.938247] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.970 [2024-07-24 18:04:58.938264] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.970 [2024-07-24 18:04:58.946948] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.970 [2024-07-24 18:04:58.946965] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.970 [2024-07-24 18:04:58.955962] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.970 [2024-07-24 18:04:58.955979] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.970 [2024-07-24 18:04:58.965144] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.970 [2024-07-24 18:04:58.965162] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.970 [2024-07-24 18:04:58.973598] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.970 [2024-07-24 18:04:58.973615] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.970 [2024-07-24 18:04:58.982880] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.970 [2024-07-24 18:04:58.982898] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.970 [2024-07-24 18:04:58.992000] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.970 [2024-07-24 18:04:58.992018] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.970 [2024-07-24 18:04:59.001151] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.970 [2024-07-24 18:04:59.001169] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.970 [2024-07-24 18:04:59.010159] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.970 [2024-07-24 18:04:59.010176] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.970 [2024-07-24 18:04:59.018608] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.970 [2024-07-24 18:04:59.018626] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.970 [2024-07-24 18:04:59.027577] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.970 [2024-07-24 18:04:59.027595] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.970 [2024-07-24 18:04:59.036580] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.970 [2024-07-24 18:04:59.036597] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.970 [2024-07-24 18:04:59.045598] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.970 [2024-07-24 18:04:59.045616] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.229 [2024-07-24 18:04:59.054708] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.229 [2024-07-24 18:04:59.054726] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.229 [2024-07-24 18:04:59.063932] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.229 [2024-07-24 18:04:59.063950] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.229 [2024-07-24 18:04:59.072514] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.229 [2024-07-24 18:04:59.072532] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.229 [2024-07-24 18:04:59.081713] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:06.229 [2024-07-24 18:04:59.081730] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair repeats with fresh timestamps roughly every 9 ms, from 18:04:59.090851 through 18:05:01.349182 (elapsed 00:10:06.229 to 00:10:08.312); the identical repetitions are elided here ...]
00:10:08.312 
00:10:08.312 Latency(us)
00:10:08.312 Device Information          : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:10:08.312 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:10:08.312 	 Nvme1n1             :       5.00   17085.38     133.48       0.00     0.00    7485.00    3292.40   21595.67
00:10:08.312 ===================================================================================================================
00:10:08.312 Total               :              17085.38     133.48       0.00     0.00    7485.00    3292.40   21595.67
[... the error pair resumes at roughly 8 ms intervals, from 18:05:01.358020 through 18:05:01.540116 (elapsed 00:10:08.312 to 00:10:08.571), while the remaining queued attempts drain; repetitions elided ...]
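The error flood condensed above is the expected behavior of this test step, not a target failure: the harness keeps issuing nvmf_subsystem_add_ns RPCs for NSID 1 while that NSID is still attached, so subsystem.c rejects each attempt and nvmf_rpc.c reports it. The same collision can be provoked against any running SPDK target in a few RPCs. The sketch below is illustrative only; it assumes an SPDK checkout with scripts/rpc.py talking to the default RPC socket, and the bdev name malloc0 is made up for the example, not taken from this log.

#!/usr/bin/env bash
# Hypothetical reproduction of "Requested NSID 1 already in use".
RPC=./scripts/rpc.py                               # assumes a target on the default RPC socket
NQN=nqn.2016-06.io.spdk:cnode1

"$RPC" bdev_malloc_create 64 512 -b malloc0        # 64 MiB ram bdev, 512 B blocks (name assumed)
"$RPC" nvmf_create_subsystem "$NQN" -a             # -a: allow any host to connect
"$RPC" nvmf_subsystem_add_ns "$NQN" malloc0 -n 1   # first claim of NSID 1 succeeds

# Every further claim of NSID 1 now fails, and the target logs the same pair
# of messages seen above (subsystem.c: "Requested NSID 1 already in use",
# nvmf_rpc.c: "Unable to add namespace"):
"$RPC" nvmf_subsystem_add_ns "$NQN" malloc0 -n 1 || echo "NSID 1 collision, as expected"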
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.571 [2024-07-24 18:05:01.540106] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:08.571 [2024-07-24 18:05:01.540116] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3300550) - No such process 00:10:08.571 18:05:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3300550 00:10:08.571 18:05:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:08.571 18:05:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.571 18:05:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:08.571 18:05:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.571 18:05:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:08.571 18:05:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.571 18:05:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:08.571 delay0 00:10:08.571 18:05:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.571 18:05:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:08.571 18:05:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.571 18:05:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:08.571 18:05:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.571 18:05:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:10:08.571 EAL: No free 2048 kB hugepages reported on node 1 00:10:08.829 [2024-07-24 18:05:01.692628] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:15.389 [2024-07-24 18:05:07.791653] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1b670 is same with the state(5) to be set 00:10:15.389 Initializing NVMe Controllers 00:10:15.389 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:15.389 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:15.389 Initialization complete. Launching workers. 
00:10:15.389 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 130 00:10:15.389 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 397, failed to submit 53 00:10:15.389 success 203, unsuccess 194, failed 0 00:10:15.389 18:05:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:15.389 18:05:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:10:15.389 18:05:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:15.389 18:05:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:10:15.389 18:05:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:15.389 18:05:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:10:15.389 18:05:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:15.389 18:05:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:15.389 rmmod nvme_tcp 00:10:15.389 rmmod nvme_fabrics 00:10:15.389 rmmod nvme_keyring 00:10:15.389 18:05:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:15.389 18:05:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:10:15.389 18:05:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:10:15.389 18:05:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 3298649 ']' 00:10:15.389 18:05:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 3298649 00:10:15.389 18:05:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 3298649 ']' 00:10:15.389 18:05:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 3298649 00:10:15.389 18:05:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:10:15.389 18:05:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:15.389 18:05:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3298649 00:10:15.389 18:05:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:15.389 18:05:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:15.389 18:05:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3298649' 00:10:15.389 killing process with pid 3298649 00:10:15.389 18:05:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 3298649 00:10:15.389 18:05:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 3298649 00:10:15.389 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:15.389 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:15.389 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:15.389 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:15.389 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:15.389 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:10:15.389 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:15.389 18:05:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:17.328 18:05:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:17.328 00:10:17.328 real 0m30.827s 00:10:17.328 user 0m42.171s 00:10:17.328 sys 0m9.767s 00:10:17.328 18:05:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:17.328 18:05:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:17.328 ************************************ 00:10:17.328 END TEST nvmf_zcopy 00:10:17.328 ************************************ 00:10:17.328 18:05:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:17.328 18:05:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:17.328 18:05:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:17.328 18:05:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:17.328 ************************************ 00:10:17.328 START TEST nvmf_nmic 00:10:17.328 ************************************ 00:10:17.328 18:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:17.328 * Looking for test storage... 00:10:17.328 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:17.328 18:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:17.328 18:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:17.328 18:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:17.328 18:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:17.328 18:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:17.328 18:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:17.328 18:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:17.328 18:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:17.328 18:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:17.328 18:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:17.328 18:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:17.328 18:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:17.328 18:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:10:17.328 18:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:10:17.328 18:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:10:17.328 18:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:17.328 18:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:17.328 18:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:17.328 18:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:17.328 18:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:17.328 18:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:17.328 18:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:17.328 18:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.328 18:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.328 18:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.328 18:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:17.329 18:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.329 18:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:10:17.329 18:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:17.329 18:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:17.329 18:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:17.329 18:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:17.329 18:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:17.329 18:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:17.329 18:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:17.329 18:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:17.329 18:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:17.329 18:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:17.329 18:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:17.329 18:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:17.329 18:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:17.329 18:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:17.329 18:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:17.329 18:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:17.329 18:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:17.329 18:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:17.329 18:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:17.329 18:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:17.329 18:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:17.329 18:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:10:17.329 18:05:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:22.598 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:22.598 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:10:22.598 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:22.598 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@292 -- # pci_net_devs=() 00:10:22.598 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:22.598 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:22.598 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:22.598 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:10:22.598 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:22.598 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:10:22.598 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:10:22.598 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:10:22.598 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:10:22.598 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:10:22.598 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:10:22.598 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:22.598 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:22.598 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:22.598 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:22.598 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:22.598 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:22.598 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:22.598 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:22.598 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:22.598 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:22.598 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:22.598 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:22.598 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:22.598 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:22.598 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:22.598 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:22.598 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:22.598 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:22.598 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:22.598 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:22.598 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # [[ 
ice == unknown ]] 00:10:22.598 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:22.598 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:22.598 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:22.598 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:22.598 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:22.598 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:22.598 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:22.598 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:22.598 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:22.598 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:22.598 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:22.598 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:22.598 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:22.598 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:22.598 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:22.598 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:22.598 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:22.598 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:22.598 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:22.598 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:22.598 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:22.598 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:22.598 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:22.598 Found net devices under 0000:86:00.0: cvl_0_0 00:10:22.598 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:22.598 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:22.599 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:22.599 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:22.599 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:22.599 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:22.599 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:22.599 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:22.599 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:22.599 Found net devices under 0000:86:00.1: cvl_0_1 00:10:22.599 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:22.599 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:22.599 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:10:22.599 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:22.599 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:22.599 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:22.599 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:22.599 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:22.599 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:22.599 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:22.599 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:22.599 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:22.599 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:22.599 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:22.599 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:22.599 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:22.599 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:22.599 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:22.599 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:22.599 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:22.599 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:22.599 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:22.599 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:22.599 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:22.599 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:22.599 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:22.599 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:22.599 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms 00:10:22.599 00:10:22.599 --- 10.0.0.2 ping statistics --- 00:10:22.599 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:22.599 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:10:22.599 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:22.599 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:22.599 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:10:22.599 00:10:22.599 --- 10.0.0.1 ping statistics --- 00:10:22.599 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:22.599 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:10:22.599 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:22.599 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:10:22.599 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:22.599 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:22.599 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:22.599 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:22.599 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:22.599 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:22.599 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:22.858 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:22.858 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:22.858 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:22.858 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:22.858 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=3306082 00:10:22.858 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 3306082 00:10:22.858 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:22.858 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 3306082 ']' 00:10:22.858 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:22.858 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:22.858 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:22.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:22.858 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:22.858 18:05:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:22.858 [2024-07-24 18:05:15.736682] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 
00:10:22.858 [2024-07-24 18:05:15.736726] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:22.858 EAL: No free 2048 kB hugepages reported on node 1 00:10:22.858 [2024-07-24 18:05:15.793257] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:22.858 [2024-07-24 18:05:15.874914] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:22.858 [2024-07-24 18:05:15.874949] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:22.858 [2024-07-24 18:05:15.874956] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:22.858 [2024-07-24 18:05:15.874962] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:22.858 [2024-07-24 18:05:15.874966] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:22.858 [2024-07-24 18:05:15.875024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:22.858 [2024-07-24 18:05:15.875117] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:22.858 [2024-07-24 18:05:15.875204] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:22.858 [2024-07-24 18:05:15.875205] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:23.794 18:05:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:23.794 18:05:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:10:23.794 18:05:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:23.794 18:05:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:23.794 18:05:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:23.794 18:05:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:23.794 18:05:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:23.794 18:05:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.794 18:05:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:23.794 [2024-07-24 18:05:16.570803] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:23.794 18:05:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.794 18:05:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:23.794 18:05:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.794 18:05:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:23.794 Malloc0 00:10:23.794 18:05:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.794 18:05:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:23.794 18:05:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:23.794 18:05:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:23.794 18:05:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.794 18:05:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:23.794 18:05:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.794 18:05:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:23.794 18:05:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.794 18:05:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:23.795 18:05:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.795 18:05:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:23.795 [2024-07-24 18:05:16.622289] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:23.795 18:05:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.795 18:05:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:23.795 test case1: single bdev can't be used in multiple subsystems 00:10:23.795 18:05:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:23.795 18:05:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.795 18:05:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:23.795 18:05:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.795 18:05:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:23.795 18:05:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.795 18:05:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:23.795 18:05:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.795 18:05:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:23.795 18:05:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:23.795 18:05:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.795 18:05:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:23.795 [2024-07-24 18:05:16.646176] bdev.c:8111:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:23.795 [2024-07-24 18:05:16.646194] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:23.795 [2024-07-24 18:05:16.646202] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:23.795 request: 00:10:23.795 { 00:10:23.795 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:23.795 "namespace": { 
00:10:23.795 "bdev_name": "Malloc0", 00:10:23.795 "no_auto_visible": false 00:10:23.795 }, 00:10:23.795 "method": "nvmf_subsystem_add_ns", 00:10:23.795 "req_id": 1 00:10:23.795 } 00:10:23.795 Got JSON-RPC error response 00:10:23.795 response: 00:10:23.795 { 00:10:23.795 "code": -32602, 00:10:23.795 "message": "Invalid parameters" 00:10:23.795 } 00:10:23.795 18:05:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:23.795 18:05:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:23.795 18:05:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:23.795 18:05:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:23.795 Adding namespace failed - expected result. 00:10:23.795 18:05:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:23.795 test case2: host connect to nvmf target in multiple paths 00:10:23.795 18:05:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:23.795 18:05:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.795 18:05:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:23.795 [2024-07-24 18:05:16.658312] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:23.795 18:05:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.795 18:05:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:24.730 18:05:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:26.104 18:05:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:26.104 18:05:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:10:26.104 18:05:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:26.104 18:05:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:26.104 18:05:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:10:28.004 18:05:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:28.004 18:05:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:28.004 18:05:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:28.004 18:05:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:28.004 18:05:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:28.004 18:05:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 
00:10:28.004 18:05:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:28.004 [global] 00:10:28.004 thread=1 00:10:28.004 invalidate=1 00:10:28.004 rw=write 00:10:28.004 time_based=1 00:10:28.004 runtime=1 00:10:28.004 ioengine=libaio 00:10:28.004 direct=1 00:10:28.004 bs=4096 00:10:28.004 iodepth=1 00:10:28.005 norandommap=0 00:10:28.005 numjobs=1 00:10:28.005 00:10:28.005 verify_dump=1 00:10:28.005 verify_backlog=512 00:10:28.005 verify_state_save=0 00:10:28.005 do_verify=1 00:10:28.005 verify=crc32c-intel 00:10:28.005 [job0] 00:10:28.005 filename=/dev/nvme0n1 00:10:28.005 Could not set queue depth (nvme0n1) 00:10:28.262 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:28.262 fio-3.35 00:10:28.262 Starting 1 thread 00:10:29.634 00:10:29.634 job0: (groupid=0, jobs=1): err= 0: pid=3307154: Wed Jul 24 18:05:22 2024 00:10:29.634 read: IOPS=1004, BW=4020KiB/s (4116kB/s)(4080KiB/1015msec) 00:10:29.634 slat (nsec): min=7571, max=32346, avg=8700.67, stdev=2231.59 00:10:29.634 clat (usec): min=198, max=41889, avg=822.38, stdev=4912.65 00:10:29.634 lat (usec): min=206, max=41911, avg=831.08, stdev=4914.30 00:10:29.634 clat percentiles (usec): 00:10:29.634 | 1.00th=[ 206], 5.00th=[ 215], 10.00th=[ 215], 20.00th=[ 219], 00:10:29.634 | 30.00th=[ 219], 40.00th=[ 221], 50.00th=[ 223], 60.00th=[ 223], 00:10:29.634 | 70.00th=[ 225], 80.00th=[ 227], 90.00th=[ 231], 95.00th=[ 233], 00:10:29.634 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41681], 00:10:29.634 | 99.99th=[41681] 00:10:29.635 write: IOPS=1008, BW=4035KiB/s (4132kB/s)(4096KiB/1015msec); 0 zone resets 00:10:29.635 slat (nsec): min=10272, max=39812, avg=11318.49, stdev=1726.09 00:10:29.635 clat (usec): min=129, max=332, avg=145.44, stdev=10.67 00:10:29.635 lat (usec): min=139, max=371, avg=156.76, stdev=11.21 00:10:29.635 clat percentiles (usec): 00:10:29.635 | 1.00th=[ 135], 5.00th=[ 137], 10.00th=[ 139], 20.00th=[ 141], 00:10:29.635 | 30.00th=[ 141], 40.00th=[ 143], 50.00th=[ 143], 60.00th=[ 145], 00:10:29.635 | 70.00th=[ 147], 80.00th=[ 149], 90.00th=[ 153], 95.00th=[ 159], 00:10:29.635 | 99.00th=[ 184], 99.50th=[ 192], 99.90th=[ 243], 99.95th=[ 334], 00:10:29.635 | 99.99th=[ 334] 00:10:29.635 bw ( KiB/s): min= 8192, max= 8192, per=100.00%, avg=8192.00, stdev= 0.00, samples=1 00:10:29.635 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:29.635 lat (usec) : 250=99.07%, 500=0.15%, 750=0.05% 00:10:29.635 lat (msec) : 50=0.73% 00:10:29.635 cpu : usr=1.28%, sys=1.97%, ctx=2044, majf=0, minf=2 00:10:29.635 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:29.635 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.635 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.635 issued rwts: total=1020,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:29.635 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:29.635 00:10:29.635 Run status group 0 (all jobs): 00:10:29.635 READ: bw=4020KiB/s (4116kB/s), 4020KiB/s-4020KiB/s (4116kB/s-4116kB/s), io=4080KiB (4178kB), run=1015-1015msec 00:10:29.635 WRITE: bw=4035KiB/s (4132kB/s), 4035KiB/s-4035KiB/s (4132kB/s-4132kB/s), io=4096KiB (4194kB), run=1015-1015msec 00:10:29.635 00:10:29.635 Disk stats (read/write): 00:10:29.635 nvme0n1: ios=1067/1024, merge=0/0, ticks=734/147, in_queue=881, 
util=91.18% 00:10:29.635 18:05:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:29.635 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:29.635 18:05:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:29.635 18:05:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:10:29.635 18:05:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:29.635 18:05:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:29.635 18:05:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:29.635 18:05:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:29.635 18:05:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:10:29.635 18:05:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:29.635 18:05:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:29.635 18:05:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:29.635 18:05:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:10:29.635 18:05:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:29.635 18:05:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:10:29.635 18:05:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:29.635 18:05:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:29.635 rmmod nvme_tcp 00:10:29.635 rmmod nvme_fabrics 00:10:29.635 rmmod nvme_keyring 00:10:29.635 18:05:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:29.635 18:05:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:10:29.635 18:05:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:10:29.635 18:05:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 3306082 ']' 00:10:29.635 18:05:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 3306082 00:10:29.635 18:05:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 3306082 ']' 00:10:29.635 18:05:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 3306082 00:10:29.635 18:05:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:10:29.635 18:05:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:29.635 18:05:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3306082 00:10:29.635 18:05:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:29.635 18:05:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:29.635 18:05:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3306082' 00:10:29.635 killing process with pid 3306082 00:10:29.635 18:05:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # 
kill 3306082 00:10:29.635 18:05:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 3306082 00:10:29.894 18:05:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:29.894 18:05:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:29.894 18:05:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:29.894 18:05:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:29.894 18:05:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:29.894 18:05:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:29.894 18:05:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:29.894 18:05:22 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:32.443 18:05:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:32.443 00:10:32.443 real 0m14.784s 00:10:32.443 user 0m34.903s 00:10:32.443 sys 0m4.819s 00:10:32.443 18:05:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:32.443 18:05:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:32.443 ************************************ 00:10:32.443 END TEST nvmf_nmic 00:10:32.443 ************************************ 00:10:32.443 18:05:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:32.443 18:05:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:32.443 18:05:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:32.443 18:05:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:32.443 ************************************ 00:10:32.443 START TEST nvmf_fio_target 00:10:32.443 ************************************ 00:10:32.443 18:05:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:32.443 * Looking for test storage... 
00:10:32.443 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:32.443 18:05:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:32.443 18:05:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:10:32.443 18:05:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:32.443 18:05:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:32.443 18:05:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:32.443 18:05:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:32.443 18:05:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:32.443 18:05:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:32.443 18:05:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:32.443 18:05:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:32.443 18:05:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:32.443 18:05:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:32.443 18:05:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:10:32.443 18:05:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:10:32.443 18:05:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:32.443 18:05:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:32.443 18:05:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:32.443 18:05:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:32.443 18:05:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:32.443 18:05:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:32.443 18:05:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:32.443 18:05:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:32.443 18:05:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.443 18:05:25 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.444 18:05:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.444 18:05:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:32.444 18:05:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.444 18:05:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:10:32.444 18:05:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:32.444 18:05:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:32.444 18:05:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:32.444 18:05:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:32.444 18:05:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:32.444 18:05:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:32.444 18:05:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:32.444 18:05:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:32.444 18:05:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:32.444 18:05:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:32.444 18:05:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:32.444 18:05:25 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:32.444 18:05:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:32.444 18:05:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:32.444 18:05:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:32.444 18:05:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:32.444 18:05:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:32.444 18:05:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:32.444 18:05:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:32.444 18:05:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:32.444 18:05:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:32.444 18:05:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:32.444 18:05:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:10:32.444 18:05:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:37.716 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:37.716 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:37.716 
18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:37.716 Found net devices under 0000:86:00.0: cvl_0_0 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:37.716 Found net devices under 0000:86:00.1: cvl_0_1 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:37.716 18:05:30 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:37.716 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:37.716 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:10:37.716 00:10:37.716 --- 10.0.0.2 ping statistics --- 00:10:37.716 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:37.716 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:37.716 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:37.716 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:10:37.716 00:10:37.716 --- 10.0.0.1 ping statistics --- 00:10:37.716 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:37.716 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=3310817 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 3310817 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 3310817 ']' 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:37.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:37.716 18:05:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:37.716 [2024-07-24 18:05:30.559485] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 
00:10:37.716 [2024-07-24 18:05:30.559533] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:37.716 EAL: No free 2048 kB hugepages reported on node 1 00:10:37.716 [2024-07-24 18:05:30.617565] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:37.716 [2024-07-24 18:05:30.697168] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:37.716 [2024-07-24 18:05:30.697204] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:37.716 [2024-07-24 18:05:30.697211] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:37.716 [2024-07-24 18:05:30.697219] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:37.716 [2024-07-24 18:05:30.697224] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:37.716 [2024-07-24 18:05:30.697261] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:37.716 [2024-07-24 18:05:30.697357] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:37.716 [2024-07-24 18:05:30.697448] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:37.716 [2024-07-24 18:05:30.697450] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:38.647 18:05:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:38.647 18:05:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:10:38.647 18:05:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:38.647 18:05:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:38.647 18:05:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:38.647 18:05:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:38.647 18:05:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:38.647 [2024-07-24 18:05:31.580397] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:38.647 18:05:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:38.904 18:05:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:38.904 18:05:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:39.162 18:05:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:39.162 18:05:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:39.162 18:05:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:39.162 18:05:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:39.419 18:05:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:39.419 18:05:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:39.677 18:05:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:39.934 18:05:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:39.934 18:05:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:39.934 18:05:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:39.934 18:05:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:40.191 18:05:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:40.191 18:05:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:40.453 18:05:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:40.453 18:05:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:40.453 18:05:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:40.714 18:05:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:40.714 18:05:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:40.971 18:05:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:41.229 [2024-07-24 18:05:34.074261] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:41.229 18:05:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:41.229 18:05:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:41.486 18:05:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:42.912 18:05:35 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:42.912 18:05:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:10:42.912 18:05:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:42.912 18:05:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:10:42.912 18:05:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:10:42.912 18:05:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:10:44.815 18:05:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:44.815 18:05:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:44.815 18:05:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:44.815 18:05:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:10:44.815 18:05:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:44.815 18:05:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:10:44.815 18:05:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:44.815 [global] 00:10:44.815 thread=1 00:10:44.815 invalidate=1 00:10:44.815 rw=write 00:10:44.815 time_based=1 00:10:44.815 runtime=1 00:10:44.815 ioengine=libaio 00:10:44.815 direct=1 00:10:44.815 bs=4096 00:10:44.815 iodepth=1 00:10:44.815 norandommap=0 00:10:44.815 numjobs=1 00:10:44.815 00:10:44.815 verify_dump=1 00:10:44.815 verify_backlog=512 00:10:44.815 verify_state_save=0 00:10:44.815 do_verify=1 00:10:44.815 verify=crc32c-intel 00:10:44.815 [job0] 00:10:44.815 filename=/dev/nvme0n1 00:10:44.815 [job1] 00:10:44.815 filename=/dev/nvme0n2 00:10:44.815 [job2] 00:10:44.815 filename=/dev/nvme0n3 00:10:44.815 [job3] 00:10:44.815 filename=/dev/nvme0n4 00:10:44.815 Could not set queue depth (nvme0n1) 00:10:44.815 Could not set queue depth (nvme0n2) 00:10:44.815 Could not set queue depth (nvme0n3) 00:10:44.815 Could not set queue depth (nvme0n4) 00:10:45.074 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:45.074 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:45.074 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:45.074 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:45.074 fio-3.35 00:10:45.074 Starting 4 threads 00:10:46.477 00:10:46.477 job0: (groupid=0, jobs=1): err= 0: pid=3312245: Wed Jul 24 18:05:39 2024 00:10:46.477 read: IOPS=21, BW=86.7KiB/s (88.8kB/s)(88.0KiB/1015msec) 00:10:46.477 slat (nsec): min=10154, max=27898, avg=22123.50, stdev=2959.52 00:10:46.477 clat (usec): min=40902, max=41987, avg=41028.59, stdev=224.59 00:10:46.477 lat (usec): min=40925, max=42015, avg=41050.72, stdev=225.26 00:10:46.477 clat percentiles (usec): 00:10:46.477 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 
20.00th=[41157], 00:10:46.478 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:46.478 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:46.478 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:46.478 | 99.99th=[42206] 00:10:46.478 write: IOPS=504, BW=2018KiB/s (2066kB/s)(2048KiB/1015msec); 0 zone resets 00:10:46.478 slat (nsec): min=10832, max=42859, avg=12349.97, stdev=2002.36 00:10:46.478 clat (usec): min=138, max=484, avg=194.21, stdev=29.52 00:10:46.478 lat (usec): min=150, max=496, avg=206.56, stdev=29.70 00:10:46.478 clat percentiles (usec): 00:10:46.478 | 1.00th=[ 151], 5.00th=[ 157], 10.00th=[ 165], 20.00th=[ 174], 00:10:46.478 | 30.00th=[ 180], 40.00th=[ 186], 50.00th=[ 192], 60.00th=[ 196], 00:10:46.478 | 70.00th=[ 206], 80.00th=[ 212], 90.00th=[ 225], 95.00th=[ 237], 00:10:46.478 | 99.00th=[ 289], 99.50th=[ 359], 99.90th=[ 486], 99.95th=[ 486], 00:10:46.478 | 99.99th=[ 486] 00:10:46.478 bw ( KiB/s): min= 4087, max= 4087, per=17.64%, avg=4087.00, stdev= 0.00, samples=1 00:10:46.478 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:10:46.478 lat (usec) : 250=93.26%, 500=2.62% 00:10:46.478 lat (msec) : 50=4.12% 00:10:46.478 cpu : usr=0.39%, sys=0.89%, ctx=536, majf=0, minf=2 00:10:46.478 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:46.478 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.478 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.478 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:46.478 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:46.478 job1: (groupid=0, jobs=1): err= 0: pid=3312263: Wed Jul 24 18:05:39 2024 00:10:46.478 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:10:46.478 slat (nsec): min=6995, max=21704, avg=8023.66, stdev=1014.47 00:10:46.478 clat (usec): min=196, max=422, avg=243.49, stdev=17.70 00:10:46.478 lat (usec): min=204, max=430, avg=251.51, stdev=17.75 00:10:46.478 clat percentiles (usec): 00:10:46.478 | 1.00th=[ 208], 5.00th=[ 217], 10.00th=[ 225], 20.00th=[ 231], 00:10:46.478 | 30.00th=[ 235], 40.00th=[ 239], 50.00th=[ 243], 60.00th=[ 247], 00:10:46.478 | 70.00th=[ 249], 80.00th=[ 255], 90.00th=[ 265], 95.00th=[ 273], 00:10:46.478 | 99.00th=[ 289], 99.50th=[ 297], 99.90th=[ 392], 99.95th=[ 412], 00:10:46.478 | 99.99th=[ 424] 00:10:46.478 write: IOPS=2490, BW=9962KiB/s (10.2MB/s)(9972KiB/1001msec); 0 zone resets 00:10:46.478 slat (usec): min=10, max=11846, avg=16.79, stdev=237.02 00:10:46.478 clat (usec): min=124, max=595, avg=170.57, stdev=23.42 00:10:46.478 lat (usec): min=141, max=12138, avg=187.36, stdev=240.61 00:10:46.478 clat percentiles (usec): 00:10:46.478 | 1.00th=[ 139], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 155], 00:10:46.478 | 30.00th=[ 159], 40.00th=[ 163], 50.00th=[ 167], 60.00th=[ 172], 00:10:46.478 | 70.00th=[ 176], 80.00th=[ 184], 90.00th=[ 198], 95.00th=[ 212], 00:10:46.478 | 99.00th=[ 233], 99.50th=[ 269], 99.90th=[ 371], 99.95th=[ 433], 00:10:46.478 | 99.99th=[ 594] 00:10:46.478 bw ( KiB/s): min= 8998, max= 8998, per=38.84%, avg=8998.00, stdev= 0.00, samples=1 00:10:46.478 iops : min= 2249, max= 2249, avg=2249.00, stdev= 0.00, samples=1 00:10:46.478 lat (usec) : 250=86.24%, 500=13.74%, 750=0.02% 00:10:46.478 cpu : usr=4.40%, sys=6.80%, ctx=4543, majf=0, minf=1 00:10:46.478 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:46.478 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.478 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.478 issued rwts: total=2048,2493,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:46.478 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:46.478 job2: (groupid=0, jobs=1): err= 0: pid=3312286: Wed Jul 24 18:05:39 2024 00:10:46.478 read: IOPS=22, BW=91.3KiB/s (93.5kB/s)(92.0KiB/1008msec) 00:10:46.478 slat (nsec): min=10132, max=24093, avg=21477.87, stdev=3518.06 00:10:46.478 clat (usec): min=238, max=41074, avg=39173.10, stdev=8488.00 00:10:46.478 lat (usec): min=260, max=41096, avg=39194.57, stdev=8487.89 00:10:46.478 clat percentiles (usec): 00:10:46.478 | 1.00th=[ 239], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:10:46.478 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:46.478 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:46.478 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:46.478 | 99.99th=[41157] 00:10:46.478 write: IOPS=507, BW=2032KiB/s (2081kB/s)(2048KiB/1008msec); 0 zone resets 00:10:46.478 slat (nsec): min=10868, max=50194, avg=12896.19, stdev=2973.38 00:10:46.478 clat (usec): min=131, max=265, avg=183.03, stdev=13.71 00:10:46.478 lat (usec): min=168, max=302, avg=195.93, stdev=14.27 00:10:46.478 clat percentiles (usec): 00:10:46.478 | 1.00th=[ 159], 5.00th=[ 165], 10.00th=[ 167], 20.00th=[ 174], 00:10:46.478 | 30.00th=[ 176], 40.00th=[ 178], 50.00th=[ 182], 60.00th=[ 186], 00:10:46.478 | 70.00th=[ 188], 80.00th=[ 192], 90.00th=[ 200], 95.00th=[ 208], 00:10:46.478 | 99.00th=[ 225], 99.50th=[ 231], 99.90th=[ 265], 99.95th=[ 265], 00:10:46.478 | 99.99th=[ 265] 00:10:46.478 bw ( KiB/s): min= 4087, max= 4087, per=17.64%, avg=4087.00, stdev= 0.00, samples=1 00:10:46.478 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:10:46.478 lat (usec) : 250=95.51%, 500=0.37% 00:10:46.478 lat (msec) : 50=4.11% 00:10:46.478 cpu : usr=0.60%, sys=0.79%, ctx=537, majf=0, minf=1 00:10:46.478 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:46.478 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.478 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.478 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:46.478 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:46.478 job3: (groupid=0, jobs=1): err= 0: pid=3312292: Wed Jul 24 18:05:39 2024 00:10:46.478 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:10:46.478 slat (nsec): min=6107, max=24925, avg=7063.82, stdev=885.80 00:10:46.478 clat (usec): min=221, max=456, avg=272.84, stdev=47.37 00:10:46.478 lat (usec): min=228, max=463, avg=279.91, stdev=47.44 00:10:46.478 clat percentiles (usec): 00:10:46.478 | 1.00th=[ 229], 5.00th=[ 237], 10.00th=[ 239], 20.00th=[ 245], 00:10:46.478 | 30.00th=[ 247], 40.00th=[ 251], 50.00th=[ 255], 60.00th=[ 262], 00:10:46.478 | 70.00th=[ 277], 80.00th=[ 293], 90.00th=[ 310], 95.00th=[ 412], 00:10:46.478 | 99.00th=[ 433], 99.50th=[ 437], 99.90th=[ 453], 99.95th=[ 453], 00:10:46.478 | 99.99th=[ 457] 00:10:46.478 write: IOPS=2358, BW=9435KiB/s (9661kB/s)(9444KiB/1001msec); 0 zone resets 00:10:46.478 slat (nsec): min=9197, max=39978, avg=10224.37, stdev=1167.22 00:10:46.478 clat (usec): min=129, max=309, avg=166.39, stdev=19.71 00:10:46.478 lat (usec): min=139, max=349, avg=176.61, stdev=19.90 00:10:46.478 clat percentiles (usec): 
00:10:46.478 | 1.00th=[ 137], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 151], 00:10:46.478 | 30.00th=[ 155], 40.00th=[ 159], 50.00th=[ 163], 60.00th=[ 165], 00:10:46.478 | 70.00th=[ 172], 80.00th=[ 178], 90.00th=[ 194], 95.00th=[ 212], 00:10:46.478 | 99.00th=[ 227], 99.50th=[ 231], 99.90th=[ 260], 99.95th=[ 262], 00:10:46.478 | 99.99th=[ 310] 00:10:46.478 bw ( KiB/s): min= 8175, max= 8175, per=35.29%, avg=8175.00, stdev= 0.00, samples=1 00:10:46.478 iops : min= 2043, max= 2043, avg=2043.00, stdev= 0.00, samples=1 00:10:46.478 lat (usec) : 250=70.72%, 500=29.28% 00:10:46.478 cpu : usr=2.30%, sys=3.90%, ctx=4409, majf=0, minf=1 00:10:46.478 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:46.478 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.478 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.478 issued rwts: total=2048,2361,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:46.478 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:46.478 00:10:46.478 Run status group 0 (all jobs): 00:10:46.478 READ: bw=15.9MiB/s (16.7MB/s), 86.7KiB/s-8184KiB/s (88.8kB/s-8380kB/s), io=16.2MiB (17.0MB), run=1001-1015msec 00:10:46.478 WRITE: bw=22.6MiB/s (23.7MB/s), 2018KiB/s-9962KiB/s (2066kB/s-10.2MB/s), io=23.0MiB (24.1MB), run=1001-1015msec 00:10:46.478 00:10:46.478 Disk stats (read/write): 00:10:46.478 nvme0n1: ios=44/512, merge=0/0, ticks=1684/94, in_queue=1778, util=96.89% 00:10:46.478 nvme0n2: ios=1767/2048, merge=0/0, ticks=1392/332, in_queue=1724, util=97.35% 00:10:46.478 nvme0n3: ios=43/512, merge=0/0, ticks=1724/87, in_queue=1811, util=97.17% 00:10:46.478 nvme0n4: ios=1710/2048, merge=0/0, ticks=757/325, in_queue=1082, util=94.93% 00:10:46.478 18:05:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:46.478 [global] 00:10:46.478 thread=1 00:10:46.478 invalidate=1 00:10:46.478 rw=randwrite 00:10:46.478 time_based=1 00:10:46.478 runtime=1 00:10:46.478 ioengine=libaio 00:10:46.478 direct=1 00:10:46.478 bs=4096 00:10:46.478 iodepth=1 00:10:46.478 norandommap=0 00:10:46.478 numjobs=1 00:10:46.478 00:10:46.478 verify_dump=1 00:10:46.478 verify_backlog=512 00:10:46.478 verify_state_save=0 00:10:46.478 do_verify=1 00:10:46.478 verify=crc32c-intel 00:10:46.478 [job0] 00:10:46.478 filename=/dev/nvme0n1 00:10:46.478 [job1] 00:10:46.478 filename=/dev/nvme0n2 00:10:46.478 [job2] 00:10:46.478 filename=/dev/nvme0n3 00:10:46.478 [job3] 00:10:46.478 filename=/dev/nvme0n4 00:10:46.479 Could not set queue depth (nvme0n1) 00:10:46.479 Could not set queue depth (nvme0n2) 00:10:46.479 Could not set queue depth (nvme0n3) 00:10:46.479 Could not set queue depth (nvme0n4) 00:10:46.737 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:46.737 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:46.737 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:46.738 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:46.738 fio-3.35 00:10:46.738 Starting 4 threads 00:10:48.106 00:10:48.106 job0: (groupid=0, jobs=1): err= 0: pid=3312723: Wed Jul 24 18:05:40 2024 00:10:48.106 read: IOPS=1612, BW=6450KiB/s (6604kB/s)(6456KiB/1001msec) 00:10:48.106 slat 
(nsec): min=7262, max=52355, avg=9022.56, stdev=2748.76 00:10:48.106 clat (usec): min=165, max=42049, avg=356.10, stdev=2048.74 00:10:48.106 lat (usec): min=211, max=42072, avg=365.12, stdev=2049.27 00:10:48.106 clat percentiles (usec): 00:10:48.106 | 1.00th=[ 217], 5.00th=[ 229], 10.00th=[ 233], 20.00th=[ 239], 00:10:48.106 | 30.00th=[ 243], 40.00th=[ 247], 50.00th=[ 251], 60.00th=[ 253], 00:10:48.106 | 70.00th=[ 258], 80.00th=[ 265], 90.00th=[ 269], 95.00th=[ 285], 00:10:48.106 | 99.00th=[ 404], 99.50th=[ 424], 99.90th=[41681], 99.95th=[42206], 00:10:48.106 | 99.99th=[42206] 00:10:48.106 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:48.106 slat (nsec): min=9004, max=61666, avg=11735.40, stdev=4220.47 00:10:48.106 clat (usec): min=108, max=379, avg=183.62, stdev=41.37 00:10:48.106 lat (usec): min=140, max=440, avg=195.36, stdev=41.25 00:10:48.106 clat percentiles (usec): 00:10:48.106 | 1.00th=[ 135], 5.00th=[ 145], 10.00th=[ 147], 20.00th=[ 151], 00:10:48.106 | 30.00th=[ 155], 40.00th=[ 159], 50.00th=[ 163], 60.00th=[ 172], 00:10:48.106 | 70.00th=[ 190], 80.00th=[ 241], 90.00th=[ 245], 95.00th=[ 251], 00:10:48.106 | 99.00th=[ 285], 99.50th=[ 293], 99.90th=[ 359], 99.95th=[ 367], 00:10:48.106 | 99.99th=[ 379] 00:10:48.106 bw ( KiB/s): min= 7096, max= 7096, per=29.68%, avg=7096.00, stdev= 0.00, samples=1 00:10:48.106 iops : min= 1774, max= 1774, avg=1774.00, stdev= 0.00, samples=1 00:10:48.106 lat (usec) : 250=74.82%, 500=25.04%, 750=0.03% 00:10:48.106 lat (msec) : 50=0.11% 00:10:48.106 cpu : usr=2.20%, sys=4.50%, ctx=3663, majf=0, minf=1 00:10:48.106 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:48.106 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:48.106 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:48.106 issued rwts: total=1614,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:48.106 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:48.106 job1: (groupid=0, jobs=1): err= 0: pid=3312739: Wed Jul 24 18:05:40 2024 00:10:48.106 read: IOPS=513, BW=2054KiB/s (2104kB/s)(2112KiB/1028msec) 00:10:48.106 slat (nsec): min=6578, max=23006, avg=7716.58, stdev=2625.90 00:10:48.106 clat (usec): min=205, max=42046, avg=1523.76, stdev=7064.55 00:10:48.106 lat (usec): min=212, max=42068, avg=1531.47, stdev=7067.04 00:10:48.106 clat percentiles (usec): 00:10:48.106 | 1.00th=[ 217], 5.00th=[ 229], 10.00th=[ 233], 20.00th=[ 241], 00:10:48.106 | 30.00th=[ 247], 40.00th=[ 253], 50.00th=[ 260], 60.00th=[ 265], 00:10:48.106 | 70.00th=[ 269], 80.00th=[ 281], 90.00th=[ 449], 95.00th=[ 474], 00:10:48.106 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:48.106 | 99.99th=[42206] 00:10:48.106 write: IOPS=996, BW=3984KiB/s (4080kB/s)(4096KiB/1028msec); 0 zone resets 00:10:48.106 slat (nsec): min=9111, max=47594, avg=10648.17, stdev=2326.37 00:10:48.106 clat (usec): min=125, max=3231, avg=199.56, stdev=109.74 00:10:48.106 lat (usec): min=136, max=3241, avg=210.21, stdev=109.84 00:10:48.106 clat percentiles (usec): 00:10:48.106 | 1.00th=[ 130], 5.00th=[ 137], 10.00th=[ 143], 20.00th=[ 151], 00:10:48.106 | 30.00th=[ 157], 40.00th=[ 165], 50.00th=[ 178], 60.00th=[ 198], 00:10:48.106 | 70.00th=[ 221], 80.00th=[ 243], 90.00th=[ 273], 95.00th=[ 310], 00:10:48.106 | 99.00th=[ 371], 99.50th=[ 392], 99.90th=[ 416], 99.95th=[ 3228], 00:10:48.106 | 99.99th=[ 3228] 00:10:48.106 bw ( KiB/s): min= 8192, max= 8192, per=34.27%, avg=8192.00, stdev= 0.00, samples=1 
00:10:48.106 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:48.106 lat (usec) : 250=66.11%, 500=32.60%, 750=0.19% 00:10:48.106 lat (msec) : 4=0.06%, 50=1.03% 00:10:48.106 cpu : usr=0.58%, sys=1.66%, ctx=1556, majf=0, minf=2 00:10:48.106 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:48.106 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:48.106 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:48.106 issued rwts: total=528,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:48.106 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:48.106 job2: (groupid=0, jobs=1): err= 0: pid=3312760: Wed Jul 24 18:05:40 2024 00:10:48.106 read: IOPS=2051, BW=8208KiB/s (8405kB/s)(8216KiB/1001msec) 00:10:48.106 slat (nsec): min=6726, max=26698, avg=7450.13, stdev=824.75 00:10:48.106 clat (usec): min=194, max=499, avg=250.90, stdev=43.16 00:10:48.106 lat (usec): min=202, max=507, avg=258.35, stdev=43.15 00:10:48.106 clat percentiles (usec): 00:10:48.106 | 1.00th=[ 206], 5.00th=[ 215], 10.00th=[ 221], 20.00th=[ 227], 00:10:48.106 | 30.00th=[ 233], 40.00th=[ 237], 50.00th=[ 243], 60.00th=[ 247], 00:10:48.106 | 70.00th=[ 253], 80.00th=[ 260], 90.00th=[ 273], 95.00th=[ 322], 00:10:48.106 | 99.00th=[ 457], 99.50th=[ 474], 99.90th=[ 494], 99.95th=[ 498], 00:10:48.106 | 99.99th=[ 502] 00:10:48.106 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:10:48.106 slat (nsec): min=9319, max=36599, avg=10488.01, stdev=1347.80 00:10:48.106 clat (usec): min=123, max=347, avg=168.88, stdev=27.22 00:10:48.106 lat (usec): min=133, max=372, avg=179.37, stdev=27.40 00:10:48.106 clat percentiles (usec): 00:10:48.106 | 1.00th=[ 135], 5.00th=[ 141], 10.00th=[ 147], 20.00th=[ 151], 00:10:48.106 | 30.00th=[ 155], 40.00th=[ 157], 50.00th=[ 161], 60.00th=[ 165], 00:10:48.106 | 70.00th=[ 172], 80.00th=[ 180], 90.00th=[ 210], 95.00th=[ 231], 00:10:48.106 | 99.00th=[ 265], 99.50th=[ 281], 99.90th=[ 326], 99.95th=[ 338], 00:10:48.106 | 99.99th=[ 347] 00:10:48.106 bw ( KiB/s): min= 8856, max= 8856, per=37.04%, avg=8856.00, stdev= 0.00, samples=1 00:10:48.106 iops : min= 2214, max= 2214, avg=2214.00, stdev= 0.00, samples=1 00:10:48.106 lat (usec) : 250=82.96%, 500=17.04% 00:10:48.106 cpu : usr=1.40%, sys=5.30%, ctx=4615, majf=0, minf=1 00:10:48.106 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:48.106 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:48.106 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:48.106 issued rwts: total=2054,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:48.106 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:48.106 job3: (groupid=0, jobs=1): err= 0: pid=3312765: Wed Jul 24 18:05:40 2024 00:10:48.106 read: IOPS=21, BW=87.4KiB/s (89.5kB/s)(88.0KiB/1007msec) 00:10:48.106 slat (nsec): min=9549, max=22387, avg=20467.50, stdev=3701.75 00:10:48.106 clat (usec): min=40849, max=41237, avg=40981.32, stdev=94.13 00:10:48.106 lat (usec): min=40871, max=41247, avg=41001.78, stdev=91.34 00:10:48.106 clat percentiles (usec): 00:10:48.106 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:10:48.106 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:48.106 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:48.106 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:48.106 | 99.99th=[41157] 
00:10:48.106 write: IOPS=508, BW=2034KiB/s (2083kB/s)(2048KiB/1007msec); 0 zone resets 00:10:48.106 slat (nsec): min=9948, max=54999, avg=11585.08, stdev=2603.87 00:10:48.106 clat (usec): min=141, max=585, avg=189.07, stdev=35.80 00:10:48.106 lat (usec): min=152, max=597, avg=200.65, stdev=36.29 00:10:48.107 clat percentiles (usec): 00:10:48.107 | 1.00th=[ 153], 5.00th=[ 157], 10.00th=[ 161], 20.00th=[ 167], 00:10:48.107 | 30.00th=[ 172], 40.00th=[ 176], 50.00th=[ 180], 60.00th=[ 184], 00:10:48.107 | 70.00th=[ 192], 80.00th=[ 204], 90.00th=[ 241], 95.00th=[ 243], 00:10:48.107 | 99.00th=[ 253], 99.50th=[ 318], 99.90th=[ 586], 99.95th=[ 586], 00:10:48.107 | 99.99th=[ 586] 00:10:48.107 bw ( KiB/s): min= 4096, max= 4096, per=17.13%, avg=4096.00, stdev= 0.00, samples=1 00:10:48.107 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:48.107 lat (usec) : 250=94.01%, 500=1.50%, 750=0.37% 00:10:48.107 lat (msec) : 50=4.12% 00:10:48.107 cpu : usr=0.50%, sys=0.80%, ctx=534, majf=0, minf=1 00:10:48.107 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:48.107 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:48.107 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:48.107 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:48.107 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:48.107 00:10:48.107 Run status group 0 (all jobs): 00:10:48.107 READ: bw=16.0MiB/s (16.8MB/s), 87.4KiB/s-8208KiB/s (89.5kB/s-8405kB/s), io=16.5MiB (17.3MB), run=1001-1028msec 00:10:48.107 WRITE: bw=23.3MiB/s (24.5MB/s), 2034KiB/s-9.99MiB/s (2083kB/s-10.5MB/s), io=24.0MiB (25.2MB), run=1001-1028msec 00:10:48.107 00:10:48.107 Disk stats (read/write): 00:10:48.107 nvme0n1: ios=1479/1536, merge=0/0, ticks=550/271, in_queue=821, util=86.87% 00:10:48.107 nvme0n2: ios=545/1024, merge=0/0, ticks=1500/198, in_queue=1698, util=89.63% 00:10:48.107 nvme0n3: ios=1878/2048, merge=0/0, ticks=1021/344, in_queue=1365, util=94.47% 00:10:48.107 nvme0n4: ios=75/512, merge=0/0, ticks=816/88, in_queue=904, util=95.58% 00:10:48.107 18:05:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:48.107 [global] 00:10:48.107 thread=1 00:10:48.107 invalidate=1 00:10:48.107 rw=write 00:10:48.107 time_based=1 00:10:48.107 runtime=1 00:10:48.107 ioengine=libaio 00:10:48.107 direct=1 00:10:48.107 bs=4096 00:10:48.107 iodepth=128 00:10:48.107 norandommap=0 00:10:48.107 numjobs=1 00:10:48.107 00:10:48.107 verify_dump=1 00:10:48.107 verify_backlog=512 00:10:48.107 verify_state_save=0 00:10:48.107 do_verify=1 00:10:48.107 verify=crc32c-intel 00:10:48.107 [job0] 00:10:48.107 filename=/dev/nvme0n1 00:10:48.107 [job1] 00:10:48.107 filename=/dev/nvme0n2 00:10:48.107 [job2] 00:10:48.107 filename=/dev/nvme0n3 00:10:48.107 [job3] 00:10:48.107 filename=/dev/nvme0n4 00:10:48.107 Could not set queue depth (nvme0n1) 00:10:48.107 Could not set queue depth (nvme0n2) 00:10:48.107 Could not set queue depth (nvme0n3) 00:10:48.107 Could not set queue depth (nvme0n4) 00:10:48.107 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:48.107 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:48.107 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 
00:10:48.107 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:48.107 fio-3.35 00:10:48.107 Starting 4 threads 00:10:49.478 00:10:49.478 job0: (groupid=0, jobs=1): err= 0: pid=3313135: Wed Jul 24 18:05:42 2024 00:10:49.478 read: IOPS=4921, BW=19.2MiB/s (20.2MB/s)(19.3MiB/1003msec) 00:10:49.478 slat (nsec): min=1292, max=12147k, avg=95777.51, stdev=681954.88 00:10:49.478 clat (usec): min=878, max=50900, avg=11083.99, stdev=3788.66 00:10:49.478 lat (usec): min=2113, max=50907, avg=11179.77, stdev=3867.66 00:10:49.478 clat percentiles (usec): 00:10:49.478 | 1.00th=[ 3851], 5.00th=[ 7111], 10.00th=[ 8979], 20.00th=[ 9372], 00:10:49.478 | 30.00th=[ 9634], 40.00th=[ 9765], 50.00th=[10028], 60.00th=[10552], 00:10:49.478 | 70.00th=[11338], 80.00th=[12518], 90.00th=[15008], 95.00th=[16581], 00:10:49.478 | 99.00th=[25560], 99.50th=[37487], 99.90th=[47449], 99.95th=[47449], 00:10:49.478 | 99.99th=[51119] 00:10:49.478 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:10:49.478 slat (usec): min=2, max=8742, avg=96.99, stdev=502.44 00:10:49.478 clat (usec): min=284, max=53070, avg=14158.50, stdev=10843.44 00:10:49.478 lat (usec): min=822, max=53073, avg=14255.50, stdev=10908.96 00:10:49.478 clat percentiles (usec): 00:10:49.478 | 1.00th=[ 2638], 5.00th=[ 5014], 10.00th=[ 6652], 20.00th=[ 8848], 00:10:49.478 | 30.00th=[ 9503], 40.00th=[10028], 50.00th=[10159], 60.00th=[10159], 00:10:49.478 | 70.00th=[10421], 80.00th=[16188], 90.00th=[32637], 95.00th=[42206], 00:10:49.478 | 99.00th=[50594], 99.50th=[51119], 99.90th=[53216], 99.95th=[53216], 00:10:49.478 | 99.99th=[53216] 00:10:49.478 bw ( KiB/s): min=16384, max=24576, per=28.69%, avg=20480.00, stdev=5792.62, samples=2 00:10:49.478 iops : min= 4096, max= 6144, avg=5120.00, stdev=1448.15, samples=2 00:10:49.478 lat (usec) : 500=0.01%, 1000=0.03% 00:10:49.478 lat (msec) : 2=0.27%, 4=1.82%, 10=43.65%, 20=44.60%, 50=9.07% 00:10:49.478 lat (msec) : 100=0.56% 00:10:49.478 cpu : usr=2.89%, sys=6.39%, ctx=617, majf=0, minf=1 00:10:49.478 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:49.478 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:49.478 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:49.478 issued rwts: total=4936,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:49.478 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:49.478 job1: (groupid=0, jobs=1): err= 0: pid=3313137: Wed Jul 24 18:05:42 2024 00:10:49.478 read: IOPS=3614, BW=14.1MiB/s (14.8MB/s)(14.3MiB/1011msec) 00:10:49.478 slat (nsec): min=1524, max=18713k, avg=128480.80, stdev=926602.12 00:10:49.478 clat (usec): min=4563, max=61017, avg=14686.67, stdev=9812.94 00:10:49.478 lat (usec): min=4570, max=61025, avg=14815.15, stdev=9889.93 00:10:49.478 clat percentiles (usec): 00:10:49.478 | 1.00th=[ 5866], 5.00th=[ 7701], 10.00th=[ 9241], 20.00th=[ 9634], 00:10:49.478 | 30.00th=[10421], 40.00th=[10945], 50.00th=[11469], 60.00th=[11863], 00:10:49.478 | 70.00th=[13960], 80.00th=[16450], 90.00th=[20841], 95.00th=[42730], 00:10:49.478 | 99.00th=[56886], 99.50th=[58983], 99.90th=[61080], 99.95th=[61080], 00:10:49.478 | 99.99th=[61080] 00:10:49.478 write: IOPS=4051, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1011msec); 0 zone resets 00:10:49.478 slat (usec): min=2, max=45027, avg=122.80, stdev=990.24 00:10:49.478 clat (usec): min=1548, max=68901, avg=15969.95, stdev=12058.76 00:10:49.478 lat (usec): min=1568, max=71143, 
avg=16092.75, stdev=12156.39 00:10:49.478 clat percentiles (usec): 00:10:49.478 | 1.00th=[ 3687], 5.00th=[ 6783], 10.00th=[ 7308], 20.00th=[ 9110], 00:10:49.478 | 30.00th=[ 9503], 40.00th=[ 9896], 50.00th=[10159], 60.00th=[15664], 00:10:49.478 | 70.00th=[17171], 80.00th=[20841], 90.00th=[23987], 95.00th=[48497], 00:10:49.478 | 99.00th=[58983], 99.50th=[58983], 99.90th=[68682], 99.95th=[68682], 00:10:49.478 | 99.99th=[68682] 00:10:49.478 bw ( KiB/s): min=14664, max=17648, per=22.63%, avg=16156.00, stdev=2110.01, samples=2 00:10:49.478 iops : min= 3666, max= 4412, avg=4039.00, stdev=527.50, samples=2 00:10:49.478 lat (msec) : 2=0.03%, 4=0.62%, 10=32.99%, 20=48.97%, 50=13.43% 00:10:49.478 lat (msec) : 100=3.96% 00:10:49.478 cpu : usr=3.17%, sys=5.15%, ctx=425, majf=0, minf=1 00:10:49.478 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:49.478 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:49.478 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:49.478 issued rwts: total=3654,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:49.478 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:49.478 job2: (groupid=0, jobs=1): err= 0: pid=3313138: Wed Jul 24 18:05:42 2024 00:10:49.478 read: IOPS=3029, BW=11.8MiB/s (12.4MB/s)(12.0MiB/1014msec) 00:10:49.478 slat (nsec): min=1115, max=36126k, avg=134913.33, stdev=1058069.04 00:10:49.478 clat (usec): min=5527, max=56347, avg=16604.78, stdev=6810.15 00:10:49.478 lat (usec): min=5533, max=56455, avg=16739.69, stdev=6880.30 00:10:49.478 clat percentiles (usec): 00:10:49.478 | 1.00th=[ 7373], 5.00th=[ 9241], 10.00th=[10945], 20.00th=[11863], 00:10:49.478 | 30.00th=[12125], 40.00th=[12518], 50.00th=[15008], 60.00th=[17957], 00:10:49.478 | 70.00th=[19006], 80.00th=[20317], 90.00th=[22938], 95.00th=[29230], 00:10:49.478 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[45351], 00:10:49.478 | 99.99th=[56361] 00:10:49.478 write: IOPS=3201, BW=12.5MiB/s (13.1MB/s)(12.7MiB/1014msec); 0 zone resets 00:10:49.478 slat (nsec): min=1922, max=13846k, avg=174731.42, stdev=891773.28 00:10:49.478 clat (usec): min=2890, max=98545, avg=23797.74, stdev=17077.80 00:10:49.478 lat (usec): min=2906, max=98557, avg=23972.47, stdev=17170.72 00:10:49.478 clat percentiles (usec): 00:10:49.478 | 1.00th=[ 5276], 5.00th=[ 9372], 10.00th=[10683], 20.00th=[13304], 00:10:49.478 | 30.00th=[15795], 40.00th=[17171], 50.00th=[18220], 60.00th=[20841], 00:10:49.478 | 70.00th=[21365], 80.00th=[27395], 90.00th=[49546], 95.00th=[62129], 00:10:49.478 | 99.00th=[91751], 99.50th=[92799], 99.90th=[98042], 99.95th=[98042], 00:10:49.478 | 99.99th=[98042] 00:10:49.478 bw ( KiB/s): min=11016, max=13936, per=17.48%, avg=12476.00, stdev=2064.75, samples=2 00:10:49.478 iops : min= 2754, max= 3484, avg=3119.00, stdev=516.19, samples=2 00:10:49.478 lat (msec) : 4=0.32%, 10=6.63%, 20=58.64%, 50=29.38%, 100=5.03% 00:10:49.478 cpu : usr=2.47%, sys=3.55%, ctx=397, majf=0, minf=1 00:10:49.478 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:10:49.478 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:49.478 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:49.478 issued rwts: total=3072,3246,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:49.478 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:49.478 job3: (groupid=0, jobs=1): err= 0: pid=3313139: Wed Jul 24 18:05:42 2024 00:10:49.479 read: IOPS=5218, BW=20.4MiB/s 
(21.4MB/s)(20.5MiB/1004msec) 00:10:49.479 slat (nsec): min=1094, max=9761.4k, avg=90572.62, stdev=547193.56 00:10:49.479 clat (usec): min=1370, max=57096, avg=11274.42, stdev=4721.51 00:10:49.479 lat (usec): min=3042, max=57099, avg=11365.00, stdev=4729.19 00:10:49.479 clat percentiles (usec): 00:10:49.479 | 1.00th=[ 4146], 5.00th=[ 5604], 10.00th=[ 7177], 20.00th=[ 8848], 00:10:49.479 | 30.00th=[10421], 40.00th=[10945], 50.00th=[11338], 60.00th=[11731], 00:10:49.479 | 70.00th=[11994], 80.00th=[12387], 90.00th=[13566], 95.00th=[15664], 00:10:49.479 | 99.00th=[20317], 99.50th=[49021], 99.90th=[56886], 99.95th=[56886], 00:10:49.479 | 99.99th=[56886] 00:10:49.479 write: IOPS=5609, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1004msec); 0 zone resets 00:10:49.479 slat (nsec): min=1951, max=11027k, avg=88240.78, stdev=457109.46 00:10:49.479 clat (usec): min=3878, max=49230, avg=11993.66, stdev=3233.22 00:10:49.479 lat (usec): min=3917, max=49246, avg=12081.90, stdev=3249.35 00:10:49.479 clat percentiles (usec): 00:10:49.479 | 1.00th=[ 6587], 5.00th=[ 8356], 10.00th=[10290], 20.00th=[10945], 00:10:49.479 | 30.00th=[11338], 40.00th=[11600], 50.00th=[11731], 60.00th=[11863], 00:10:49.479 | 70.00th=[11863], 80.00th=[12125], 90.00th=[13566], 95.00th=[16712], 00:10:49.479 | 99.00th=[22938], 99.50th=[38011], 99.90th=[45351], 99.95th=[49021], 00:10:49.479 | 99.99th=[49021] 00:10:49.479 bw ( KiB/s): min=21904, max=23080, per=31.51%, avg=22492.00, stdev=831.56, samples=2 00:10:49.479 iops : min= 5476, max= 5770, avg=5623.00, stdev=207.89, samples=2 00:10:49.479 lat (msec) : 2=0.01%, 4=0.40%, 10=16.92%, 20=81.01%, 50=1.43% 00:10:49.479 lat (msec) : 100=0.24% 00:10:49.479 cpu : usr=3.49%, sys=5.38%, ctx=637, majf=0, minf=1 00:10:49.479 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:10:49.479 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:49.479 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:49.479 issued rwts: total=5239,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:49.479 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:49.479 00:10:49.479 Run status group 0 (all jobs): 00:10:49.479 READ: bw=65.1MiB/s (68.3MB/s), 11.8MiB/s-20.4MiB/s (12.4MB/s-21.4MB/s), io=66.0MiB (69.2MB), run=1003-1014msec 00:10:49.479 WRITE: bw=69.7MiB/s (73.1MB/s), 12.5MiB/s-21.9MiB/s (13.1MB/s-23.0MB/s), io=70.7MiB (74.1MB), run=1003-1014msec 00:10:49.479 00:10:49.479 Disk stats (read/write): 00:10:49.479 nvme0n1: ios=4106/4096, merge=0/0, ticks=42615/61230, in_queue=103845, util=91.38% 00:10:49.479 nvme0n2: ios=3243/3584, merge=0/0, ticks=47424/49528, in_queue=96952, util=99.80% 00:10:49.479 nvme0n3: ios=2606/2679, merge=0/0, ticks=36469/50679, in_queue=87148, util=91.06% 00:10:49.479 nvme0n4: ios=4610/4608, merge=0/0, ticks=23888/20989, in_queue=44877, util=95.92% 00:10:49.479 18:05:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:49.479 [global] 00:10:49.479 thread=1 00:10:49.479 invalidate=1 00:10:49.479 rw=randwrite 00:10:49.479 time_based=1 00:10:49.479 runtime=1 00:10:49.479 ioengine=libaio 00:10:49.479 direct=1 00:10:49.479 bs=4096 00:10:49.479 iodepth=128 00:10:49.479 norandommap=0 00:10:49.479 numjobs=1 00:10:49.479 00:10:49.479 verify_dump=1 00:10:49.479 verify_backlog=512 00:10:49.479 verify_state_save=0 00:10:49.479 do_verify=1 00:10:49.479 verify=crc32c-intel 00:10:49.479 [job0] 
00:10:49.479 filename=/dev/nvme0n1 00:10:49.479 [job1] 00:10:49.479 filename=/dev/nvme0n2 00:10:49.479 [job2] 00:10:49.479 filename=/dev/nvme0n3 00:10:49.479 [job3] 00:10:49.479 filename=/dev/nvme0n4 00:10:49.479 Could not set queue depth (nvme0n1) 00:10:49.479 Could not set queue depth (nvme0n2) 00:10:49.479 Could not set queue depth (nvme0n3) 00:10:49.479 Could not set queue depth (nvme0n4) 00:10:49.737 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:49.737 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:49.737 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:49.737 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:49.737 fio-3.35 00:10:49.737 Starting 4 threads 00:10:51.111 00:10:51.111 job0: (groupid=0, jobs=1): err= 0: pid=3313511: Wed Jul 24 18:05:43 2024 00:10:51.111 read: IOPS=3896, BW=15.2MiB/s (16.0MB/s)(15.3MiB/1004msec) 00:10:51.111 slat (nsec): min=1018, max=11611k, avg=133701.73, stdev=729714.84 00:10:51.111 clat (usec): min=1044, max=48556, avg=16823.50, stdev=8869.48 00:10:51.111 lat (usec): min=4969, max=49589, avg=16957.20, stdev=8923.65 00:10:51.111 clat percentiles (usec): 00:10:51.111 | 1.00th=[ 6325], 5.00th=[ 8029], 10.00th=[ 8979], 20.00th=[10290], 00:10:51.111 | 30.00th=[10945], 40.00th=[11994], 50.00th=[12780], 60.00th=[16057], 00:10:51.111 | 70.00th=[20841], 80.00th=[22414], 90.00th=[29754], 95.00th=[37487], 00:10:51.111 | 99.00th=[43779], 99.50th=[44827], 99.90th=[47449], 99.95th=[47449], 00:10:51.111 | 99.99th=[48497] 00:10:51.111 write: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec); 0 zone resets 00:10:51.111 slat (nsec): min=1686, max=9441.3k, avg=111320.21, stdev=623158.01 00:10:51.111 clat (usec): min=4508, max=37851, avg=14925.83, stdev=6143.93 00:10:51.111 lat (usec): min=4515, max=37858, avg=15037.15, stdev=6177.39 00:10:51.111 clat percentiles (usec): 00:10:51.111 | 1.00th=[ 6325], 5.00th=[ 7832], 10.00th=[ 8848], 20.00th=[10028], 00:10:51.111 | 30.00th=[10814], 40.00th=[11863], 50.00th=[12911], 60.00th=[14877], 00:10:51.111 | 70.00th=[16712], 80.00th=[19792], 90.00th=[24249], 95.00th=[28181], 00:10:51.111 | 99.00th=[33162], 99.50th=[33817], 99.90th=[34341], 99.95th=[34341], 00:10:51.111 | 99.99th=[38011] 00:10:51.111 bw ( KiB/s): min=12288, max=20439, per=22.79%, avg=16363.50, stdev=5763.63, samples=2 00:10:51.111 iops : min= 3072, max= 5109, avg=4090.50, stdev=1440.38, samples=2 00:10:51.111 lat (msec) : 2=0.01%, 10=18.87%, 20=55.99%, 50=25.12% 00:10:51.111 cpu : usr=2.19%, sys=4.49%, ctx=447, majf=0, minf=1 00:10:51.111 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:51.111 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:51.111 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:51.111 issued rwts: total=3912,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:51.111 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:51.111 job1: (groupid=0, jobs=1): err= 0: pid=3313512: Wed Jul 24 18:05:43 2024 00:10:51.111 read: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec) 00:10:51.111 slat (nsec): min=1050, max=33063k, avg=92957.68, stdev=779128.55 00:10:51.111 clat (usec): min=4543, max=60184, avg=12589.20, stdev=7457.56 00:10:51.111 lat (usec): min=4549, max=60207, avg=12682.16, 
stdev=7503.82 00:10:51.111 clat percentiles (usec): 00:10:51.111 | 1.00th=[ 4686], 5.00th=[ 5997], 10.00th=[ 7570], 20.00th=[ 8455], 00:10:51.111 | 30.00th=[ 9372], 40.00th=[ 9765], 50.00th=[10814], 60.00th=[11600], 00:10:51.111 | 70.00th=[12387], 80.00th=[14091], 90.00th=[17695], 95.00th=[27132], 00:10:51.111 | 99.00th=[52691], 99.50th=[53740], 99.90th=[53740], 99.95th=[53740], 00:10:51.111 | 99.99th=[60031] 00:10:51.111 write: IOPS=5244, BW=20.5MiB/s (21.5MB/s)(20.5MiB/1003msec); 0 zone resets 00:10:51.111 slat (usec): min=2, max=12848, avg=88.56, stdev=620.33 00:10:51.111 clat (usec): min=381, max=66475, avg=11902.03, stdev=5468.87 00:10:51.111 lat (usec): min=2254, max=66479, avg=11990.59, stdev=5499.38 00:10:51.111 clat percentiles (usec): 00:10:51.111 | 1.00th=[ 3720], 5.00th=[ 6521], 10.00th=[ 7111], 20.00th=[ 8291], 00:10:51.111 | 30.00th=[ 9503], 40.00th=[10159], 50.00th=[11076], 60.00th=[11600], 00:10:51.111 | 70.00th=[12256], 80.00th=[14091], 90.00th=[17695], 95.00th=[20841], 00:10:51.111 | 99.00th=[33162], 99.50th=[38536], 99.90th=[66323], 99.95th=[66323], 00:10:51.111 | 99.99th=[66323] 00:10:51.112 bw ( KiB/s): min=20216, max=20840, per=28.59%, avg=20528.00, stdev=441.23, samples=2 00:10:51.112 iops : min= 5054, max= 5210, avg=5132.00, stdev=110.31, samples=2 00:10:51.112 lat (usec) : 500=0.01%, 1000=0.01% 00:10:51.112 lat (msec) : 4=0.89%, 10=39.55%, 20=51.94%, 50=6.79%, 100=0.82% 00:10:51.112 cpu : usr=3.39%, sys=5.49%, ctx=391, majf=0, minf=1 00:10:51.112 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:51.112 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:51.112 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:51.112 issued rwts: total=5120,5260,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:51.112 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:51.112 job2: (groupid=0, jobs=1): err= 0: pid=3313513: Wed Jul 24 18:05:43 2024 00:10:51.112 read: IOPS=4615, BW=18.0MiB/s (18.9MB/s)(18.1MiB/1003msec) 00:10:51.112 slat (nsec): min=1246, max=14135k, avg=88433.16, stdev=705279.92 00:10:51.112 clat (usec): min=2034, max=34346, avg=12587.46, stdev=3877.12 00:10:51.112 lat (usec): min=2161, max=34353, avg=12675.89, stdev=3925.95 00:10:51.112 clat percentiles (usec): 00:10:51.112 | 1.00th=[ 4817], 5.00th=[ 7635], 10.00th=[ 8848], 20.00th=[ 9634], 00:10:51.112 | 30.00th=[10814], 40.00th=[11338], 50.00th=[11731], 60.00th=[12649], 00:10:51.112 | 70.00th=[13698], 80.00th=[15008], 90.00th=[17957], 95.00th=[19792], 00:10:51.112 | 99.00th=[24511], 99.50th=[28705], 99.90th=[34341], 99.95th=[34341], 00:10:51.112 | 99.99th=[34341] 00:10:51.112 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:10:51.112 slat (nsec): min=1764, max=14850k, avg=76245.29, stdev=537607.19 00:10:51.112 clat (usec): min=542, max=56037, avg=13513.50, stdev=9253.37 00:10:51.112 lat (usec): min=551, max=56039, avg=13589.74, stdev=9301.36 00:10:51.112 clat percentiles (usec): 00:10:51.112 | 1.00th=[ 2024], 5.00th=[ 4424], 10.00th=[ 5866], 20.00th=[ 7242], 00:10:51.112 | 30.00th=[ 8586], 40.00th=[10421], 50.00th=[11076], 60.00th=[11731], 00:10:51.112 | 70.00th=[13698], 80.00th=[18220], 90.00th=[22938], 95.00th=[34341], 00:10:51.112 | 99.00th=[52167], 99.50th=[53740], 99.90th=[54789], 99.95th=[54789], 00:10:51.112 | 99.99th=[55837] 00:10:51.112 bw ( KiB/s): min=18784, max=21328, per=27.93%, avg=20056.00, stdev=1798.88, samples=2 00:10:51.112 iops : min= 4696, max= 5332, avg=5014.00, stdev=449.72, 
samples=2 00:10:51.112 lat (usec) : 750=0.13%, 1000=0.05% 00:10:51.112 lat (msec) : 2=0.32%, 4=2.25%, 10=25.67%, 20=62.28%, 50=8.52% 00:10:51.112 lat (msec) : 100=0.77% 00:10:51.112 cpu : usr=3.69%, sys=4.99%, ctx=415, majf=0, minf=1 00:10:51.112 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:51.112 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:51.112 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:51.112 issued rwts: total=4629,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:51.112 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:51.112 job3: (groupid=0, jobs=1): err= 0: pid=3313514: Wed Jul 24 18:05:43 2024 00:10:51.112 read: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec) 00:10:51.112 slat (nsec): min=1025, max=43244k, avg=155907.77, stdev=1155313.61 00:10:51.112 clat (usec): min=5549, max=83553, avg=21355.28, stdev=14675.13 00:10:51.112 lat (usec): min=5554, max=83563, avg=21511.19, stdev=14740.28 00:10:51.112 clat percentiles (usec): 00:10:51.112 | 1.00th=[ 8455], 5.00th=[10421], 10.00th=[11076], 20.00th=[11994], 00:10:51.112 | 30.00th=[12518], 40.00th=[13042], 50.00th=[14222], 60.00th=[16909], 00:10:51.112 | 70.00th=[23725], 80.00th=[29230], 90.00th=[40633], 95.00th=[55313], 00:10:51.112 | 99.00th=[76022], 99.50th=[76022], 99.90th=[83362], 99.95th=[83362], 00:10:51.112 | 99.99th=[83362] 00:10:51.112 write: IOPS=3534, BW=13.8MiB/s (14.5MB/s)(13.8MiB/1003msec); 0 zone resets 00:10:51.112 slat (nsec): min=1718, max=21907k, avg=142058.34, stdev=1018224.71 00:10:51.112 clat (usec): min=2119, max=61023, avg=16950.39, stdev=7267.91 00:10:51.112 lat (usec): min=5428, max=61401, avg=17092.45, stdev=7321.99 00:10:51.112 clat percentiles (usec): 00:10:51.112 | 1.00th=[ 5538], 5.00th=[ 9241], 10.00th=[11076], 20.00th=[11731], 00:10:51.112 | 30.00th=[12256], 40.00th=[12518], 50.00th=[13304], 60.00th=[15795], 00:10:51.112 | 70.00th=[20579], 80.00th=[25035], 90.00th=[27395], 95.00th=[28705], 00:10:51.112 | 99.00th=[33162], 99.50th=[45876], 99.90th=[46400], 99.95th=[46400], 00:10:51.112 | 99.99th=[61080] 00:10:51.112 bw ( KiB/s): min=12928, max=14416, per=19.04%, avg=13672.00, stdev=1052.17, samples=2 00:10:51.112 iops : min= 3232, max= 3604, avg=3418.00, stdev=263.04, samples=2 00:10:51.112 lat (msec) : 4=0.02%, 10=6.18%, 20=62.32%, 50=27.22%, 100=4.26% 00:10:51.112 cpu : usr=2.69%, sys=2.89%, ctx=282, majf=0, minf=1 00:10:51.112 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:10:51.112 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:51.112 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:51.112 issued rwts: total=3072,3545,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:51.112 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:51.112 00:10:51.112 Run status group 0 (all jobs): 00:10:51.112 READ: bw=65.1MiB/s (68.3MB/s), 12.0MiB/s-19.9MiB/s (12.5MB/s-20.9MB/s), io=65.4MiB (68.5MB), run=1003-1004msec 00:10:51.112 WRITE: bw=70.1MiB/s (73.5MB/s), 13.8MiB/s-20.5MiB/s (14.5MB/s-21.5MB/s), io=70.4MiB (73.8MB), run=1003-1004msec 00:10:51.112 00:10:51.112 Disk stats (read/write): 00:10:51.112 nvme0n1: ios=3332/3584, merge=0/0, ticks=14218/14513, in_queue=28731, util=81.95% 00:10:51.112 nvme0n2: ios=3662/4096, merge=0/0, ticks=30673/31186, in_queue=61859, util=89.40% 00:10:51.112 nvme0n3: ios=3767/4096, merge=0/0, ticks=38951/47478, in_queue=86429, util=90.15% 00:10:51.112 nvme0n4: ios=2713/3072, 
merge=0/0, ticks=14206/17732, in_queue=31938, util=94.93% 00:10:51.112 18:05:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:51.112 18:05:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3313746 00:10:51.112 18:05:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:51.112 18:05:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:51.112 [global] 00:10:51.112 thread=1 00:10:51.112 invalidate=1 00:10:51.112 rw=read 00:10:51.112 time_based=1 00:10:51.112 runtime=10 00:10:51.112 ioengine=libaio 00:10:51.112 direct=1 00:10:51.112 bs=4096 00:10:51.112 iodepth=1 00:10:51.112 norandommap=1 00:10:51.112 numjobs=1 00:10:51.112 00:10:51.112 [job0] 00:10:51.112 filename=/dev/nvme0n1 00:10:51.112 [job1] 00:10:51.112 filename=/dev/nvme0n2 00:10:51.112 [job2] 00:10:51.112 filename=/dev/nvme0n3 00:10:51.112 [job3] 00:10:51.112 filename=/dev/nvme0n4 00:10:51.112 Could not set queue depth (nvme0n1) 00:10:51.112 Could not set queue depth (nvme0n2) 00:10:51.112 Could not set queue depth (nvme0n3) 00:10:51.112 Could not set queue depth (nvme0n4) 00:10:51.370 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:51.370 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:51.370 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:51.370 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:51.370 fio-3.35 00:10:51.370 Starting 4 threads 00:10:54.651 18:05:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:54.651 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=15040512, buflen=4096 00:10:54.651 fio: pid=3313888, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:54.651 18:05:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:54.651 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=5697536, buflen=4096 00:10:54.651 fio: pid=3313887, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:54.651 18:05:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:54.651 18:05:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:54.651 18:05:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:54.651 18:05:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:54.651 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=7356416, buflen=4096 00:10:54.651 fio: pid=3313884, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:54.909 18:05:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs 
$raid_malloc_bdevs $concat_malloc_bdevs 00:10:54.909 18:05:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:54.909 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=323584, buflen=4096 00:10:54.909 fio: pid=3313885, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:54.909 00:10:54.909 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3313884: Wed Jul 24 18:05:47 2024 00:10:54.909 read: IOPS=590, BW=2362KiB/s (2419kB/s)(7184KiB/3041msec) 00:10:54.909 slat (usec): min=5, max=8763, avg=16.92, stdev=276.77 00:10:54.909 clat (usec): min=183, max=42262, avg=1663.09, stdev=7535.62 00:10:54.909 lat (usec): min=190, max=51026, avg=1680.01, stdev=7592.23 00:10:54.909 clat percentiles (usec): 00:10:54.909 | 1.00th=[ 188], 5.00th=[ 192], 10.00th=[ 194], 20.00th=[ 198], 00:10:54.909 | 30.00th=[ 202], 40.00th=[ 204], 50.00th=[ 208], 60.00th=[ 212], 00:10:54.909 | 70.00th=[ 217], 80.00th=[ 225], 90.00th=[ 375], 95.00th=[ 388], 00:10:54.909 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:10:54.909 | 99.99th=[42206] 00:10:54.909 bw ( KiB/s): min= 96, max= 7392, per=28.95%, avg=2492.80, stdev=3368.58, samples=5 00:10:54.909 iops : min= 24, max= 1848, avg=623.20, stdev=842.15, samples=5 00:10:54.909 lat (usec) : 250=83.31%, 500=13.08%, 750=0.06% 00:10:54.909 lat (msec) : 50=3.51% 00:10:54.909 cpu : usr=0.10%, sys=0.62%, ctx=1799, majf=0, minf=1 00:10:54.909 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:54.909 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:54.909 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:54.909 issued rwts: total=1797,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:54.909 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:54.909 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3313885: Wed Jul 24 18:05:47 2024 00:10:54.909 read: IOPS=24, BW=98.0KiB/s (100kB/s)(316KiB/3224msec) 00:10:54.909 slat (usec): min=11, max=2852, avg=57.98, stdev=316.61 00:10:54.909 clat (usec): min=556, max=42063, avg=40476.91, stdev=4550.98 00:10:54.909 lat (usec): min=588, max=43989, avg=40535.35, stdev=4566.39 00:10:54.909 clat percentiles (usec): 00:10:54.909 | 1.00th=[ 553], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:10:54.909 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:54.909 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:54.909 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:54.909 | 99.99th=[42206] 00:10:54.909 bw ( KiB/s): min= 93, max= 104, per=1.14%, avg=98.17, stdev= 4.67, samples=6 00:10:54.909 iops : min= 23, max= 26, avg=24.50, stdev= 1.22, samples=6 00:10:54.909 lat (usec) : 750=1.25% 00:10:54.909 lat (msec) : 50=97.50% 00:10:54.909 cpu : usr=0.12%, sys=0.00%, ctx=82, majf=0, minf=1 00:10:54.909 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:54.909 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:54.909 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:54.909 issued rwts: total=80,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:54.909 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:54.909 job2: (groupid=0, jobs=1): 
err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3313887: Wed Jul 24 18:05:47 2024 00:10:54.909 read: IOPS=489, BW=1956KiB/s (2003kB/s)(5564KiB/2844msec) 00:10:54.909 slat (usec): min=7, max=7561, avg=19.14, stdev=277.14 00:10:54.909 clat (usec): min=185, max=41261, avg=2005.35, stdev=8300.08 00:10:54.909 lat (usec): min=192, max=41283, avg=2024.50, stdev=8305.61 00:10:54.909 clat percentiles (usec): 00:10:54.909 | 1.00th=[ 194], 5.00th=[ 200], 10.00th=[ 204], 20.00th=[ 208], 00:10:54.909 | 30.00th=[ 212], 40.00th=[ 217], 50.00th=[ 223], 60.00th=[ 235], 00:10:54.909 | 70.00th=[ 249], 80.00th=[ 260], 90.00th=[ 273], 95.00th=[ 322], 00:10:54.909 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:54.909 | 99.99th=[41157] 00:10:54.909 bw ( KiB/s): min= 96, max= 648, per=2.42%, avg=208.00, stdev=245.99, samples=5 00:10:54.909 iops : min= 24, max= 162, avg=52.00, stdev=61.50, samples=5 00:10:54.909 lat (usec) : 250=71.19%, 500=24.21%, 750=0.07% 00:10:54.909 lat (msec) : 2=0.07%, 50=4.38% 00:10:54.909 cpu : usr=0.32%, sys=0.81%, ctx=1394, majf=0, minf=1 00:10:54.909 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:54.909 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:54.909 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:54.909 issued rwts: total=1392,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:54.909 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:54.909 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3313888: Wed Jul 24 18:05:47 2024 00:10:54.909 read: IOPS=1385, BW=5541KiB/s (5674kB/s)(14.3MiB/2651msec) 00:10:54.909 slat (nsec): min=6980, max=42945, avg=8257.44, stdev=2228.29 00:10:54.910 clat (usec): min=186, max=41240, avg=705.97, stdev=4386.76 00:10:54.910 lat (usec): min=194, max=41252, avg=714.23, stdev=4388.00 00:10:54.910 clat percentiles (usec): 00:10:54.910 | 1.00th=[ 194], 5.00th=[ 200], 10.00th=[ 204], 20.00th=[ 208], 00:10:54.910 | 30.00th=[ 212], 40.00th=[ 215], 50.00th=[ 219], 60.00th=[ 221], 00:10:54.910 | 70.00th=[ 225], 80.00th=[ 235], 90.00th=[ 262], 95.00th=[ 371], 00:10:54.910 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:54.910 | 99.99th=[41157] 00:10:54.910 bw ( KiB/s): min= 96, max=14264, per=60.97%, avg=5248.00, stdev=6032.85, samples=5 00:10:54.910 iops : min= 24, max= 3566, avg=1312.00, stdev=1508.21, samples=5 00:10:54.910 lat (usec) : 250=86.06%, 500=12.74% 00:10:54.910 lat (msec) : 50=1.17% 00:10:54.910 cpu : usr=0.64%, sys=2.38%, ctx=3674, majf=0, minf=2 00:10:54.910 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:54.910 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:54.910 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:54.910 issued rwts: total=3673,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:54.910 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:54.910 00:10:54.910 Run status group 0 (all jobs): 00:10:54.910 READ: bw=8608KiB/s (8815kB/s), 98.0KiB/s-5541KiB/s (100kB/s-5674kB/s), io=27.1MiB (28.4MB), run=2651-3224msec 00:10:54.910 00:10:54.910 Disk stats (read/write): 00:10:54.910 nvme0n1: ios=1791/0, merge=0/0, ticks=2778/0, in_queue=2778, util=93.62% 00:10:54.910 nvme0n2: ios=75/0, merge=0/0, ticks=3036/0, in_queue=3036, util=94.97% 00:10:54.910 nvme0n3: ios=1349/0, merge=0/0, ticks=2770/0, in_queue=2770, util=95.72% 
00:10:54.910 nvme0n4: ios=3488/0, merge=0/0, ticks=2487/0, in_queue=2487, util=96.42% 00:10:54.910 18:05:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:54.910 18:05:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:55.168 18:05:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:55.168 18:05:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:55.426 18:05:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:55.426 18:05:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:55.683 18:05:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:55.683 18:05:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:55.683 18:05:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:55.683 18:05:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 3313746 00:10:55.683 18:05:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:55.683 18:05:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:55.954 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:55.954 18:05:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:55.954 18:05:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:10:55.954 18:05:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:55.954 18:05:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:55.954 18:05:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:55.954 18:05:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:55.954 18:05:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:10:55.954 18:05:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:55.954 18:05:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:55.954 nvmf hotplug test: fio failed as expected 00:10:55.954 18:05:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:55.954 18:05:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:55.954 18:05:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:55.954 18:05:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:55.954 18:05:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:55.954 18:05:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:55.954 18:05:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:55.954 18:05:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:10:55.954 18:05:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:55.954 18:05:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:10:55.954 18:05:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:55.954 18:05:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:55.954 rmmod nvme_tcp 00:10:56.229 rmmod nvme_fabrics 00:10:56.229 rmmod nvme_keyring 00:10:56.229 18:05:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:56.229 18:05:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:10:56.229 18:05:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:10:56.229 18:05:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 3310817 ']' 00:10:56.229 18:05:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 3310817 00:10:56.229 18:05:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 3310817 ']' 00:10:56.229 18:05:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 3310817 00:10:56.229 18:05:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:10:56.229 18:05:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:56.229 18:05:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3310817 00:10:56.229 18:05:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:56.229 18:05:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:56.229 18:05:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3310817' 00:10:56.229 killing process with pid 3310817 00:10:56.229 18:05:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 3310817 00:10:56.229 18:05:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 3310817 00:10:56.488 18:05:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:56.488 18:05:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:56.488 18:05:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:56.488 18:05:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:56.488 18:05:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:56.488 18:05:49 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:56.488 18:05:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:56.488 18:05:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:58.391 18:05:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:58.391 00:10:58.391 real 0m26.352s 00:10:58.391 user 1m47.083s 00:10:58.391 sys 0m7.801s 00:10:58.391 18:05:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:58.391 18:05:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:58.392 ************************************ 00:10:58.392 END TEST nvmf_fio_target 00:10:58.392 ************************************ 00:10:58.392 18:05:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:58.392 18:05:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:58.392 18:05:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:58.392 18:05:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:58.392 ************************************ 00:10:58.392 START TEST nvmf_bdevio 00:10:58.392 ************************************ 00:10:58.392 18:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:58.650 * Looking for test storage... 
00:10:58.650 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:58.650 18:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:58.650 18:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:58.650 18:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:58.650 18:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:58.650 18:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:58.650 18:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:58.650 18:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:58.650 18:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:58.650 18:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:58.650 18:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:58.650 18:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:58.650 18:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:58.650 18:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:10:58.650 18:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:10:58.650 18:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:58.650 18:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:58.650 18:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:58.650 18:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:58.650 18:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:58.650 18:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:58.650 18:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:58.650 18:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:58.650 18:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.650 18:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.650 18:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.650 18:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:58.650 18:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:58.650 18:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:10:58.650 18:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:58.650 18:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:58.650 18:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:58.650 18:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:58.650 18:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:58.650 18:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:58.650 18:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:58.650 18:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:58.650 18:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:58.650 18:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:58.650 18:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:10:58.650 18:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:58.650 18:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini 
SIGINT SIGTERM EXIT 00:10:58.650 18:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:58.650 18:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:58.650 18:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:58.650 18:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:58.650 18:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:58.650 18:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:58.650 18:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:58.650 18:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:58.650 18:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:10:58.650 18:05:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:03.909 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:03.909 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:11:03.909 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:03.909 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:03.909 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:03.909 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:03.909 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:03.909 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:11:03.909 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:03.909 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:11:03.909 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:11:03.909 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:11:03.909 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:11:03.909 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:11:03.909 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:11:03.909 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:03.909 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:03.909 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:03.909 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:03.909 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:03.909 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:03.909 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:03.909 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:03.909 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:03.909 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:03.909 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:03.909 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:03.909 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:03.909 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:03.909 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:03.909 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:03.909 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:03.910 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:03.910 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:03.910 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:03.910 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:03.910 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:03.910 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:03.910 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:03.910 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:03.910 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:03.910 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:03.910 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:03.910 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:03.910 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:03.910 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:03.910 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:03.910 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:03.910 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:03.910 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:03.910 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:03.910 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:03.910 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:03.910 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # 
[[ tcp == tcp ]] 00:11:03.910 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:03.910 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:03.910 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:03.910 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:03.910 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:03.910 Found net devices under 0000:86:00.0: cvl_0_0 00:11:03.910 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:03.910 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:03.910 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:03.910 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:03.910 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:03.910 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:03.910 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:03.910 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:03.910 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:03.910 Found net devices under 0000:86:00.1: cvl_0_1 00:11:03.910 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:03.910 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:03.910 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:11:03.910 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:03.910 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:03.910 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:03.910 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:03.910 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:03.910 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:03.910 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:03.910 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:03.910 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:03.910 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:03.910 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:03.910 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:03.910 18:05:56 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:03.910 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:03.910 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:03.910 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:03.910 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:03.910 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:03.910 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:03.910 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:03.910 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:03.910 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:03.910 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:03.910 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:03.910 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:11:03.910 00:11:03.910 --- 10.0.0.2 ping statistics --- 00:11:03.910 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:03.910 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:11:03.910 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:03.910 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:03.910 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:11:03.910 00:11:03.910 --- 10.0.0.1 ping statistics --- 00:11:03.910 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:03.910 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:11:03.910 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:03.910 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:11:03.910 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:03.910 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:03.910 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:03.910 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:03.910 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:03.910 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:03.910 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:03.910 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:03.910 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:03.910 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:03.910 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:03.910 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=3318115 00:11:03.910 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 3318115 00:11:03.910 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 3318115 ']' 00:11:03.910 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:03.910 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:03.910 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:03.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:03.910 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:03.910 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:03.910 18:05:56 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:03.910 [2024-07-24 18:05:56.665271] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 
00:11:03.910 [2024-07-24 18:05:56.665314] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:03.910 EAL: No free 2048 kB hugepages reported on node 1 00:11:03.910 [2024-07-24 18:05:56.721935] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:03.910 [2024-07-24 18:05:56.800896] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:03.910 [2024-07-24 18:05:56.800933] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:03.910 [2024-07-24 18:05:56.800940] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:03.910 [2024-07-24 18:05:56.800945] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:03.910 [2024-07-24 18:05:56.800950] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:03.910 [2024-07-24 18:05:56.801075] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:11:03.910 [2024-07-24 18:05:56.801181] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:11:03.910 [2024-07-24 18:05:56.801286] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:03.910 [2024-07-24 18:05:56.801288] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:11:04.476 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:04.476 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:11:04.476 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:04.476 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:04.476 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:04.476 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:04.476 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:04.476 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.476 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:04.476 [2024-07-24 18:05:57.498692] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:04.476 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.476 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:04.476 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.476 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:04.476 Malloc0 00:11:04.476 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.476 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:04.476 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.476 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:04.476 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.476 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:04.476 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.476 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:04.476 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.476 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:04.476 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.476 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:04.476 [2024-07-24 18:05:57.545894] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:04.476 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.476 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:11:04.476 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:04.476 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:11:04.476 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:11:04.476 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:04.476 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:04.476 { 00:11:04.476 "params": { 00:11:04.476 "name": "Nvme$subsystem", 00:11:04.476 "trtype": "$TEST_TRANSPORT", 00:11:04.476 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:04.476 "adrfam": "ipv4", 00:11:04.476 "trsvcid": "$NVMF_PORT", 00:11:04.476 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:04.476 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:04.476 "hdgst": ${hdgst:-false}, 00:11:04.476 "ddgst": ${ddgst:-false} 00:11:04.476 }, 00:11:04.476 "method": "bdev_nvme_attach_controller" 00:11:04.476 } 00:11:04.476 EOF 00:11:04.476 )") 00:11:04.476 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:11:04.734 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 
00:11:04.734 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:11:04.734 18:05:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:04.734 "params": { 00:11:04.734 "name": "Nvme1", 00:11:04.734 "trtype": "tcp", 00:11:04.734 "traddr": "10.0.0.2", 00:11:04.734 "adrfam": "ipv4", 00:11:04.734 "trsvcid": "4420", 00:11:04.734 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:04.734 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:04.734 "hdgst": false, 00:11:04.734 "ddgst": false 00:11:04.734 }, 00:11:04.734 "method": "bdev_nvme_attach_controller" 00:11:04.734 }' 00:11:04.734 [2024-07-24 18:05:57.577741] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 00:11:04.734 [2024-07-24 18:05:57.577785] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3318187 ] 00:11:04.734 EAL: No free 2048 kB hugepages reported on node 1 00:11:04.734 [2024-07-24 18:05:57.632249] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:04.734 [2024-07-24 18:05:57.708611] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:04.734 [2024-07-24 18:05:57.708707] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:04.734 [2024-07-24 18:05:57.708709] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:04.991 I/O targets: 00:11:04.991 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:04.991 00:11:04.991 00:11:04.991 CUnit - A unit testing framework for C - Version 2.1-3 00:11:04.991 http://cunit.sourceforge.net/ 00:11:04.991 00:11:04.991 00:11:04.991 Suite: bdevio tests on: Nvme1n1 00:11:04.991 Test: blockdev write read block ...passed 00:11:05.250 Test: blockdev write zeroes read block ...passed 00:11:05.250 Test: blockdev write zeroes read no split ...passed 00:11:05.250 Test: blockdev write zeroes read split ...passed 00:11:05.250 Test: blockdev write zeroes read split partial ...passed 00:11:05.250 Test: blockdev reset ...[2024-07-24 18:05:58.145181] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:11:05.250 [2024-07-24 18:05:58.145239] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a8f6d0 (9): Bad file descriptor 00:11:05.250 [2024-07-24 18:05:58.157791] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:11:05.250 passed 00:11:05.250 Test: blockdev write read 8 blocks ...passed 00:11:05.250 Test: blockdev write read size > 128k ...passed 00:11:05.250 Test: blockdev write read invalid size ...passed 00:11:05.250 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:05.250 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:05.250 Test: blockdev write read max offset ...passed 00:11:05.251 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:05.561 Test: blockdev writev readv 8 blocks ...passed 00:11:05.561 Test: blockdev writev readv 30 x 1block ...passed 00:11:05.561 Test: blockdev writev readv block ...passed 00:11:05.561 Test: blockdev writev readv size > 128k ...passed 00:11:05.561 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:05.561 Test: blockdev comparev and writev ...[2024-07-24 18:05:58.413203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:05.561 [2024-07-24 18:05:58.413229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:05.561 [2024-07-24 18:05:58.413242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:05.561 [2024-07-24 18:05:58.413249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:05.561 [2024-07-24 18:05:58.413504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:05.561 [2024-07-24 18:05:58.413514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:05.561 [2024-07-24 18:05:58.413526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:05.561 [2024-07-24 18:05:58.413533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:05.561 [2024-07-24 18:05:58.413774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:05.561 [2024-07-24 18:05:58.413784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:05.561 [2024-07-24 18:05:58.413795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:05.561 [2024-07-24 18:05:58.413801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:05.561 [2024-07-24 18:05:58.414052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:05.561 [2024-07-24 18:05:58.414062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:05.561 [2024-07-24 18:05:58.414073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:05.561 [2024-07-24 18:05:58.414080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:05.561 passed 00:11:05.561 Test: blockdev nvme passthru rw ...passed 00:11:05.561 Test: blockdev nvme passthru vendor specific ...[2024-07-24 18:05:58.496854] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:05.561 [2024-07-24 18:05:58.496870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:05.561 [2024-07-24 18:05:58.496989] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:05.561 [2024-07-24 18:05:58.496998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:05.561 [2024-07-24 18:05:58.497110] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:05.561 [2024-07-24 18:05:58.497119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:05.561 [2024-07-24 18:05:58.497225] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:05.561 [2024-07-24 18:05:58.497234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:05.561 passed 00:11:05.561 Test: blockdev nvme admin passthru ...passed 00:11:05.561 Test: blockdev copy ...passed 00:11:05.561 00:11:05.561 Run Summary: Type Total Ran Passed Failed Inactive 00:11:05.561 suites 1 1 n/a 0 0 00:11:05.561 tests 23 23 23 0 0 00:11:05.561 asserts 152 152 152 0 n/a 00:11:05.561 00:11:05.561 Elapsed time = 1.141 seconds 00:11:05.827 18:05:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:05.827 18:05:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.827 18:05:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:05.827 18:05:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.827 18:05:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:05.827 18:05:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:05.827 18:05:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:05.827 18:05:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:11:05.827 18:05:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:05.827 18:05:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:11:05.827 18:05:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:05.827 18:05:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:05.827 rmmod nvme_tcp 00:11:05.827 rmmod nvme_fabrics 00:11:05.827 rmmod nvme_keyring 00:11:05.827 18:05:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:05.827 18:05:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:11:05.827 18:05:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 
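For reference, the target-side state exercised by this bdevio pass can be reproduced by hand. A minimal sketch using scripts/rpc.py, assuming a target started with the same 0x78 core mask and the default RPC socket /var/tmp/spdk.sock (neither of which is spelled out in this log), mirrors the rpc_cmd calls traced above at target/bdevio.sh lines 18-22:

  # start the target, then replay the RPC sequence from bdevio.sh
  ./build/bin/nvmf_tgt -m 0x78 &
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevio itself then attaches as an initiator: the --json config printed above tells it to run bdev_nvme_attach_controller against that 10.0.0.2:4420 listener and treat the resulting Nvme1n1 bdev as the device under test.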
00:11:05.827 18:05:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 3318115 ']' 00:11:05.827 18:05:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 3318115 00:11:05.827 18:05:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 3318115 ']' 00:11:05.827 18:05:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 3318115 00:11:05.827 18:05:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:11:05.827 18:05:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:05.827 18:05:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3318115 00:11:05.827 18:05:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:11:05.827 18:05:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:11:05.827 18:05:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3318115' 00:11:05.827 killing process with pid 3318115 00:11:05.827 18:05:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 3318115 00:11:05.827 18:05:58 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 3318115 00:11:06.084 18:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:06.084 18:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:06.084 18:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:06.084 18:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:06.084 18:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:06.084 18:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:06.084 18:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:06.084 18:05:59 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:08.614 18:06:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:08.614 00:11:08.614 real 0m9.650s 00:11:08.614 user 0m12.588s 00:11:08.614 sys 0m4.333s 00:11:08.614 18:06:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:08.614 18:06:01 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:08.614 ************************************ 00:11:08.614 END TEST nvmf_bdevio 00:11:08.614 ************************************ 00:11:08.614 18:06:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:08.614 00:11:08.614 real 4m30.610s 00:11:08.614 user 10m34.843s 00:11:08.614 sys 1m30.421s 00:11:08.614 18:06:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:08.614 18:06:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:08.614 ************************************ 00:11:08.614 END TEST nvmf_target_core 00:11:08.614 ************************************ 00:11:08.614 18:06:01 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:08.614 18:06:01 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:08.614 18:06:01 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:08.614 18:06:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:08.614 ************************************ 00:11:08.614 START TEST nvmf_target_extra 00:11:08.614 ************************************ 00:11:08.614 18:06:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:08.614 * Looking for test storage... 00:11:08.614 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:11:08.614 18:06:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:08.614 18:06:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:08.614 18:06:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:08.614 18:06:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:08.614 18:06:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:08.614 18:06:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:08.614 18:06:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:08.614 18:06:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:08.614 18:06:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:08.614 18:06:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:08.614 18:06:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:08.614 18:06:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:08.614 18:06:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:11:08.614 18:06:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:11:08.614 18:06:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:08.614 18:06:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:08.614 18:06:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:08.614 18:06:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:08.614 18:06:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:08.614 18:06:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:08.614 18:06:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:08.614 18:06:01 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:08.614 18:06:01 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.614 18:06:01 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.614 18:06:01 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.614 18:06:01 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:08.614 18:06:01 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.614 18:06:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@47 -- # : 0 00:11:08.614 18:06:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:08.614 18:06:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:08.614 18:06:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:08.614 18:06:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:08.614 18:06:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:08.614 18:06:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:08.614 18:06:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:08.614 18:06:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:08.614 18:06:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:08.614 18:06:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:08.614 18:06:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:11:08.614 18:06:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 
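The example test that starts here brings up build/examples/nvmf (launched below as nvmfexamplestart with -m 0xF inside the cvl_0_0_ns_spdk namespace) and then drives it with spdk_nvme_perf. The workload it runs further down, reproduced standalone under the assumption that the same 10.0.0.2:4420 listener is up, is:

  # queue depth 64, 4 KiB I/Os, random mixed R/W at a 30% read split, 10 second run
  ./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

Against the Malloc-backed namespace this job reports roughly 18.4K IOPS (71.7 MiB/s) at a 3.5 ms mean latency; see the latency table after the perf invocation below.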
00:11:08.614 18:06:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:08.614 18:06:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:08.614 18:06:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:08.614 ************************************ 00:11:08.614 START TEST nvmf_example 00:11:08.614 ************************************ 00:11:08.614 18:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:08.614 * Looking for test storage... 00:11:08.614 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:08.614 18:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:08.614 18:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:11:08.614 18:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:08.614 18:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:08.614 18:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:08.614 18:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:08.614 18:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:08.614 18:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:08.614 18:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:08.614 18:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:08.614 18:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:08.614 18:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:08.614 18:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:11:08.614 18:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:11:08.614 18:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:08.614 18:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:08.614 18:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:08.615 18:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:08.615 18:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:08.615 18:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:08.615 18:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:08.615 18:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:08.615 18:06:01 
nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.615 18:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.615 18:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.615 18:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:11:08.615 18:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.615 18:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:11:08.615 18:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:08.615 18:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:08.615 18:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:08.615 18:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:08.615 18:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:08.615 18:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:11:08.615 18:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:08.615 18:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:08.615 18:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:11:08.615 18:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:11:08.615 18:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:11:08.615 18:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:11:08.615 18:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:11:08.615 18:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:11:08.615 18:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:11:08.615 18:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:11:08.615 18:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:08.615 18:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:08.615 18:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:11:08.615 18:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:08.615 18:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:08.615 18:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:08.615 18:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:08.615 18:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:08.615 18:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:08.615 18:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:08.615 18:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:08.615 18:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:08.615 18:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:08.615 18:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:11:08.615 18:06:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:13.876 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:13.876 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:11:13.876 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:13.876 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:13.876 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:13.876 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 
-- # pci_drivers=() 00:11:13.876 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:13.876 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:11:13.876 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:13.876 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:11:13.876 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:11:13.876 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:11:13.876 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:11:13.876 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:11:13.876 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:11:13.876 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:13.876 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:13.876 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:13.876 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:13.876 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:13.876 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:13.876 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:13.876 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:13.876 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:13.876 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:13.876 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:13.876 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:13.876 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:13.876 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:13.876 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:13.876 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:13.876 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:13.876 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:13.876 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:13.876 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:13.876 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:13.876 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice 
== unbound ]] 00:11:13.876 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:13.876 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:13.876 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:13.876 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:13.876 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:13.876 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:13.876 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:13.876 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:13.876 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:13.876 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:13.876 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:13.876 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:13.876 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:13.876 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:13.876 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:13.876 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:13.876 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:13.876 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:13.876 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:13.876 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:13.877 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:13.877 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:13.877 Found net devices under 0000:86:00.0: cvl_0_0 00:11:13.877 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:13.877 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:13.877 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:13.877 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:13.877 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:13.877 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:13.877 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:13.877 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:13.877 18:06:06 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:13.877 Found net devices under 0000:86:00.1: cvl_0_1 00:11:13.877 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:13.877 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:13.877 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:11:13.877 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:13.877 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:13.877 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:13.877 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:13.877 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:13.877 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:13.877 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:13.877 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:13.877 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:13.877 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:13.877 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:13.877 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:13.877 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:13.877 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:13.877 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:13.877 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:13.877 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:13.877 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:13.877 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:13.877 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:13.877 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:13.877 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:13.877 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:13.877 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:13.877 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:11:13.877 00:11:13.877 --- 10.0.0.2 ping statistics --- 00:11:13.877 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:13.877 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:11:13.877 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:13.877 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:13.877 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:11:13.877 00:11:13.877 --- 10.0.0.1 ping statistics --- 00:11:13.877 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:13.877 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:11:13.877 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:13.877 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:11:13.877 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:13.877 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:13.877 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:13.877 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:13.877 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:13.877 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:13.877 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:14.134 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:11:14.134 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:11:14.134 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:14.134 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:14.134 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:11:14.134 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:11:14.134 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3322072 00:11:14.134 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:11:14.134 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:14.134 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3322072 00:11:14.134 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 3322072 ']' 00:11:14.134 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:14.134 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:14.134 18:06:06 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:14.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:14.134 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:14.134 18:06:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:14.134 EAL: No free 2048 kB hugepages reported on node 1 00:11:15.063 18:06:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:15.063 18:06:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:11:15.063 18:06:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:11:15.063 18:06:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:15.063 18:06:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:15.063 18:06:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:15.063 18:06:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.063 18:06:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:15.063 18:06:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.063 18:06:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:11:15.063 18:06:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.063 18:06:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:15.063 18:06:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.063 18:06:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:11:15.063 18:06:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:15.063 18:06:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.063 18:06:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:15.063 18:06:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.063 18:06:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:11:15.063 18:06:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:15.063 18:06:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.063 18:06:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:15.063 18:06:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.063 18:06:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:15.063 18:06:07 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.063 18:06:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:15.063 18:06:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.063 18:06:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:11:15.063 18:06:07 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:15.063 EAL: No free 2048 kB hugepages reported on node 1 00:11:25.024 Initializing NVMe Controllers 00:11:25.024 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:25.024 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:25.024 Initialization complete. Launching workers. 00:11:25.024 ======================================================== 00:11:25.024 Latency(us) 00:11:25.024 Device Information : IOPS MiB/s Average min max 00:11:25.024 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18356.20 71.70 3486.33 675.47 16427.95 00:11:25.024 ======================================================== 00:11:25.024 Total : 18356.20 71.70 3486.33 675.47 16427.95 00:11:25.024 00:11:25.282 18:06:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:11:25.282 18:06:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:11:25.282 18:06:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:25.282 18:06:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # sync 00:11:25.282 18:06:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:25.282 18:06:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:11:25.282 18:06:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:25.282 18:06:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:25.282 rmmod nvme_tcp 00:11:25.282 rmmod nvme_fabrics 00:11:25.282 rmmod nvme_keyring 00:11:25.282 18:06:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:25.282 18:06:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:11:25.282 18:06:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:11:25.282 18:06:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 3322072 ']' 00:11:25.282 18:06:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # killprocess 3322072 00:11:25.282 18:06:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 3322072 ']' 00:11:25.282 18:06:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 3322072 00:11:25.282 18:06:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname 00:11:25.282 18:06:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:25.282 18:06:18 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3322072 00:11:25.282 18:06:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # process_name=nvmf 00:11:25.282 18:06:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']' 00:11:25.282 18:06:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3322072' 00:11:25.282 killing process with pid 3322072 00:11:25.282 18:06:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 3322072 00:11:25.282 18:06:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 3322072 00:11:25.540 nvmf threads initialize successfully 00:11:25.540 bdev subsystem init successfully 00:11:25.540 created a nvmf target service 00:11:25.540 create targets's poll groups done 00:11:25.540 all subsystems of target started 00:11:25.540 nvmf target is running 00:11:25.540 all subsystems of target stopped 00:11:25.540 destroy targets's poll groups done 00:11:25.540 destroyed the nvmf target service 00:11:25.540 bdev subsystem finish successfully 00:11:25.540 nvmf threads destroy successfully 00:11:25.540 18:06:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:25.540 18:06:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:25.540 18:06:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:25.540 18:06:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:25.540 18:06:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:25.540 18:06:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:25.540 18:06:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:25.540 18:06:18 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:27.441 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:27.441 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:11:27.441 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:27.441 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:27.441 00:11:27.441 real 0m19.157s 00:11:27.441 user 0m45.583s 00:11:27.441 sys 0m5.544s 00:11:27.442 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:27.442 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:27.442 ************************************ 00:11:27.442 END TEST nvmf_example 00:11:27.442 ************************************ 00:11:27.702 18:06:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:27.702 18:06:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:27.702 18:06:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:27.702 18:06:20 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:27.702 ************************************ 00:11:27.702 START TEST nvmf_filesystem 00:11:27.702 ************************************ 00:11:27.702 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:27.702 * Looking for test storage... 00:11:27.702 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:27.702 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:11:27.702 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:27.702 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:27.702 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:27.702 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:27.702 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:27.702 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:11:27.703 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:27.703 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:11:27.703 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:27.703 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:11:27.703 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:27.703 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:27.703 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:27.703 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:27.703 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:27.703 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:27.703 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:27.703 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:27.703 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:27.703 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:27.703 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:27.703 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:27.703 18:06:20 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:27.703 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:27.703 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:11:27.703 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:27.703 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:27.703 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:11:27.703 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:11:27.703 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:11:27.703 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:27.703 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:11:27.703 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:11:27.703 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:27.703 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:27.703 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:11:27.703 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:11:27.703 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:11:27.703 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:11:27.703 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:11:27.703 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:11:27.703 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:11:27.703 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:11:27.703 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:27.703 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:11:27.703 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:11:27.703 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:11:27.703 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:11:27.703 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:11:27.703 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:11:27.703 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:11:27.703 18:06:20 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:27.703 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:11:27.703 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:11:27.703 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:11:27.703 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:11:27.703 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:27.703 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:11:27.703 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:11:27.703 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:11:27.703 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:11:27.703 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:11:27.703 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:11:27.703 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:11:27.703 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:11:27.703 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:11:27.703 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:11:27.703 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:11:27.703 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:11:27.703 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:11:27.703 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:11:27.703 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:11:27.703 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:11:27.703 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:11:27.703 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:11:27.703 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:11:27.703 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:27.703 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:11:27.703 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:11:27.703 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:11:27.703 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:11:27.703 18:06:20 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:11:27.703 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:11:27.703 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:11:27.703 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:11:27.703 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:11:27.703 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:11:27.703 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:11:27.703 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:27.703 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:11:27.703 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:11:27.703 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:27.703 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:27.703 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:27.703 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:27.703 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:27.703 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:27.703 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:11:27.703 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:27.703 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:27.703 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:27.703 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:27.703 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:27.703 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:27.703 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:27.703 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:11:27.703 18:06:20 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:27.703 #define SPDK_CONFIG_H 00:11:27.703 #define SPDK_CONFIG_APPS 1 00:11:27.703 #define SPDK_CONFIG_ARCH native 00:11:27.704 #undef SPDK_CONFIG_ASAN 00:11:27.704 #undef SPDK_CONFIG_AVAHI 00:11:27.704 #undef SPDK_CONFIG_CET 00:11:27.704 #define SPDK_CONFIG_COVERAGE 1 00:11:27.704 #define SPDK_CONFIG_CROSS_PREFIX 00:11:27.704 #undef SPDK_CONFIG_CRYPTO 00:11:27.704 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:27.704 #undef SPDK_CONFIG_CUSTOMOCF 00:11:27.704 #undef SPDK_CONFIG_DAOS 00:11:27.704 #define SPDK_CONFIG_DAOS_DIR 00:11:27.704 #define SPDK_CONFIG_DEBUG 1 00:11:27.704 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:27.704 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:27.704 #define SPDK_CONFIG_DPDK_INC_DIR 00:11:27.704 #define SPDK_CONFIG_DPDK_LIB_DIR 00:11:27.704 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:27.704 #undef SPDK_CONFIG_DPDK_UADK 00:11:27.704 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:27.704 #define SPDK_CONFIG_EXAMPLES 1 00:11:27.704 #undef SPDK_CONFIG_FC 00:11:27.704 #define SPDK_CONFIG_FC_PATH 00:11:27.704 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:27.704 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:27.704 #undef SPDK_CONFIG_FUSE 00:11:27.704 #undef SPDK_CONFIG_FUZZER 00:11:27.704 #define SPDK_CONFIG_FUZZER_LIB 00:11:27.704 #undef SPDK_CONFIG_GOLANG 00:11:27.704 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:27.704 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:27.704 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:27.704 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:27.704 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:27.704 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:27.704 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:27.704 #define SPDK_CONFIG_IDXD 1 00:11:27.704 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:27.704 #undef SPDK_CONFIG_IPSEC_MB 00:11:27.704 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:27.704 #define SPDK_CONFIG_ISAL 1 00:11:27.704 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:27.704 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:27.704 #define SPDK_CONFIG_LIBDIR 00:11:27.704 #undef SPDK_CONFIG_LTO 00:11:27.704 #define SPDK_CONFIG_MAX_LCORES 128 00:11:27.704 #define SPDK_CONFIG_NVME_CUSE 1 00:11:27.704 #undef SPDK_CONFIG_OCF 00:11:27.704 #define SPDK_CONFIG_OCF_PATH 00:11:27.704 #define SPDK_CONFIG_OPENSSL_PATH 00:11:27.704 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:27.704 #define SPDK_CONFIG_PGO_DIR 00:11:27.704 #undef SPDK_CONFIG_PGO_USE 00:11:27.704 #define SPDK_CONFIG_PREFIX /usr/local 00:11:27.704 #undef SPDK_CONFIG_RAID5F 00:11:27.704 #undef SPDK_CONFIG_RBD 00:11:27.704 #define SPDK_CONFIG_RDMA 1 00:11:27.704 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:27.704 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:27.704 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:27.704 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:27.704 #define SPDK_CONFIG_SHARED 1 00:11:27.704 #undef SPDK_CONFIG_SMA 00:11:27.704 #define SPDK_CONFIG_TESTS 1 00:11:27.704 #undef SPDK_CONFIG_TSAN 00:11:27.704 #define SPDK_CONFIG_UBLK 1 00:11:27.704 #define SPDK_CONFIG_UBSAN 1 00:11:27.704 #undef SPDK_CONFIG_UNIT_TESTS 00:11:27.704 #undef SPDK_CONFIG_URING 00:11:27.704 #define SPDK_CONFIG_URING_PATH 00:11:27.704 #undef SPDK_CONFIG_URING_ZNS 00:11:27.704 #undef SPDK_CONFIG_USDT 00:11:27.704 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:27.704 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:27.704 #define SPDK_CONFIG_VFIO_USER 1 00:11:27.704 #define 
SPDK_CONFIG_VFIO_USER_DIR 00:11:27.704 #define SPDK_CONFIG_VHOST 1 00:11:27.704 #define SPDK_CONFIG_VIRTIO 1 00:11:27.704 #undef SPDK_CONFIG_VTUNE 00:11:27.704 #define SPDK_CONFIG_VTUNE_DIR 00:11:27.704 #define SPDK_CONFIG_WERROR 1 00:11:27.704 #define SPDK_CONFIG_WPDK_DIR 00:11:27.704 #undef SPDK_CONFIG_XNVME 00:11:27.704 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:27.704 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:27.704 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:27.704 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:27.704 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:27.704 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:27.704 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.704 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.704 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.704 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:27.704 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.704 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:27.704 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:27.704 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:27.704 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:27.704 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:27.704 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:27.704 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:27.704 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:11:27.704 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:11:27.704 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:27.704 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:27.704 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:27.704 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:27.704 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:27.704 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:27.704 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:27.704 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:27.704 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:27.704 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:27.704 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:27.704 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:27.704 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:27.704 18:06:20 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:27.704 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:11:27.704 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:27.704 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:27.704 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:11:27.704 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:11:27.704 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:27.704 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:27.704 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:27.704 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:27.704 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:27.704 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:27.704 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:27.704 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:27.704 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:27.704 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:27.705 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:27.705 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:27.705 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:27.705 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:27.705 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:27.705 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:27.705 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:11:27.705 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:27.705 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:27.705 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:27.705 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:27.705 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:27.705 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:27.705 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 
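
The long run of ": 0" / "export SPDK_TEST_*" pairs through this stretch of the trace is the harness defaulting each test flag before exporting it. A minimal sketch of the idiom, assuming the usual ${VAR:=default} form (the exact expansion written in autotest_common.sh is not quoted here):

    # ${VAR:=0} assigns 0 only when VAR is unset or empty; the ":" no-op
    # builtin discards the expansion, so xtrace records it as ": 0".
    : "${SPDK_TEST_NVME_BP:=0}"
    export SPDK_TEST_NVME_BP
    # Flags already set by autorun-spdk.conf keep their value, which is
    # why some pairs trace as ": 1" (e.g. SPDK_TEST_NVME_CLI=1 above).
    : "${SPDK_TEST_NVME_CLI:=0}"
    export SPDK_TEST_NVME_CLI
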
00:11:27.705 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:27.705 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:27.705 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:27.705 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:27.705 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:27.705 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:27.705 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:27.705 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:27.705 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:27.705 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:11:27.705 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:27.705 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:27.705 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:27.705 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:27.705 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:27.705 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:27.705 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:27.705 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:11:27.705 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:27.705 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:27.705 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:27.705 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:27.705 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:27.705 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:27.705 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:27.705 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:27.705 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:11:27.705 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:27.705 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:11:27.705 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:27.705 18:06:20 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:11:27.705 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:27.705 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:11:27.705 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:27.705 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:27.705 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:27.705 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:11:27.705 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:11:27.705 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:11:27.705 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:11:27.705 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:27.705 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:11:27.705 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:11:27.705 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:27.705 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:11:27.705 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:27.705 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:11:27.705 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:27.705 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:11:27.705 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:27.705 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:11:27.705 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:27.705 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:11:27.705 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:11:27.705 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:11:27.705 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:11:27.705 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:11:27.705 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:11:27.705 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:11:27.705 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:27.705 18:06:20 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:27.705 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:27.705 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:27.705 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:27.705 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:27.705 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:27.705 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:27.705 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:27.705 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:27.705 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:11:27.705 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:27.705 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:27.705 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:27.705 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:27.705 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:27.705 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:27.705 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:27.705 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:27.705 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:27.705 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:27.705 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:27.705 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:27.705 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:27.705 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:11:27.705 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:27.705 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:27.705 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:27.705 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:11:27.705 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:27.705 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # export 
SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:27.705 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:27.706 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:27.706 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:27.706 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:27.706 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:27.706 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:27.706 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:27.706 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:27.706 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- common/autotest_common.sh@183 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:27.706 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:27.706 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:27.706 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:27.706 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONDONTWRITEBYTECODE=1 00:11:27.706 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:27.706 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:27.706 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@196 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:27.706 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@196 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:27.706 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:27.706 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@201 -- # rm -rf /var/tmp/asan_suppression_file 00:11:27.706 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # cat 00:11:27.706 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@238 -- # echo leak:libfuse3.so 00:11:27.706 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:27.706 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:27.706 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:27.706 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@242 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:27.706 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # '[' -z /var/spdk/dependencies ']' 00:11:27.706 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@247 -- # export DEPENDENCY_DIR 00:11:27.706 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:27.706 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:27.706 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@252 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:27.706 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@252 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:27.706 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:27.706 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:27.706 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:27.706 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:27.706 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:27.706 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:27.706 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@261 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:27.706 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@261 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:27.706 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@264 -- # '[' 0 -eq 0 ']' 00:11:27.706 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export valgrind= 00:11:27.706 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # valgrind= 00:11:27.706 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # uname -s 00:11:27.706 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # '[' Linux = Linux ']' 00:11:27.706 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # HUGEMEM=4096 00:11:27.706 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # export CLEAR_HUGE=yes 00:11:27.706 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # CLEAR_HUGE=yes 00:11:27.706 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:11:27.706 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:11:27.706 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@281 -- # MAKE=make 00:11:27.706 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@282 -- # MAKEFLAGS=-j96 00:11:27.706 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # export HUGEMEM=4096 00:11:27.706 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # HUGEMEM=4096 00:11:27.706 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@300 -- # NO_HUGE=() 00:11:27.706 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@301 -- # TEST_MODE= 00:11:27.706 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@302 -- # for i in "$@" 00:11:27.706 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@303 -- # case "$i" in 00:11:27.706 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # TEST_TRANSPORT=tcp 00:11:27.706 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@320 -- # [[ -z 3324862 ]] 00:11:27.706 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@320 -- # kill -0 3324862 00:11:27.706 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:11:27.706 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@330 -- # [[ -v testdir ]] 00:11:27.706 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@332 -- # local requested_size=2147483648 00:11:27.706 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@333 -- # local mount target_dir 00:11:27.706 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@335 -- # local -A mounts fss sizes avails uses 00:11:27.706 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@336 -- # local source fs size avail mount use 00:11:27.706 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # local storage_fallback storage_candidates 00:11:27.706 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # mktemp -udt spdk.XXXXXX 00:11:27.706 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # storage_fallback=/tmp/spdk.L25FU5 00:11:27.706 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:27.706 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # [[ -n '' ]] 00:11:27.706 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@352 -- # [[ -n '' ]] 00:11:27.706 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@357 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.L25FU5/tests/target /tmp/spdk.L25FU5 00:11:27.706 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # requested_size=2214592512 00:11:27.707 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:27.707 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@329 -- # df -T 00:11:27.707 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # grep -v Filesystem 00:11:27.707 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_devtmpfs 00:11:27.707 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=devtmpfs 00:11:27.707 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=67108864 00:11:27.707 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=67108864 00:11:27.707 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=0 00:11:27.966 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:27.966 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/pmem0 00:11:27.966 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=ext2 00:11:27.966 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=1050284032 00:11:27.966 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=5284429824 00:11:27.966 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=4234145792 00:11:27.966 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:27.966 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_root 00:11:27.966 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=overlay 00:11:27.966 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=185084772352 00:11:27.966 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=195974307840 00:11:27.966 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=10889535488 00:11:27.966 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:27.966 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:11:27.966 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:11:27.966 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=97924972544 00:11:27.966 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=97987153920 00:11:27.966 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=62181376 00:11:27.966 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:27.966 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:11:27.966 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # 
fss["$mount"]=tmpfs 00:11:27.966 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=39171833856 00:11:27.967 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=39194861568 00:11:27.967 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=23027712 00:11:27.967 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:27.967 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:11:27.967 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:11:27.967 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=97984487424 00:11:27.967 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=97987153920 00:11:27.967 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=2666496 00:11:27.967 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:27.967 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:11:27.967 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:11:27.967 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=19597426688 00:11:27.967 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=19597430784 00:11:27.967 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=4096 00:11:27.967 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:27.967 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # printf '* Looking for test storage...\n' 00:11:27.967 * Looking for test storage... 
00:11:27.967 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@370 -- # local target_space new_size 00:11:27.967 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # for target_dir in "${storage_candidates[@]}" 00:11:27.967 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:27.967 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:27.967 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mount=/ 00:11:27.967 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # target_space=185084772352 00:11:27.967 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # (( target_space == 0 || target_space < requested_size )) 00:11:27.967 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # (( target_space >= requested_size )) 00:11:27.967 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ overlay == tmpfs ]] 00:11:27.967 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ overlay == ramfs ]] 00:11:27.967 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ / == / ]] 00:11:27.967 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # new_size=13104128000 00:11:27.967 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@384 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:27.967 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:27.967 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:27.967 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@390 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:27.967 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:27.967 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # return 0 00:11:27.967 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:11:27.967 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:11:27.967 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:27.967 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:27.967 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:11:27.967 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:11:27.967 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:27.967 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:27.967 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:27.967 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:27.967 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:27.967 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:11:27.967 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:27.967 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:27.967 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:27.967 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:11:27.967 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:27.967 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:27.967 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:27.967 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:27.967 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:27.967 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:27.967 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:27.967 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:27.967 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:27.967 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:27.967 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:11:27.967 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:11:27.967 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:27.967 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:27.967 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:27.967 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:27.967 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:27.967 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:27.967 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:27.967 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:11:27.968 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.968 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.968 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.968 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:27.968 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:27.968 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:11:27.968 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:27.968 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:27.968 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 
']' 00:11:27.968 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:27.968 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:27.968 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:27.968 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:27.968 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:27.968 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:11:27.968 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:27.968 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:27.968 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:27.968 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:27.968 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:27.968 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:27.968 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:27.968 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:27.968 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:27.968 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:27.968 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:27.968 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:27.968 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:11:27.968 18:06:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:33.231 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:33.231 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:11:33.231 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:33.231 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:33.231 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:33.231 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:33.231 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:33.231 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:11:33.231 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:33.231 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:11:33.231 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:11:33.231 
18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:11:33.231 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:11:33.231 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:11:33.231 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:11:33.231 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:33.231 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:33.231 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:33.231 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:33.231 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:33.231 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:33.231 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:33.231 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:33.231 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:33.231 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:33.231 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:33.231 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:33.231 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:33.231 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:33.231 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:33.231 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:33.231 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:33.231 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:33.231 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:33.231 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:33.231 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:33.231 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:33.232 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:33.232 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:33.232 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:33.232 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in 
"${pci_devs[@]}" 00:11:33.232 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:33.232 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:33.232 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:33.232 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:33.232 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:33.232 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:33.232 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:33.232 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:33.232 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:33.232 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:33.232 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:33.232 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:33.232 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:33.232 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:33.232 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:33.232 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:33.232 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:33.232 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:33.232 Found net devices under 0000:86:00.0: cvl_0_0 00:11:33.232 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:33.232 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:33.232 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:33.232 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:33.232 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:33.232 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:33.232 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:33.232 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:33.232 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:33.232 Found net devices under 0000:86:00.1: cvl_0_1 00:11:33.232 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:33.232 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
00:11:33.232 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:11:33.232 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:33.232 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:33.232 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:33.232 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:33.232 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:33.232 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:33.232 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:33.232 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:33.232 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:33.232 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:33.232 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:33.232 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:33.232 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:33.232 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:33.232 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:33.232 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:33.232 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:33.232 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:33.232 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:33.232 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:33.232 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:33.232 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:33.232 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:33.232 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:33.232 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.250 ms 00:11:33.232 00:11:33.232 --- 10.0.0.2 ping statistics --- 00:11:33.232 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:33.232 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:11:33.232 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:33.232 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:33.232 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:11:33.232 00:11:33.232 --- 10.0.0.1 ping statistics --- 00:11:33.232 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:33.232 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:11:33.232 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:33.232 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:11:33.232 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:33.232 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:33.232 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:33.232 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:33.232 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:33.232 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:33.232 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:33.491 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:33.491 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:33.491 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:33.491 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:33.491 ************************************ 00:11:33.491 START TEST nvmf_filesystem_no_in_capsule 00:11:33.491 ************************************ 00:11:33.491 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0 00:11:33.491 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:33.491 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:33.491 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:33.491 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:33.491 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:33.491 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=3327874 00:11:33.491 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 3327874 00:11:33.491 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:33.491 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 3327874 ']' 00:11:33.491 
18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:33.491 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:33.491 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:33.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:33.491 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:33.491 18:06:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:33.491 [2024-07-24 18:06:26.435590] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 00:11:33.491 [2024-07-24 18:06:26.435636] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:33.491 EAL: No free 2048 kB hugepages reported on node 1 00:11:33.491 [2024-07-24 18:06:26.494741] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:33.749 [2024-07-24 18:06:26.575519] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:33.749 [2024-07-24 18:06:26.575553] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:33.749 [2024-07-24 18:06:26.575560] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:33.749 [2024-07-24 18:06:26.575566] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:33.749 [2024-07-24 18:06:26.575572] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
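[Editor's note] The app_setup_trace notices above advertise how to inspect the target's tracepoints while this test runs. The commands below are quoted from those notices and would be run on the test node itself; the /tmp destination is just an illustrative choice:

  spdk_trace -s nvmf -i 0          # capture a snapshot of events at runtime
  cp /dev/shm/nvmf_trace.0 /tmp/   # or keep the shm file for offline analysis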
00:11:33.749 [2024-07-24 18:06:26.575607] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:33.749 [2024-07-24 18:06:26.575727] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:33.749 [2024-07-24 18:06:26.575791] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:33.749 [2024-07-24 18:06:26.575792] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:34.316 18:06:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:34.316 18:06:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:11:34.316 18:06:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:34.316 18:06:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:34.316 18:06:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:34.316 18:06:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:34.316 18:06:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:34.316 18:06:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:34.316 18:06:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.316 18:06:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:34.316 [2024-07-24 18:06:27.282933] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:34.316 18:06:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.316 18:06:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:34.316 18:06:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.316 18:06:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:34.574 Malloc1 00:11:34.574 18:06:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.574 18:06:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:34.574 18:06:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.574 18:06:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:34.574 18:06:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.574 18:06:27 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:34.574 18:06:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.574 18:06:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:34.574 18:06:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.575 18:06:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:34.575 18:06:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.575 18:06:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:34.575 [2024-07-24 18:06:27.427355] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:34.575 18:06:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.575 18:06:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:34.575 18:06:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:11:34.575 18:06:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:11:34.575 18:06:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:11:34.575 18:06:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:11:34.575 18:06:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:34.575 18:06:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.575 18:06:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:34.575 18:06:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.575 18:06:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:11:34.575 { 00:11:34.575 "name": "Malloc1", 00:11:34.575 "aliases": [ 00:11:34.575 "f6cc4dee-7153-41a1-af52-0b4d2dd1dde5" 00:11:34.575 ], 00:11:34.575 "product_name": "Malloc disk", 00:11:34.575 "block_size": 512, 00:11:34.575 "num_blocks": 1048576, 00:11:34.575 "uuid": "f6cc4dee-7153-41a1-af52-0b4d2dd1dde5", 00:11:34.575 "assigned_rate_limits": { 00:11:34.575 "rw_ios_per_sec": 0, 00:11:34.575 "rw_mbytes_per_sec": 0, 00:11:34.575 "r_mbytes_per_sec": 0, 00:11:34.575 "w_mbytes_per_sec": 0 00:11:34.575 }, 00:11:34.575 "claimed": true, 00:11:34.575 "claim_type": "exclusive_write", 00:11:34.575 "zoned": false, 00:11:34.575 "supported_io_types": { 00:11:34.575 "read": 
true, 00:11:34.575 "write": true, 00:11:34.575 "unmap": true, 00:11:34.575 "flush": true, 00:11:34.575 "reset": true, 00:11:34.575 "nvme_admin": false, 00:11:34.575 "nvme_io": false, 00:11:34.575 "nvme_io_md": false, 00:11:34.575 "write_zeroes": true, 00:11:34.575 "zcopy": true, 00:11:34.575 "get_zone_info": false, 00:11:34.575 "zone_management": false, 00:11:34.575 "zone_append": false, 00:11:34.575 "compare": false, 00:11:34.575 "compare_and_write": false, 00:11:34.575 "abort": true, 00:11:34.575 "seek_hole": false, 00:11:34.575 "seek_data": false, 00:11:34.575 "copy": true, 00:11:34.575 "nvme_iov_md": false 00:11:34.575 }, 00:11:34.575 "memory_domains": [ 00:11:34.575 { 00:11:34.575 "dma_device_id": "system", 00:11:34.575 "dma_device_type": 1 00:11:34.575 }, 00:11:34.575 { 00:11:34.575 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.575 "dma_device_type": 2 00:11:34.575 } 00:11:34.575 ], 00:11:34.575 "driver_specific": {} 00:11:34.575 } 00:11:34.575 ]' 00:11:34.575 18:06:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:11:34.575 18:06:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:11:34.575 18:06:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:11:34.575 18:06:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:11:34.575 18:06:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:11:34.575 18:06:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:11:34.575 18:06:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:34.575 18:06:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:35.949 18:06:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:35.949 18:06:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:11:35.949 18:06:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:35.949 18:06:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:35.949 18:06:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:11:37.927 18:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:37.927 18:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:37.927 18:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c 
SPDKISFASTANDAWESOME 00:11:37.927 18:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:37.927 18:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:37.927 18:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:11:37.927 18:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:37.927 18:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:37.927 18:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:37.927 18:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:37.928 18:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:37.928 18:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:37.928 18:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:37.928 18:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:37.928 18:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:37.928 18:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:37.928 18:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:37.928 18:06:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:38.493 18:06:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:39.866 18:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:39.866 18:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:39.866 18:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:39.866 18:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:39.866 18:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:39.866 ************************************ 00:11:39.866 START TEST filesystem_ext4 00:11:39.866 ************************************ 00:11:39.866 18:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 
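[Editor's note] The filesystem_ext4 test starting here (and the btrfs/xfs variants after it) all follow the same exercise against the GPT partition created above. A condensed sketch of that cycle, with the device name and PID taken from this run; the harness wraps mkfs in a retry loop (make_filesystem) that this sketch omits:

  #!/usr/bin/env bash
  dev=/dev/nvme0n1p1
  nvmfpid=3327874                 # target PID from this run
  mkfs.ext4 -F "$dev"             # btrfs/xfs runs use mkfs.btrfs -f / mkfs.xfs -f
  mount "$dev" /mnt/device
  touch /mnt/device/aaa && sync   # write through the NVMe-oF TCP path
  rm /mnt/device/aaa && sync
  umount /mnt/device
  kill -0 "$nvmfpid"              # the target must still be alive afterwards
  lsblk -l -o NAME | grep -q -w nvme0n1p1   # and the partition still visible

Passing means the fabric-attached namespace survived a full format/mount/IO/unmount round trip, which is exactly what the "END TEST filesystem_*" banners below record.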
00:11:39.867 18:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:39.867 18:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:39.867 18:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:39.867 18:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:11:39.867 18:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:39.867 18:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:11:39.867 18:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:11:39.867 18:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:11:39.867 18:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:11:39.867 18:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:39.867 mke2fs 1.46.5 (30-Dec-2021) 00:11:39.867 Discarding device blocks: 0/522240 done 00:11:39.867 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:39.867 Filesystem UUID: 9798489c-12be-4505-9d85-f306257d5744 00:11:39.867 Superblock backups stored on blocks: 00:11:39.867 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:39.867 00:11:39.867 Allocating group tables: 0/64 done 00:11:39.867 Writing inode tables: 0/64 done 00:11:39.867 Creating journal (8192 blocks): done 00:11:39.867 Writing superblocks and filesystem accounting information: 0/64 done 00:11:39.867 00:11:39.867 18:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:11:39.867 18:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:40.125 18:06:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:40.125 18:06:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:40.125 18:06:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:40.125 18:06:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:40.125 18:06:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:40.125 18:06:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:40.125 
18:06:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3327874 00:11:40.125 18:06:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:40.125 18:06:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:40.125 18:06:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:40.125 18:06:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:40.125 00:11:40.125 real 0m0.544s 00:11:40.125 user 0m0.016s 00:11:40.125 sys 0m0.072s 00:11:40.125 18:06:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:40.125 18:06:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:40.125 ************************************ 00:11:40.125 END TEST filesystem_ext4 00:11:40.125 ************************************ 00:11:40.125 18:06:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:40.125 18:06:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:40.125 18:06:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:40.125 18:06:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:40.125 ************************************ 00:11:40.125 START TEST filesystem_btrfs 00:11:40.125 ************************************ 00:11:40.125 18:06:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:40.125 18:06:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:40.125 18:06:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:40.125 18:06:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:40.125 18:06:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:11:40.125 18:06:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:40.125 18:06:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:11:40.125 18:06:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:11:40.125 18:06:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:11:40.125 18:06:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:11:40.125 18:06:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:40.692 btrfs-progs v6.6.2 00:11:40.692 See https://btrfs.readthedocs.io for more information. 00:11:40.692 00:11:40.692 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:40.692 NOTE: several default settings have changed in version 5.15, please make sure 00:11:40.692 this does not affect your deployments: 00:11:40.692 - DUP for metadata (-m dup) 00:11:40.692 - enabled no-holes (-O no-holes) 00:11:40.692 - enabled free-space-tree (-R free-space-tree) 00:11:40.692 00:11:40.692 Label: (null) 00:11:40.692 UUID: b1f2c701-05fb-4807-a1c2-96bcf90c6896 00:11:40.692 Node size: 16384 00:11:40.692 Sector size: 4096 00:11:40.692 Filesystem size: 510.00MiB 00:11:40.692 Block group profiles: 00:11:40.692 Data: single 8.00MiB 00:11:40.692 Metadata: DUP 32.00MiB 00:11:40.692 System: DUP 8.00MiB 00:11:40.692 SSD detected: yes 00:11:40.692 Zoned device: no 00:11:40.692 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:11:40.692 Runtime features: free-space-tree 00:11:40.692 Checksum: crc32c 00:11:40.692 Number of devices: 1 00:11:40.692 Devices: 00:11:40.692 ID SIZE PATH 00:11:40.692 1 510.00MiB /dev/nvme0n1p1 00:11:40.692 00:11:40.692 18:06:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:11:40.692 18:06:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:41.257 18:06:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:41.257 18:06:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:41.257 18:06:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:41.257 18:06:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:41.257 18:06:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:41.257 18:06:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:41.257 18:06:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3327874 00:11:41.257 18:06:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:41.257 18:06:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:41.257 18:06:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # 
lsblk -l -o NAME 00:11:41.257 18:06:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:41.257 00:11:41.257 real 0m1.165s 00:11:41.257 user 0m0.021s 00:11:41.257 sys 0m0.131s 00:11:41.257 18:06:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:41.257 18:06:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:41.257 ************************************ 00:11:41.257 END TEST filesystem_btrfs 00:11:41.257 ************************************ 00:11:41.515 18:06:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:41.515 18:06:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:41.515 18:06:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:41.515 18:06:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:41.515 ************************************ 00:11:41.515 START TEST filesystem_xfs 00:11:41.515 ************************************ 00:11:41.515 18:06:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:11:41.515 18:06:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:41.515 18:06:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:41.515 18:06:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:41.516 18:06:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:11:41.516 18:06:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:41.516 18:06:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:11:41.516 18:06:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:11:41.516 18:06:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:11:41.516 18:06:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:11:41.516 18:06:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:41.516 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:41.516 = sectsz=512 attr=2, projid32bit=1 00:11:41.516 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:41.516 = reflink=1 bigtime=1 
inobtcount=1 nrext64=0 00:11:41.516 data = bsize=4096 blocks=130560, imaxpct=25 00:11:41.516 = sunit=0 swidth=0 blks 00:11:41.516 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:41.516 log =internal log bsize=4096 blocks=16384, version=2 00:11:41.516 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:41.516 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:42.449 Discarding blocks...Done. 00:11:42.449 18:06:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:11:42.449 18:06:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:44.977 18:06:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:44.977 18:06:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:44.977 18:06:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:44.977 18:06:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:44.978 18:06:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:44.978 18:06:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:44.978 18:06:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3327874 00:11:44.978 18:06:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:44.978 18:06:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:44.978 18:06:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:44.978 18:06:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:44.978 00:11:44.978 real 0m3.547s 00:11:44.978 user 0m0.030s 00:11:44.978 sys 0m0.064s 00:11:44.978 18:06:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:44.978 18:06:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:44.978 ************************************ 00:11:44.978 END TEST filesystem_xfs 00:11:44.978 ************************************ 00:11:44.978 18:06:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:45.236 18:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:45.236 18:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:45.494 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:11:45.494 18:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:45.494 18:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:11:45.494 18:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:45.494 18:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:45.494 18:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:45.494 18:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:45.494 18:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:11:45.494 18:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:45.494 18:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.494 18:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:45.494 18:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.494 18:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:45.494 18:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3327874 00:11:45.494 18:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 3327874 ']' 00:11:45.494 18:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 3327874 00:11:45.494 18:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:11:45.494 18:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:45.494 18:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3327874 00:11:45.494 18:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:45.494 18:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:45.494 18:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3327874' 00:11:45.494 killing process with pid 3327874 00:11:45.494 18:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 3327874 00:11:45.494 18:06:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@974 -- # wait 3327874 00:11:46.060 18:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:46.060 00:11:46.060 real 0m12.461s 00:11:46.060 user 0m48.972s 00:11:46.060 sys 0m1.192s 00:11:46.060 18:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:46.060 18:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:46.060 ************************************ 00:11:46.060 END TEST nvmf_filesystem_no_in_capsule 00:11:46.060 ************************************ 00:11:46.060 18:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:46.060 18:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:46.060 18:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:46.060 18:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:46.060 ************************************ 00:11:46.060 START TEST nvmf_filesystem_in_capsule 00:11:46.060 ************************************ 00:11:46.060 18:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:11:46.060 18:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:46.060 18:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:46.060 18:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:46.060 18:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:46.060 18:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:46.060 18:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=3330173 00:11:46.060 18:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 3330173 00:11:46.060 18:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:46.060 18:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 3330173 ']' 00:11:46.060 18:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:46.060 18:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:46.060 18:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
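The xtrace above enters waitforlisten for pid 3330173 with rpc_addr=/var/tmp/spdk.sock and max_retries=100. A minimal sketch of the loop this implies, assuming the helper simply polls for the RPC socket while the target process stays alive (rpc_addr and max_retries are taken from the log; the poll body itself is an assumption, not SPDK's exact code):

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        while ((max_retries-- > 0)); do
            kill -0 "$pid" 2>/dev/null || return 1   # target exited during startup
            [[ -S $rpc_addr ]] && return 0           # RPC socket exists: target is up
            sleep 0.1
        done
        return 1                                     # gave up after max_retries polls
    }

The banner echoed by the real helper follows in the log: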
00:11:46.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:46.060 18:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:46.060 18:06:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:46.060 [2024-07-24 18:06:38.970747] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 00:11:46.060 [2024-07-24 18:06:38.970787] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:46.060 EAL: No free 2048 kB hugepages reported on node 1 00:11:46.060 [2024-07-24 18:06:39.027631] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:46.060 [2024-07-24 18:06:39.108873] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:46.060 [2024-07-24 18:06:39.108919] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:46.060 [2024-07-24 18:06:39.108925] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:46.060 [2024-07-24 18:06:39.108931] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:46.060 [2024-07-24 18:06:39.108936] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:46.060 [2024-07-24 18:06:39.108979] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:46.060 [2024-07-24 18:06:39.109079] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:46.060 [2024-07-24 18:06:39.109157] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:46.060 [2024-07-24 18:06:39.109158] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:46.991 18:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:46.991 18:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:11:46.991 18:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:46.991 18:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:46.991 18:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:46.991 18:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:46.991 18:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:46.991 18:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:46.991 18:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.991 18:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 
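With the target listening on its RPC socket, the rpc_cmd calls traced through here build the whole in-capsule target. Collapsed into plain rpc.py invocations (a sketch; rpc_cmd in these suites wraps scripts/rpc.py against the socket above, and every value below is taken verbatim from this log), the sequence is roughly:

    rpc=scripts/rpc.py                                   # run from the spdk checkout
    $rpc nvmf_create_transport -t tcp -o -u 8192 -c 4096 # -c 4096: the in-capsule data size under test
    $rpc bdev_malloc_create 512 512 -b Malloc1           # 512 MiB ram disk, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The transport-init and listener notices these calls produce appear next: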
00:11:46.991 [2024-07-24 18:06:39.822079] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:46.991 18:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.991 18:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:46.991 18:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.991 18:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:46.991 Malloc1 00:11:46.991 18:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.991 18:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:46.991 18:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.991 18:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:46.991 18:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.991 18:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:46.991 18:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.991 18:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:46.991 18:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.991 18:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:46.991 18:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.991 18:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:46.991 [2024-07-24 18:06:39.962518] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:46.991 18:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.991 18:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:46.991 18:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:11:46.991 18:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:11:46.991 18:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:11:46.991 18:06:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:11:46.991 18:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:46.992 18:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.992 18:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:46.992 18:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.992 18:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:11:46.992 { 00:11:46.992 "name": "Malloc1", 00:11:46.992 "aliases": [ 00:11:46.992 "e9aecd6e-53aa-4dbe-bad6-76c977d9d454" 00:11:46.992 ], 00:11:46.992 "product_name": "Malloc disk", 00:11:46.992 "block_size": 512, 00:11:46.992 "num_blocks": 1048576, 00:11:46.992 "uuid": "e9aecd6e-53aa-4dbe-bad6-76c977d9d454", 00:11:46.992 "assigned_rate_limits": { 00:11:46.992 "rw_ios_per_sec": 0, 00:11:46.992 "rw_mbytes_per_sec": 0, 00:11:46.992 "r_mbytes_per_sec": 0, 00:11:46.992 "w_mbytes_per_sec": 0 00:11:46.992 }, 00:11:46.992 "claimed": true, 00:11:46.992 "claim_type": "exclusive_write", 00:11:46.992 "zoned": false, 00:11:46.992 "supported_io_types": { 00:11:46.992 "read": true, 00:11:46.992 "write": true, 00:11:46.992 "unmap": true, 00:11:46.992 "flush": true, 00:11:46.992 "reset": true, 00:11:46.992 "nvme_admin": false, 00:11:46.992 "nvme_io": false, 00:11:46.992 "nvme_io_md": false, 00:11:46.992 "write_zeroes": true, 00:11:46.992 "zcopy": true, 00:11:46.992 "get_zone_info": false, 00:11:46.992 "zone_management": false, 00:11:46.992 "zone_append": false, 00:11:46.992 "compare": false, 00:11:46.992 "compare_and_write": false, 00:11:46.992 "abort": true, 00:11:46.992 "seek_hole": false, 00:11:46.992 "seek_data": false, 00:11:46.992 "copy": true, 00:11:46.992 "nvme_iov_md": false 00:11:46.992 }, 00:11:46.992 "memory_domains": [ 00:11:46.992 { 00:11:46.992 "dma_device_id": "system", 00:11:46.992 "dma_device_type": 1 00:11:46.992 }, 00:11:46.992 { 00:11:46.992 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:46.992 "dma_device_type": 2 00:11:46.992 } 00:11:46.992 ], 00:11:46.992 "driver_specific": {} 00:11:46.992 } 00:11:46.992 ]' 00:11:46.992 18:06:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:11:46.992 18:06:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:11:46.992 18:06:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:11:47.249 18:06:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:11:47.249 18:06:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:11:47.249 18:06:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:11:47.249 18:06:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:47.249 18:06:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:48.181 18:06:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:48.181 18:06:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:11:48.181 18:06:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:48.181 18:06:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:48.181 18:06:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:11:50.077 18:06:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:50.334 18:06:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:50.334 18:06:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:50.334 18:06:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:50.334 18:06:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:50.334 18:06:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:11:50.334 18:06:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:50.334 18:06:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:50.334 18:06:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:50.335 18:06:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:50.335 18:06:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:50.335 18:06:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:50.335 18:06:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:50.335 18:06:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:50.335 18:06:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:50.335 18:06:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:50.335 18:06:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:50.591 18:06:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:50.848 18:06:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:51.779 18:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:51.779 18:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:51.779 18:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:51.779 18:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:51.779 18:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:51.779 ************************************ 00:11:51.779 START TEST filesystem_in_capsule_ext4 00:11:51.779 ************************************ 00:11:51.779 18:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:51.779 18:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:51.779 18:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:51.779 18:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:51.779 18:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:11:51.779 18:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:51.779 18:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:11:51.779 18:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:11:51.779 18:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:11:51.779 18:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:11:51.779 18:06:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:51.779 mke2fs 1.46.5 (30-Dec-2021) 00:11:51.779 Discarding device blocks: 0/522240 done 00:11:52.037 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:52.037 Filesystem UUID: 17f65172-1609-429a-8166-712d4d615ea8 00:11:52.037 Superblock backups stored on blocks: 00:11:52.037 8193, 24577, 40961, 57345, 73729, 204801, 
221185, 401409 00:11:52.037 00:11:52.037 Allocating group tables: 0/64 done 00:11:52.037 Writing inode tables: 0/64 done 00:11:52.970 Creating journal (8192 blocks): done 00:11:52.970 Writing superblocks and filesystem accounting information: 0/64 done 00:11:52.970 00:11:52.970 18:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:11:52.970 18:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:52.970 18:06:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:52.970 18:06:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:52.970 18:06:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:52.970 18:06:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:52.970 18:06:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:52.970 18:06:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:53.228 18:06:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 3330173 00:11:53.228 18:06:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:53.228 18:06:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:53.228 18:06:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:53.228 18:06:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:53.228 00:11:53.228 real 0m1.340s 00:11:53.228 user 0m0.032s 00:11:53.228 sys 0m0.059s 00:11:53.228 18:06:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:53.228 18:06:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:53.228 ************************************ 00:11:53.228 END TEST filesystem_in_capsule_ext4 00:11:53.228 ************************************ 00:11:53.228 18:06:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:53.228 18:06:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:53.228 18:06:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:53.228 18:06:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:53.228 ************************************ 00:11:53.228 START TEST filesystem_in_capsule_btrfs 00:11:53.228 ************************************ 00:11:53.228 18:06:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:53.228 18:06:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:53.228 18:06:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:53.228 18:06:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:53.228 18:06:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:11:53.228 18:06:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:53.228 18:06:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:11:53.228 18:06:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force 00:11:53.228 18:06:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:11:53.228 18:06:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:11:53.228 18:06:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:53.486 btrfs-progs v6.6.2 00:11:53.486 See https://btrfs.readthedocs.io for more information. 00:11:53.486 00:11:53.486 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
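The make_filesystem xtrace above (the fstype/dev_name/i/force locals, then the ext4-vs-everything-else flag test) implies a helper along these lines; a sketch only, since the log shows the flag selection but not the retry body that the i counter hints at:

    make_filesystem() {
        local fstype=$1 dev_name=$2
        local i=0 force
        if [[ $fstype == ext4 ]]; then
            force=-F    # mke2fs spells force with a capital F
        else
            force=-f    # mkfs.xfs and mkfs.btrfs take a lowercase -f
        fi
        until mkfs."$fstype" $force "$dev_name"; do
            ((++i > 3)) && return 1   # assumed retry cap; the log only shows i=0
            sleep 1
        done
    }

The rest of the mkfs.btrfs settings dump continues below: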
00:11:53.486 NOTE: several default settings have changed in version 5.15, please make sure 00:11:53.486 this does not affect your deployments: 00:11:53.486 - DUP for metadata (-m dup) 00:11:53.486 - enabled no-holes (-O no-holes) 00:11:53.486 - enabled free-space-tree (-R free-space-tree) 00:11:53.486 00:11:53.486 Label: (null) 00:11:53.486 UUID: a07b8cac-b159-4e67-a389-bb60f17f43fe 00:11:53.486 Node size: 16384 00:11:53.486 Sector size: 4096 00:11:53.486 Filesystem size: 510.00MiB 00:11:53.486 Block group profiles: 00:11:53.486 Data: single 8.00MiB 00:11:53.486 Metadata: DUP 32.00MiB 00:11:53.486 System: DUP 8.00MiB 00:11:53.486 SSD detected: yes 00:11:53.486 Zoned device: no 00:11:53.486 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:11:53.486 Runtime features: free-space-tree 00:11:53.486 Checksum: crc32c 00:11:53.486 Number of devices: 1 00:11:53.486 Devices: 00:11:53.486 ID SIZE PATH 00:11:53.486 1 510.00MiB /dev/nvme0n1p1 00:11:53.486 00:11:53.486 18:06:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0 00:11:53.486 18:06:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:54.419 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:54.419 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:54.419 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:54.419 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:54.419 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:54.419 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:54.419 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3330173 00:11:54.419 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:54.419 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:54.419 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:54.419 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:54.419 00:11:54.419 real 0m1.164s 00:11:54.419 user 0m0.026s 00:11:54.419 sys 0m0.128s 00:11:54.419 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:54.419 18:06:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:54.419 ************************************ 00:11:54.419 END TEST filesystem_in_capsule_btrfs 00:11:54.419 ************************************ 00:11:54.419 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:54.419 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:54.419 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:54.419 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:54.419 ************************************ 00:11:54.419 START TEST filesystem_in_capsule_xfs 00:11:54.419 ************************************ 00:11:54.419 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:11:54.419 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:54.419 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:54.419 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:54.419 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:11:54.420 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:54.420 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0 00:11:54.420 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force 00:11:54.420 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:11:54.420 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f 00:11:54.420 18:06:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:54.420 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:54.420 = sectsz=512 attr=2, projid32bit=1 00:11:54.420 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:54.420 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:54.420 data = bsize=4096 blocks=130560, imaxpct=25 00:11:54.420 = sunit=0 swidth=0 blks 00:11:54.420 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:54.420 log =internal log bsize=4096 blocks=16384, version=2 00:11:54.420 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:54.420 realtime =none extsz=4096 blocks=0, 
rtextents=0 00:11:55.792 Discarding blocks...Done. 00:11:55.792 18:06:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:11:55.792 18:06:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:58.369 18:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:58.369 18:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:58.369 18:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:58.369 18:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:58.369 18:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:58.369 18:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:58.369 18:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3330173 00:11:58.369 18:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:58.369 18:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:58.369 18:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:58.369 18:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:58.369 00:11:58.369 real 0m3.577s 00:11:58.369 user 0m0.017s 00:11:58.369 sys 0m0.079s 00:11:58.369 18:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:58.369 18:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:58.369 ************************************ 00:11:58.369 END TEST filesystem_in_capsule_xfs 00:11:58.369 ************************************ 00:11:58.369 18:06:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:58.369 18:06:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:58.369 18:06:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:58.369 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:58.369 18:06:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:58.369 18:06:51 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:11:58.369 18:06:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:58.369 18:06:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:58.369 18:06:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:58.369 18:06:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:58.369 18:06:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:11:58.369 18:06:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:58.369 18:06:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.369 18:06:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:58.369 18:06:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.369 18:06:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:58.369 18:06:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3330173 00:11:58.369 18:06:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 3330173 ']' 00:11:58.369 18:06:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 3330173 00:11:58.369 18:06:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:11:58.369 18:06:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:58.369 18:06:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3330173 00:11:58.369 18:06:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:58.369 18:06:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:58.369 18:06:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3330173' 00:11:58.369 killing process with pid 3330173 00:11:58.369 18:06:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 3330173 00:11:58.369 18:06:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 3330173 00:11:58.937 18:06:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:58.937 00:11:58.937 real 0m12.858s 00:11:58.937 user 0m50.501s 
00:11:58.937 sys 0m1.242s 00:11:58.937 18:06:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:58.937 18:06:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:58.937 ************************************ 00:11:58.937 END TEST nvmf_filesystem_in_capsule 00:11:58.937 ************************************ 00:11:58.937 18:06:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:58.937 18:06:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:58.937 18:06:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:11:58.937 18:06:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:58.937 18:06:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:11:58.937 18:06:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:58.937 18:06:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:58.937 rmmod nvme_tcp 00:11:58.937 rmmod nvme_fabrics 00:11:58.937 rmmod nvme_keyring 00:11:58.937 18:06:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:58.937 18:06:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:11:58.937 18:06:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:11:58.937 18:06:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:11:58.937 18:06:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:58.937 18:06:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:58.937 18:06:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:58.937 18:06:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:58.937 18:06:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:58.937 18:06:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:58.937 18:06:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:58.937 18:06:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:01.470 18:06:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:01.470 00:12:01.470 real 0m33.366s 00:12:01.470 user 1m41.149s 00:12:01.470 sys 0m6.799s 00:12:01.470 18:06:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:01.470 18:06:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:01.470 ************************************ 00:12:01.471 END TEST nvmf_filesystem 00:12:01.471 ************************************ 00:12:01.471 18:06:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:01.471 18:06:53 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:01.471 18:06:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:01.471 18:06:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:01.471 ************************************ 00:12:01.471 START TEST nvmf_target_discovery 00:12:01.471 ************************************ 00:12:01.471 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:01.471 * Looking for test storage... 00:12:01.471 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:01.471 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:01.471 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:12:01.471 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:01.471 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:01.471 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:01.471 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:01.471 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:01.471 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:01.471 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:01.471 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:01.471 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:01.471 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:01.471 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:12:01.471 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:12:01.471 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:01.471 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:01.471 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:01.471 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:01.471 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:01.471 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:01.471 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:01.471 18:06:54 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:01.471 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.471 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.471 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.471 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:12:01.471 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.471 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:12:01.471 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:01.471 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:01.471 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:01.471 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:01.471 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:01.471 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:01.471 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:01.471 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:01.471 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:12:01.471 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:12:01.471 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:12:01.471 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:12:01.471 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:12:01.471 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:01.471 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:01.471 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:01.471 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:01.471 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:01.471 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:01.471 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:01.471 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:01.471 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:01.471 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:01.471 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:12:01.471 18:06:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:06.743 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:06.743 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:12:06.743 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:06.743 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:06.743 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:06.743 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:06.743 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:06.743 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:12:06.743 18:06:59 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:06.743 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:12:06.743 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:12:06.743 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:12:06.743 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:12:06.743 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:12:06.743 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:12:06.744 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:06.744 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:06.744 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:06.744 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:06.744 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:06.744 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:06.744 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:06.744 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:06.744 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:06.744 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:06.744 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:06.744 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:06.744 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:06.744 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:06.744 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:06.744 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:06.744 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:06.744 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:06.744 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:06.744 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:06.744 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:06.744 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice 
== unbound ]] 00:12:06.744 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:06.744 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:06.744 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:06.744 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:06.744 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:06.744 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:06.744 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:06.744 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:06.744 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:06.744 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:06.744 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:06.744 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:06.744 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:06.744 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:06.744 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:06.744 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:06.744 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:06.744 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:06.744 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:06.744 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:06.744 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:06.744 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:06.744 Found net devices under 0000:86:00.0: cvl_0_0 00:12:06.744 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:06.744 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:06.744 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:06.744 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:06.744 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:06.744 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:06.744 18:06:59 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:06.744 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:06.744 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:06.744 Found net devices under 0000:86:00.1: cvl_0_1 00:12:06.744 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:06.744 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:06.744 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:12:06.744 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:06.744 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:06.744 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:06.744 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:06.744 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:06.744 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:06.744 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:06.744 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:06.744 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:06.744 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:06.744 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:06.744 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:06.744 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:06.744 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:06.744 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:06.744 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:06.744 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:06.744 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:06.744 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:06.744 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:06.744 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:07.002 18:06:59 
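What the trace above amounts to: nvmf/common.sh has scanned the PCI bus, matched two Intel E810 ports (0x8086:0x159b, bound to the ice driver, surfacing as cvl_0_0 and cvl_0_1), and is now building the test topology by moving the target-side port into a private network namespace while the initiator-side port stays in the root namespace. A minimal sketch of that bring-up, using only the names and addresses seen in this run (plain iproute2, nothing SPDK-specific):

  ip netns add cvl_0_0_ns_spdk                  # target gets its own netns
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move one E810 port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator side, root netns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up

With NET_TYPE=phy the two ports are presumably cabled back-to-back, so packets between 10.0.0.1 and 10.0.0.2 cross real hardware; the iptables rule and the two pings that follow just prove the path before any NVMe/TCP traffic is attempted.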
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:07.002 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:07.002 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:07.002 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.147 ms 00:12:07.002 00:12:07.002 --- 10.0.0.2 ping statistics --- 00:12:07.002 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:07.002 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:12:07.002 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:07.002 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:07.002 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:12:07.002 00:12:07.002 --- 10.0.0.1 ping statistics --- 00:12:07.002 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:07.002 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:12:07.002 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:07.002 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:12:07.002 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:07.002 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:07.002 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:07.002 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:07.002 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:07.002 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:07.002 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:07.002 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:12:07.002 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:07.002 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:07.002 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:07.002 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=3335970 00:12:07.002 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 3335970 00:12:07.002 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:07.002 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 3335970 ']' 00:12:07.002 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:07.002 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:07.002 18:06:59 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:07.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:07.002 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:07.002 18:06:59 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:07.002 [2024-07-24 18:06:59.937584] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 00:12:07.002 [2024-07-24 18:06:59.937625] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:07.002 EAL: No free 2048 kB hugepages reported on node 1 00:12:07.002 [2024-07-24 18:06:59.998796] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:07.261 [2024-07-24 18:07:00.099015] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:07.261 [2024-07-24 18:07:00.099052] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:07.261 [2024-07-24 18:07:00.099058] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:07.261 [2024-07-24 18:07:00.099064] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:07.261 [2024-07-24 18:07:00.099069] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:07.261 [2024-07-24 18:07:00.099157] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:07.261 [2024-07-24 18:07:00.099274] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:07.261 [2024-07-24 18:07:00.099363] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:07.261 [2024-07-24 18:07:00.099365] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:07.827 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:07.827 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:12:07.827 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:07.827 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:07.827 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:07.827 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:07.827 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:07.827 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.827 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:07.827 [2024-07-24 18:07:00.782819] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:07.827 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
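At this point the connectivity check has passed, nvmfappstart has launched the target application inside the namespace, and waitforlisten has polled the UNIX RPC socket until it answered. A rough manual equivalent (the harness helpers wrap these steps, so treat this as a sketch rather than the literal script; the polling loop in particular is an assumption):

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192

-m 0xF accounts for the four 'Reactor started' notices, and rpc.py keeps working across namespaces because /var/tmp/spdk.sock is a filesystem-bound UNIX socket. The 'No free 2048 kB hugepages reported on node 1' line is an EAL notice that is evidently harmless here, since initialization completes.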
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.827 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:12:07.827 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:07.827 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:07.827 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.827 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:07.827 Null1 00:12:07.827 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.827 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:07.827 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.827 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:07.827 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.827 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:07.827 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.827 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:07.827 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.827 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:07.827 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.827 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:07.827 [2024-07-24 18:07:00.828335] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:07.827 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.827 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:07.827 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:07.827 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.827 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:07.827 Null2 00:12:07.827 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.827 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:07.827 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.827 18:07:00 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:07.827 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.827 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:07.827 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.827 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:07.827 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.827 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:07.827 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.827 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:07.827 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.827 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:07.827 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:07.827 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.827 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:07.827 Null3 00:12:07.827 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.827 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:12:07.827 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.827 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:07.827 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.827 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:12:07.827 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.827 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:07.827 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.828 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:12:07.828 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.828 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:07.828 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:12:07.828 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:07.828 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:07.828 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.828 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:07.828 Null4 00:12:07.828 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.828 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:07.828 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.828 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:08.086 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.086 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:08.086 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.086 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:08.086 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.086 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:12:08.086 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.086 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:08.086 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.086 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:08.086 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.086 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:08.086 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.086 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:12:08.086 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.086 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:08.086 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.086 18:07:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 
--hostid=803833e2-2ada-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:12:08.086 00:12:08.086 Discovery Log Number of Records 6, Generation counter 6 00:12:08.086 =====Discovery Log Entry 0====== 00:12:08.086 trtype: tcp 00:12:08.086 adrfam: ipv4 00:12:08.086 subtype: current discovery subsystem 00:12:08.086 treq: not required 00:12:08.086 portid: 0 00:12:08.086 trsvcid: 4420 00:12:08.086 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:08.086 traddr: 10.0.0.2 00:12:08.086 eflags: explicit discovery connections, duplicate discovery information 00:12:08.086 sectype: none 00:12:08.086 =====Discovery Log Entry 1====== 00:12:08.086 trtype: tcp 00:12:08.086 adrfam: ipv4 00:12:08.086 subtype: nvme subsystem 00:12:08.086 treq: not required 00:12:08.086 portid: 0 00:12:08.086 trsvcid: 4420 00:12:08.086 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:08.086 traddr: 10.0.0.2 00:12:08.086 eflags: none 00:12:08.086 sectype: none 00:12:08.086 =====Discovery Log Entry 2====== 00:12:08.086 trtype: tcp 00:12:08.086 adrfam: ipv4 00:12:08.086 subtype: nvme subsystem 00:12:08.086 treq: not required 00:12:08.086 portid: 0 00:12:08.086 trsvcid: 4420 00:12:08.086 subnqn: nqn.2016-06.io.spdk:cnode2 00:12:08.086 traddr: 10.0.0.2 00:12:08.086 eflags: none 00:12:08.086 sectype: none 00:12:08.086 =====Discovery Log Entry 3====== 00:12:08.086 trtype: tcp 00:12:08.086 adrfam: ipv4 00:12:08.086 subtype: nvme subsystem 00:12:08.086 treq: not required 00:12:08.086 portid: 0 00:12:08.086 trsvcid: 4420 00:12:08.086 subnqn: nqn.2016-06.io.spdk:cnode3 00:12:08.086 traddr: 10.0.0.2 00:12:08.086 eflags: none 00:12:08.086 sectype: none 00:12:08.086 =====Discovery Log Entry 4====== 00:12:08.086 trtype: tcp 00:12:08.086 adrfam: ipv4 00:12:08.086 subtype: nvme subsystem 00:12:08.086 treq: not required 00:12:08.086 portid: 0 00:12:08.086 trsvcid: 4420 00:12:08.086 subnqn: nqn.2016-06.io.spdk:cnode4 00:12:08.086 traddr: 10.0.0.2 00:12:08.086 eflags: none 00:12:08.086 sectype: none 00:12:08.086 =====Discovery Log Entry 5====== 00:12:08.086 trtype: tcp 00:12:08.086 adrfam: ipv4 00:12:08.086 subtype: discovery subsystem referral 00:12:08.086 treq: not required 00:12:08.086 portid: 0 00:12:08.086 trsvcid: 4430 00:12:08.086 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:08.086 traddr: 10.0.0.2 00:12:08.086 eflags: none 00:12:08.086 sectype: none 00:12:08.087 18:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:12:08.087 Perform nvmf subsystem discovery via RPC 00:12:08.087 18:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:12:08.087 18:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.087 18:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:08.087 [ 00:12:08.087 { 00:12:08.087 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:08.087 "subtype": "Discovery", 00:12:08.087 "listen_addresses": [ 00:12:08.087 { 00:12:08.087 "trtype": "TCP", 00:12:08.087 "adrfam": "IPv4", 00:12:08.087 "traddr": "10.0.0.2", 00:12:08.087 "trsvcid": "4420" 00:12:08.087 } 00:12:08.087 ], 00:12:08.087 "allow_any_host": true, 00:12:08.087 "hosts": [] 00:12:08.087 }, 00:12:08.087 { 00:12:08.087 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:08.087 "subtype": "NVMe", 00:12:08.087 "listen_addresses": [ 00:12:08.087 { 00:12:08.087 "trtype": "TCP", 00:12:08.087 "adrfam": "IPv4", 00:12:08.087 
"traddr": "10.0.0.2", 00:12:08.087 "trsvcid": "4420" 00:12:08.087 } 00:12:08.087 ], 00:12:08.087 "allow_any_host": true, 00:12:08.087 "hosts": [], 00:12:08.087 "serial_number": "SPDK00000000000001", 00:12:08.087 "model_number": "SPDK bdev Controller", 00:12:08.087 "max_namespaces": 32, 00:12:08.087 "min_cntlid": 1, 00:12:08.087 "max_cntlid": 65519, 00:12:08.087 "namespaces": [ 00:12:08.087 { 00:12:08.087 "nsid": 1, 00:12:08.087 "bdev_name": "Null1", 00:12:08.087 "name": "Null1", 00:12:08.087 "nguid": "17418073E1BD488581B46AC1F2FF86B2", 00:12:08.087 "uuid": "17418073-e1bd-4885-81b4-6ac1f2ff86b2" 00:12:08.087 } 00:12:08.087 ] 00:12:08.087 }, 00:12:08.087 { 00:12:08.087 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:08.087 "subtype": "NVMe", 00:12:08.087 "listen_addresses": [ 00:12:08.087 { 00:12:08.087 "trtype": "TCP", 00:12:08.087 "adrfam": "IPv4", 00:12:08.087 "traddr": "10.0.0.2", 00:12:08.087 "trsvcid": "4420" 00:12:08.087 } 00:12:08.087 ], 00:12:08.087 "allow_any_host": true, 00:12:08.087 "hosts": [], 00:12:08.087 "serial_number": "SPDK00000000000002", 00:12:08.087 "model_number": "SPDK bdev Controller", 00:12:08.087 "max_namespaces": 32, 00:12:08.087 "min_cntlid": 1, 00:12:08.087 "max_cntlid": 65519, 00:12:08.087 "namespaces": [ 00:12:08.087 { 00:12:08.087 "nsid": 1, 00:12:08.087 "bdev_name": "Null2", 00:12:08.087 "name": "Null2", 00:12:08.087 "nguid": "8799AA2144004BC68BE462D35C930241", 00:12:08.087 "uuid": "8799aa21-4400-4bc6-8be4-62d35c930241" 00:12:08.087 } 00:12:08.087 ] 00:12:08.087 }, 00:12:08.087 { 00:12:08.087 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:12:08.087 "subtype": "NVMe", 00:12:08.087 "listen_addresses": [ 00:12:08.087 { 00:12:08.087 "trtype": "TCP", 00:12:08.087 "adrfam": "IPv4", 00:12:08.087 "traddr": "10.0.0.2", 00:12:08.087 "trsvcid": "4420" 00:12:08.087 } 00:12:08.087 ], 00:12:08.087 "allow_any_host": true, 00:12:08.087 "hosts": [], 00:12:08.087 "serial_number": "SPDK00000000000003", 00:12:08.087 "model_number": "SPDK bdev Controller", 00:12:08.087 "max_namespaces": 32, 00:12:08.087 "min_cntlid": 1, 00:12:08.087 "max_cntlid": 65519, 00:12:08.087 "namespaces": [ 00:12:08.087 { 00:12:08.087 "nsid": 1, 00:12:08.087 "bdev_name": "Null3", 00:12:08.087 "name": "Null3", 00:12:08.087 "nguid": "C962BC10592D4951B85B5A80676400AB", 00:12:08.087 "uuid": "c962bc10-592d-4951-b85b-5a80676400ab" 00:12:08.087 } 00:12:08.087 ] 00:12:08.087 }, 00:12:08.087 { 00:12:08.087 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:12:08.087 "subtype": "NVMe", 00:12:08.087 "listen_addresses": [ 00:12:08.087 { 00:12:08.087 "trtype": "TCP", 00:12:08.087 "adrfam": "IPv4", 00:12:08.087 "traddr": "10.0.0.2", 00:12:08.087 "trsvcid": "4420" 00:12:08.087 } 00:12:08.087 ], 00:12:08.087 "allow_any_host": true, 00:12:08.087 "hosts": [], 00:12:08.087 "serial_number": "SPDK00000000000004", 00:12:08.087 "model_number": "SPDK bdev Controller", 00:12:08.087 "max_namespaces": 32, 00:12:08.087 "min_cntlid": 1, 00:12:08.087 "max_cntlid": 65519, 00:12:08.087 "namespaces": [ 00:12:08.087 { 00:12:08.087 "nsid": 1, 00:12:08.087 "bdev_name": "Null4", 00:12:08.087 "name": "Null4", 00:12:08.087 "nguid": "96921CBC0FC840D4AED430B0849C43C9", 00:12:08.087 "uuid": "96921cbc-0fc8-40d4-aed4-30b0849c43c9" 00:12:08.087 } 00:12:08.087 ] 00:12:08.087 } 00:12:08.087 ] 00:12:08.087 18:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.087 18:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:12:08.087 18:07:01 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:08.087 18:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:08.087 18:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.087 18:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:08.087 18:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.087 18:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:12:08.087 18:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.087 18:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:08.087 18:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.087 18:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:08.087 18:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:08.087 18:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.087 18:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:08.087 18:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.087 18:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:12:08.087 18:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.087 18:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:08.087 18:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.087 18:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:08.087 18:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:08.087 18:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.087 18:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:08.087 18:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.087 18:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:12:08.087 18:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.087 18:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:08.087 18:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.087 18:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:08.087 18:07:01 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:08.087 18:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.087 18:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:08.087 18:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.087 18:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:12:08.087 18:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.087 18:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:08.087 18:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.087 18:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:12:08.087 18:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.087 18:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:08.087 18:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.087 18:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:12:08.087 18:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:12:08.087 18:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.087 18:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:08.087 18:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.346 18:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:12:08.346 18:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:12:08.346 18:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:12:08.346 18:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:12:08.346 18:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:08.346 18:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:12:08.346 18:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:08.346 18:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:12:08.346 18:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:08.346 18:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:08.346 rmmod nvme_tcp 00:12:08.346 rmmod nvme_fabrics 00:12:08.346 rmmod nvme_keyring 00:12:08.346 18:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:08.346 18:07:01 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:12:08.346 18:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:12:08.346 18:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 3335970 ']' 00:12:08.346 18:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 3335970 00:12:08.346 18:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 3335970 ']' 00:12:08.346 18:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 3335970 00:12:08.346 18:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 00:12:08.346 18:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:08.346 18:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3335970 00:12:08.346 18:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:08.346 18:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:08.346 18:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3335970' 00:12:08.346 killing process with pid 3335970 00:12:08.346 18:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 3335970 00:12:08.346 18:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 3335970 00:12:08.605 18:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:08.605 18:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:08.605 18:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:08.605 18:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:08.605 18:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:08.605 18:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:08.605 18:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:08.605 18:07:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:10.509 18:07:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:10.509 00:12:10.509 real 0m9.524s 00:12:10.509 user 0m7.232s 00:12:10.509 sys 0m4.649s 00:12:10.509 18:07:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:10.509 18:07:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:10.509 ************************************ 00:12:10.509 END TEST nvmf_target_discovery 00:12:10.509 ************************************ 00:12:10.509 18:07:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals 
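Teardown mirrors setup and doubles as a leak check: every subsystem and null bdev is deleted over RPC, the referral is removed, and bdev_get_bdevs piped through jq must come back empty before nvmftestfini unloads the initiator modules (the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines above) and kills the target. Condensed under the same assumptions as the earlier sketches:

  for i in 1 2 3 4; do
      ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode$i
      ./scripts/rpc.py bdev_null_delete Null$i
  done
  ./scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
  [ -z "$(./scripts/rpc.py bdev_get_bdevs | jq -r '.[].name')" ]   # nothing may survive the test
  modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics
  kill $nvmfpid

_remove_spdk_ns then deletes the namespace, and the address flush just below returns the ports to a clean state; the referrals test that follows rebuilds everything from scratch.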
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:10.509 18:07:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:10.509 18:07:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:10.509 18:07:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:10.766 ************************************ 00:12:10.766 START TEST nvmf_referrals 00:12:10.766 ************************************ 00:12:10.766 18:07:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:10.766 * Looking for test storage... 00:12:10.766 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:10.766 18:07:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:10.766 18:07:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:12:10.766 18:07:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:10.766 18:07:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:10.766 18:07:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:10.766 18:07:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:10.766 18:07:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:10.766 18:07:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:10.767 18:07:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:10.767 18:07:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:10.767 18:07:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:10.767 18:07:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:10.767 18:07:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:12:10.767 18:07:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:12:10.767 18:07:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:10.767 18:07:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:10.767 18:07:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:10.767 18:07:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:10.767 18:07:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:10.767 18:07:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:10.767 18:07:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:10.767 18:07:03 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:10.767 18:07:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.767 18:07:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.767 18:07:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.767 18:07:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:10.767 18:07:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:10.767 18:07:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:12:10.767 18:07:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:10.767 18:07:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:10.767 18:07:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:10.767 18:07:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:10.767 18:07:03 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:10.767 18:07:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:10.767 18:07:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:10.767 18:07:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:10.767 18:07:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:10.767 18:07:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:12:10.767 18:07:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:10.767 18:07:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:10.767 18:07:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:10.767 18:07:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:10.767 18:07:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:10.767 18:07:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:10.767 18:07:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:10.767 18:07:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:10.767 18:07:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:10.767 18:07:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:10.767 18:07:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:10.767 18:07:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:10.767 18:07:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:10.767 18:07:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:10.767 18:07:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:10.767 18:07:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:12:10.767 18:07:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:16.034 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:16.034 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:12:16.034 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:16.034 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:16.034 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:16.034 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:16.034 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:16.034 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # 
net_devs=() 00:12:16.034 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:16.034 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:12:16.034 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:12:16.034 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:12:16.034 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:12:16.034 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:12:16.034 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:12:16.034 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:16.034 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:16.034 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:16.034 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:16.034 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:16.034 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:16.034 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:16.034 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:16.034 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:16.034 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:16.034 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:16.034 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:16.034 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:16.034 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:16.034 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:16.034 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:16.034 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:16.034 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:16.034 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:16.034 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:16.034 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:16.034 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:16.034 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:16.034 18:07:08 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:16.034 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:16.034 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:16.034 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:16.034 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:16.034 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:16.034 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:16.034 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:16.034 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:16.034 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:16.034 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:16.034 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:16.034 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:16.034 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:16.034 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:16.034 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:16.034 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:16.034 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:16.034 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:16.034 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:16.034 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:16.034 Found net devices under 0000:86:00.0: cvl_0_0 00:12:16.034 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:16.034 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:16.034 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:16.034 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:16.034 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:16.034 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:16.034 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:16.034 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:16.034 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 
00:12:16.034 Found net devices under 0000:86:00.1: cvl_0_1 00:12:16.034 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:16.034 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:16.034 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:12:16.034 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:16.034 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:16.034 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:16.034 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:16.035 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:16.035 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:16.035 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:16.035 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:16.035 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:16.035 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:16.035 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:16.035 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:16.035 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:16.035 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:16.035 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:16.035 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:16.035 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:16.035 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:16.035 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:16.035 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:16.035 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:16.035 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:16.035 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:16.035 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:16.035 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:12:16.035 00:12:16.035 --- 10.0.0.2 ping statistics --- 00:12:16.035 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:16.035 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:12:16.035 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:16.035 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:16.035 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:12:16.035 00:12:16.035 --- 10.0.0.1 ping statistics --- 00:12:16.035 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:16.035 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:12:16.035 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:16.035 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:12:16.035 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:16.035 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:16.035 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:16.035 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:16.035 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:16.035 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:16.035 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:16.035 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:16.035 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:16.035 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:16.035 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:16.035 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=3339738 00:12:16.035 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 3339738 00:12:16.035 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 3339738 ']' 00:12:16.035 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:16.035 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:16.035 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:16.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
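For readers skimming the trace: the nvmf_tcp_init sequence above boils down to the commands below. This is a minimal sketch of the test-net setup reconstructed from the xtrace lines, assuming two ice ports already exposed as cvl_0_0 and cvl_0_1; the real nvmf/common.sh additionally records state so nvmftestfini can undo it later.

# Split the two ports across a namespace boundary so the initiator
# (default netns) and the SPDK target (cvl_0_0_ns_spdk) talk over real wire.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator IP
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # NVMe/TCP data port
ping -c 1 10.0.0.2                                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator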
00:12:16.035 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:16.035 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:16.035 18:07:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:16.035 [2024-07-24 18:07:09.017205] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 00:12:16.035 [2024-07-24 18:07:09.017246] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:16.035 EAL: No free 2048 kB hugepages reported on node 1 00:12:16.035 [2024-07-24 18:07:09.074286] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:16.293 [2024-07-24 18:07:09.154953] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:16.293 [2024-07-24 18:07:09.154987] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:16.293 [2024-07-24 18:07:09.154994] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:16.293 [2024-07-24 18:07:09.155000] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:16.293 [2024-07-24 18:07:09.155009] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:16.293 [2024-07-24 18:07:09.155047] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:16.293 [2024-07-24 18:07:09.155144] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:16.293 [2024-07-24 18:07:09.155162] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:16.293 [2024-07-24 18:07:09.155163] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:16.860 18:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:16.860 18:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:12:16.860 18:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:16.860 18:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:16.860 18:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:16.860 18:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:16.860 18:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:16.860 18:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.860 18:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:16.860 [2024-07-24 18:07:09.868891] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:16.860 18:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.860 18:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd 
nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:12:16.860 18:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.860 18:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:16.860 [2024-07-24 18:07:09.882263] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:12:16.860 18:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.860 18:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:12:16.860 18:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.860 18:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:16.860 18:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.860 18:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:12:16.860 18:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.860 18:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:16.860 18:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.860 18:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:12:16.860 18:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.860 18:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:16.860 18:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.860 18:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:16.860 18:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:16.860 18:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.860 18:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:16.860 18:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.119 18:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:17.119 18:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:17.119 18:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:17.119 18:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:17.119 18:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:17.119 18:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.119 18:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:17.119 18:07:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:17.119 18:07:09 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.119 18:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:17.119 18:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:17.119 18:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:17.119 18:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:17.119 18:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:17.119 18:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:17.119 18:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:17.119 18:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:17.119 18:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:17.119 18:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:17.119 18:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:17.119 18:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.119 18:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:17.119 18:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.119 18:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:17.119 18:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.119 18:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:17.119 18:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.119 18:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:17.119 18:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.119 18:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:17.119 18:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.119 18:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:17.119 18:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:17.119 18:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.119 18:07:10 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:17.119 18:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.119 18:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:17.119 18:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:17.119 18:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:17.119 18:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:17.119 18:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:17.119 18:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:17.119 18:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:17.379 18:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:17.379 18:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:17.379 18:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:17.379 18:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.379 18:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:17.379 18:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.379 18:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:17.379 18:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.379 18:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:17.379 18:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.379 18:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:17.379 18:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:17.379 18:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:17.379 18:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.379 18:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:17.379 18:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:17.379 18:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:17.379 18:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.379 18:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:17.379 18:07:10 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:17.379 18:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:12:17.379 18:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:17.379 18:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:17.379 18:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:17.379 18:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:17.379 18:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:17.638 18:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:17.638 18:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:17.638 18:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:17.638 18:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:17.638 18:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:17.638 18:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:17.638 18:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:17.638 18:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:17.638 18:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:17.638 18:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:17.638 18:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:17.638 18:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:17.638 18:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:17.896 18:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:17.896 18:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:17.896 18:07:10 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.896 18:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:17.896 18:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.896 18:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:17.896 18:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:17.896 18:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:17.896 18:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:17.896 18:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.896 18:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:17.896 18:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:17.896 18:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.896 18:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:17.896 18:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:17.896 18:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:17.896 18:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:17.896 18:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:17.896 18:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:17.896 18:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:17.896 18:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:17.896 18:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:17.896 18:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:17.896 18:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:17.896 18:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:17.896 18:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:17.896 18:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:17.897 18:07:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:18.154 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:18.154 18:07:11 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:18.154 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:18.154 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:18.154 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:18.154 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:18.413 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:18.413 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:18.413 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.413 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:18.413 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.413 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:18.413 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:18.413 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.413 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:18.413 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.413 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:18.413 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:18.413 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:18.413 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:18.413 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:18.413 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:18.413 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:18.413 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:18.413 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:18.413 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:18.413 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@86 -- # nvmftestfini 00:12:18.413 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:18.413 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:12:18.413 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:18.413 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:12:18.413 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:18.413 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:18.413 rmmod nvme_tcp 00:12:18.413 rmmod nvme_fabrics 00:12:18.413 rmmod nvme_keyring 00:12:18.672 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:18.672 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:12:18.672 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:12:18.672 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 3339738 ']' 00:12:18.672 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 3339738 00:12:18.672 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 3339738 ']' 00:12:18.672 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 3339738 00:12:18.672 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:12:18.672 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:18.672 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3339738 00:12:18.672 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:18.672 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:18.672 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3339738' 00:12:18.672 killing process with pid 3339738 00:12:18.672 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@969 -- # kill 3339738 00:12:18.672 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 3339738 00:12:18.672 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:18.672 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:18.672 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:18.672 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:18.672 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:18.672 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:18.672 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:18.672 18:07:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:21.206 18:07:13 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:21.206 00:12:21.206 real 0m10.202s 00:12:21.206 user 0m12.527s 00:12:21.206 sys 0m4.696s 00:12:21.206 18:07:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:21.206 18:07:13 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:21.206 ************************************ 00:12:21.206 END TEST nvmf_referrals 00:12:21.206 ************************************ 00:12:21.206 18:07:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:21.206 18:07:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:21.206 18:07:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:21.206 18:07:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:21.206 ************************************ 00:12:21.206 START TEST nvmf_connect_disconnect 00:12:21.206 ************************************ 00:12:21.206 18:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:21.206 * Looking for test storage... 00:12:21.206 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:21.206 18:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:21.206 18:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:21.206 18:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:21.206 18:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:21.206 18:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:21.206 18:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:21.206 18:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:21.206 18:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:21.206 18:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:21.206 18:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:21.206 18:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:21.206 18:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:21.206 18:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:12:21.206 18:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:12:21.206 18:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:21.206 18:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:21.206 18:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:21.206 18:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:21.206 18:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:21.206 18:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:21.206 18:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:21.206 18:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:21.206 18:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.206 18:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.206 18:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.206 18:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:21.206 18:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.206 18:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:12:21.206 18:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:21.206 18:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:21.206 18:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:21.206 18:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:21.206 18:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:21.206 18:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:21.206 18:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:21.206 18:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:21.206 18:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:21.206 18:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:21.206 18:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:21.206 18:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:21.206 18:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:21.206 18:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:21.206 18:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:21.206 18:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:21.206 18:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:21.206 18:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:21.206 18:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:21.206 18:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:21.206 18:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:21.206 18:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:12:21.206 18:07:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- 
# set +x 00:12:26.547 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:26.547 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:12:26.547 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:26.547 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:26.547 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:26.547 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:26.547 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:26.547 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:12:26.547 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:26.547 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:12:26.547 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:12:26.547 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:12:26.547 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:12:26.547 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:12:26.547 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:12:26.547 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:26.547 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:26.547 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:26.547 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:26.547 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:26.547 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:26.547 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:26.547 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:26.547 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:26.547 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:26.547 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:26.547 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:26.547 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:26.547 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:26.548 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:26.548 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:26.548 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:26.548 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:26.548 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:26.548 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:26.548 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:26.548 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:26.548 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:26.548 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:26.548 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:26.548 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:26.548 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:26.548 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:26.548 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:26.548 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:26.548 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:26.548 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:26.548 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:26.548 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:26.548 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:26.548 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:26.548 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:26.548 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:26.548 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:26.548 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:26.548 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:26.548 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:26.548 18:07:19 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:26.548 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:26.548 Found net devices under 0000:86:00.0: cvl_0_0 00:12:26.548 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:26.548 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:26.548 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:26.548 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:26.548 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:26.548 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:26.548 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:26.548 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:26.548 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:26.548 Found net devices under 0000:86:00.1: cvl_0_1 00:12:26.548 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:26.548 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:26.548 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:12:26.548 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:26.548 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:26.548 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:26.548 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:26.548 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:26.548 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:26.548 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:26.548 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:26.548 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:26.548 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:26.548 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:26.548 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:26.548 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:26.548 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:26.548 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:26.548 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:26.548 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:26.548 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:26.548 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:26.548 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:26.808 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:26.808 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:26.808 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:26.808 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:26.808 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.158 ms 00:12:26.808 00:12:26.808 --- 10.0.0.2 ping statistics --- 00:12:26.808 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:26.808 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:12:26.808 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:26.808 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:26.808 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:12:26.808 00:12:26.808 --- 10.0.0.1 ping statistics --- 00:12:26.808 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:26.808 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:12:26.808 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:26.808 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:12:26.808 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:26.808 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:26.808 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:26.808 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:26.808 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:26.808 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:26.808 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:26.808 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:26.808 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:26.808 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:26.808 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:26.808 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=3343763 00:12:26.808 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:26.808 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 3343763 00:12:26.808 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 3343763 ']' 00:12:26.808 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:26.808 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:26.808 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:26.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:26.808 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:26.808 18:07:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:26.808 [2024-07-24 18:07:19.758218] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 
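The block above is nvmf/common.sh's nvmf_tcp_init for the NET_TYPE=phy case: the first ice port is moved into a private network namespace to act as the NVMe/TCP target while the second port stays in the root namespace as the initiator, both directions are ping-verified, and only then is nvmf_tgt launched inside the namespace. A minimal standalone sketch of that bring-up follows; the cvl_0_0/cvl_0_1 names, the 10.0.0.x addresses, and the nvmf_tgt flags are simply what this run used, so treat them as placeholders for your own rig.

  TGT_IF=cvl_0_0                       # port that becomes the target side
  INI_IF=cvl_0_1                       # port left in the root namespace (initiator)
  NS=cvl_0_0_ns_spdk
  ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INI_IF"
  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"               # isolate the target port
  ip addr add 10.0.0.1/24 dev "$INI_IF"
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                              # root ns -> target ns
  ip netns exec "$NS" ping -c 1 10.0.0.1          # target ns -> root ns
  modprobe nvme-tcp                               # host-side driver for the later connects
  # nvmfappstart then runs, from the spdk checkout:
  ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &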
00:12:26.808 [2024-07-24 18:07:19.758261] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:26.808 EAL: No free 2048 kB hugepages reported on node 1 00:12:26.808 [2024-07-24 18:07:19.816724] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:27.066 [2024-07-24 18:07:19.895809] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:27.066 [2024-07-24 18:07:19.895844] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:27.066 [2024-07-24 18:07:19.895851] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:27.066 [2024-07-24 18:07:19.895860] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:27.066 [2024-07-24 18:07:19.895866] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:27.066 [2024-07-24 18:07:19.895907] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:27.066 [2024-07-24 18:07:19.896010] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:27.066 [2024-07-24 18:07:19.896105] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:27.066 [2024-07-24 18:07:19.896106] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:27.634 18:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:27.634 18:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0 00:12:27.634 18:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:27.634 18:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:27.634 18:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:27.634 18:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:27.634 18:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:27.634 18:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.634 18:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:27.634 [2024-07-24 18:07:20.600939] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:27.634 18:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.634 18:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:27.634 18:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.634 18:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:27.634 18:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.634 18:07:20 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:27.634 18:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:27.634 18:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.634 18:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:27.634 18:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.634 18:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:27.634 18:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.634 18:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:27.634 18:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.634 18:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:27.634 18:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.634 18:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:27.634 [2024-07-24 18:07:20.652666] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:27.634 18:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.634 18:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:12:27.634 18:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:12:27.634 18:07:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:30.921 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:34.209 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:37.498 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:40.786 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:44.074 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:44.074 18:07:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:44.074 18:07:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:44.074 18:07:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:44.074 18:07:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:12:44.074 18:07:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:44.074 18:07:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:12:44.074 18:07:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:44.074 18:07:36 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:44.074 rmmod nvme_tcp 00:12:44.074 rmmod nvme_fabrics 00:12:44.074 rmmod nvme_keyring 00:12:44.074 18:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:44.074 18:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:12:44.074 18:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:12:44.074 18:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 3343763 ']' 00:12:44.074 18:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 3343763 00:12:44.074 18:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 3343763 ']' 00:12:44.074 18:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 3343763 00:12:44.074 18:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname 00:12:44.074 18:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:44.074 18:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3343763 00:12:44.074 18:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:44.074 18:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:44.074 18:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3343763' 00:12:44.074 killing process with pid 3343763 00:12:44.074 18:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 3343763 00:12:44.074 18:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 3343763 00:12:44.334 18:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:44.334 18:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:44.334 18:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:44.334 18:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:44.334 18:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:44.334 18:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:44.334 18:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:44.334 18:07:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:46.869 18:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:46.869 00:12:46.869 real 0m25.472s 00:12:46.869 user 1m10.802s 00:12:46.869 sys 0m5.441s 00:12:46.869 18:07:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:46.869 18:07:39 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:46.869 ************************************ 00:12:46.869 END TEST nvmf_connect_disconnect 00:12:46.869 ************************************ 00:12:46.869 18:07:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:46.869 18:07:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:46.869 18:07:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:46.869 18:07:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:46.869 ************************************ 00:12:46.869 START TEST nvmf_multitarget 00:12:46.869 ************************************ 00:12:46.869 18:07:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:46.869 * Looking for test storage... 00:12:46.869 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:46.869 18:07:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:46.869 18:07:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:46.869 18:07:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:46.869 18:07:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:46.869 18:07:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:46.869 18:07:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:46.869 18:07:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:46.869 18:07:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:46.869 18:07:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:46.869 18:07:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:46.869 18:07:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:46.869 18:07:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:46.869 18:07:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:12:46.869 18:07:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:12:46.869 18:07:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:46.869 18:07:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:46.869 18:07:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:46.869 18:07:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:46.869 18:07:39 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:46.869 18:07:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:46.869 18:07:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:46.869 18:07:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:46.869 18:07:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.869 18:07:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.870 18:07:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.870 18:07:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:46.870 18:07:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:46.870 18:07:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:12:46.870 18:07:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:46.870 18:07:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:46.870 18:07:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:46.870 18:07:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:46.870 18:07:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:46.870 18:07:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:46.870 18:07:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:46.870 18:07:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:46.870 18:07:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:46.870 18:07:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:46.870 18:07:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:46.870 18:07:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:46.870 18:07:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:46.870 18:07:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:46.870 18:07:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:46.870 18:07:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:46.870 18:07:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:46.870 18:07:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:46.870 18:07:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:46.870 18:07:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:46.870 18:07:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:12:46.870 18:07:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:52.140 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:52.140 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:12:52.140 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:52.140 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:52.140 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:52.140 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:52.140 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:52.140 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:12:52.140 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@295 -- # local -ga net_devs 00:12:52.140 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:12:52.140 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:12:52.140 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:12:52.140 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:12:52.140 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:12:52.140 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:12:52.141 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:52.141 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:52.141 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:52.141 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:52.141 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:52.141 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:52.141 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:52.141 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:52.141 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:52.141 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:52.141 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:52.141 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:52.141 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:52.141 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:52.141 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:52.141 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:52.141 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:52.141 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:52.141 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:52.141 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:52.141 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:52.141 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:52.141 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:52.141 18:07:44 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:52.141 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:52.141 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:52.141 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:52.141 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:52.141 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:52.141 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:52.141 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:52.141 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:52.141 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:52.141 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:52.141 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:52.141 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:52.141 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:52.141 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:52.141 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:52.141 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:52.141 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:52.141 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:52.141 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:52.141 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:52.141 Found net devices under 0000:86:00.0: cvl_0_0 00:12:52.141 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:52.141 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:52.141 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:52.141 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:52.141 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:52.141 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:52.141 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:52.141 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:52.141 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:52.141 Found net devices under 0000:86:00.1: cvl_0_1 00:12:52.141 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:52.141 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:52.141 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:12:52.141 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:52.141 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:52.141 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:52.141 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:52.141 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:52.141 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:52.141 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:52.141 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:52.141 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:52.141 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:52.141 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:52.141 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:52.141 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:52.141 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:52.141 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:52.141 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:52.141 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:52.141 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:52.141 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:52.141 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:52.141 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:52.141 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:52.141 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:52.141 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:52.141 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.282 ms 00:12:52.141 00:12:52.141 --- 10.0.0.2 ping statistics --- 00:12:52.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:52.141 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:12:52.141 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:52.141 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:52.141 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:12:52.141 00:12:52.141 --- 10.0.0.1 ping statistics --- 00:12:52.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:52.141 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:12:52.141 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:52.141 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:12:52.141 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:52.141 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:52.141 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:52.141 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:52.141 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:52.141 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:52.141 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:52.141 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:52.141 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:52.141 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:52.141 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:52.141 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=3349975 00:12:52.141 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 3349975 00:12:52.141 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 3349975 ']' 00:12:52.141 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:52.141 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:52.142 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:52.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
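With the namespace rebuilt for the multitarget pass, the target is again launched under ip netns exec and waited on via /var/tmp/spdk.sock; from there all configuration is JSON-RPC. The connect/disconnect pass traced at 18:07:20 reduces to the calls below (rpc_cmd is effectively a wrapper around spdk/scripts/rpc.py; the five-iteration loop runs with xtrace off, so its body is reconstructed from num_iterations=5 and the nvme-cli "disconnected 1 controller(s)" output rather than taken verbatim from the trace).

  rpc=./scripts/rpc.py                 # run from the spdk checkout
  $rpc nvmf_create_transport -t tcp -o -u 8192 -c 0    # options exactly as traced
  $rpc bdev_malloc_create 64 512       # 64 MiB backing bdev, 512 B blocks -> Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  for i in $(seq 1 5); do              # approximate loop body, assumed from the output
      nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
      nvme disconnect -n nqn.2016-06.io.spdk:cnode1    # prints the NQN:... lines above
  done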
00:12:52.142 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:52.142 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:52.142 18:07:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:52.142 [2024-07-24 18:07:44.515057] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 00:12:52.142 [2024-07-24 18:07:44.515099] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:52.142 EAL: No free 2048 kB hugepages reported on node 1 00:12:52.142 [2024-07-24 18:07:44.572365] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:52.142 [2024-07-24 18:07:44.652095] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:52.142 [2024-07-24 18:07:44.652130] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:52.142 [2024-07-24 18:07:44.652136] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:52.142 [2024-07-24 18:07:44.652142] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:52.142 [2024-07-24 18:07:44.652147] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:52.142 [2024-07-24 18:07:44.652190] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:52.142 [2024-07-24 18:07:44.652289] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:52.142 [2024-07-24 18:07:44.652385] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:52.142 [2024-07-24 18:07:44.652385] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:52.402 18:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:52.402 18:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:12:52.402 18:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:52.402 18:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:52.402 18:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:52.402 18:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:52.402 18:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:52.402 18:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:52.402 18:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:52.402 18:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:52.402 18:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:52.697 "nvmf_tgt_1" 00:12:52.697 18:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:52.697 "nvmf_tgt_2" 00:12:52.697 18:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:52.697 18:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:52.697 18:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:52.697 18:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:52.956 true 00:12:52.956 18:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:52.956 true 00:12:52.956 18:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:52.956 18:07:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:53.214 18:07:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:53.214 18:07:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:53.214 18:07:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:53.214 18:07:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:53.214 18:07:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:12:53.214 18:07:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:53.214 18:07:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:12:53.214 18:07:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:53.214 18:07:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:53.214 rmmod nvme_tcp 00:12:53.214 rmmod nvme_fabrics 00:12:53.214 rmmod nvme_keyring 00:12:53.214 18:07:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:53.214 18:07:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:12:53.214 18:07:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:12:53.214 18:07:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 3349975 ']' 00:12:53.214 18:07:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 3349975 00:12:53.214 18:07:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 3349975 ']' 00:12:53.214 18:07:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 3349975 00:12:53.214 18:07:46 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname 00:12:53.215 18:07:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:53.215 18:07:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3349975 00:12:53.215 18:07:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:53.215 18:07:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:53.215 18:07:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3349975' 00:12:53.215 killing process with pid 3349975 00:12:53.215 18:07:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 3349975 00:12:53.215 18:07:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 3349975 00:12:53.473 18:07:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:53.473 18:07:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:53.473 18:07:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:53.473 18:07:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:53.473 18:07:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:53.473 18:07:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:53.473 18:07:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:53.473 18:07:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:55.375 18:07:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:55.375 00:12:55.375 real 0m9.020s 00:12:55.375 user 0m8.856s 00:12:55.375 sys 0m4.213s 00:12:55.375 18:07:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:55.375 18:07:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:55.375 ************************************ 00:12:55.375 END TEST nvmf_multitarget 00:12:55.375 ************************************ 00:12:55.633 18:07:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:55.633 18:07:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:55.633 18:07:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:55.633 18:07:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:55.634 ************************************ 00:12:55.634 START TEST nvmf_rpc 00:12:55.634 ************************************ 00:12:55.634 18:07:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:55.634 * Looking for test storage... 
00:12:55.634 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:55.634 18:07:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:55.634 18:07:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:55.634 18:07:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:55.634 18:07:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:55.634 18:07:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:55.634 18:07:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:55.634 18:07:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:55.634 18:07:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:55.634 18:07:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:55.634 18:07:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:55.634 18:07:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:55.634 18:07:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:55.634 18:07:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:12:55.634 18:07:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:12:55.634 18:07:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:55.634 18:07:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:55.634 18:07:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:55.634 18:07:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:55.634 18:07:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:55.634 18:07:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:55.634 18:07:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:55.634 18:07:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:55.634 18:07:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.634 18:07:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.634 18:07:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.634 18:07:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:55.634 18:07:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.634 18:07:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:12:55.634 18:07:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:55.634 18:07:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:55.634 18:07:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:55.634 18:07:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:55.634 18:07:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:55.634 18:07:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:55.634 18:07:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:55.634 18:07:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:55.634 18:07:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:55.634 18:07:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:55.634 18:07:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:55.634 18:07:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:55.634 18:07:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:55.634 18:07:48 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:55.634 18:07:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:55.634 18:07:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:55.634 18:07:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:55.634 18:07:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:55.634 18:07:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:55.634 18:07:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:55.634 18:07:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:12:55.634 18:07:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.911 18:07:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:00.911 18:07:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:13:00.911 18:07:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:00.911 18:07:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:00.911 18:07:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:00.911 18:07:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:00.911 18:07:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:00.911 18:07:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:13:00.911 18:07:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:00.911 18:07:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:13:00.911 18:07:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:13:00.911 18:07:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:13:00.911 18:07:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:13:00.911 18:07:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:13:00.911 18:07:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:13:00.911 18:07:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:01.170 18:07:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:01.170 18:07:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:01.170 18:07:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:01.170 18:07:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:01.170 18:07:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:01.170 18:07:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:01.170 18:07:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:01.170 18:07:53 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:01.170 18:07:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:01.170 18:07:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:01.170 18:07:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:01.170 18:07:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:01.170 18:07:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:01.170 18:07:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:01.170 18:07:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:01.170 18:07:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:01.170 18:07:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:01.170 18:07:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:01.170 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:01.170 18:07:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:01.170 18:07:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:01.170 18:07:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:01.170 18:07:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:01.170 18:07:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:01.170 18:07:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:01.170 18:07:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:01.170 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:01.170 18:07:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:01.170 18:07:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:01.170 18:07:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:01.170 18:07:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:01.170 18:07:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:01.170 18:07:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:01.170 18:07:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:01.170 18:07:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:01.170 18:07:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:01.170 18:07:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:01.170 18:07:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:01.170 18:07:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:01.170 18:07:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:01.170 
18:07:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:01.170 18:07:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:01.170 18:07:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:01.170 Found net devices under 0000:86:00.0: cvl_0_0 00:13:01.170 18:07:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:01.170 18:07:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:01.170 18:07:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:01.170 18:07:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:01.170 18:07:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:01.170 18:07:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:01.170 18:07:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:01.170 18:07:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:01.170 18:07:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:01.170 Found net devices under 0000:86:00.1: cvl_0_1 00:13:01.170 18:07:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:01.170 18:07:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:01.170 18:07:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:13:01.170 18:07:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:01.170 18:07:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:01.170 18:07:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:01.170 18:07:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:01.170 18:07:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:01.170 18:07:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:01.170 18:07:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:01.170 18:07:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:01.170 18:07:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:01.170 18:07:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:01.170 18:07:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:01.170 18:07:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:01.170 18:07:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:01.171 18:07:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:01.171 18:07:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:01.171 18:07:54 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:01.171 18:07:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:01.171 18:07:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:01.171 18:07:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:01.171 18:07:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:01.171 18:07:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:01.171 18:07:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:01.430 18:07:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:01.430 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:01.430 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.166 ms 00:13:01.430 00:13:01.430 --- 10.0.0.2 ping statistics --- 00:13:01.430 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:01.430 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:13:01.430 18:07:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:01.430 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:01.430 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:13:01.430 00:13:01.430 --- 10.0.0.1 ping statistics --- 00:13:01.430 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:01.430 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:13:01.430 18:07:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:01.430 18:07:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:13:01.430 18:07:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:01.430 18:07:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:01.430 18:07:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:01.430 18:07:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:01.430 18:07:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:01.430 18:07:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:01.430 18:07:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:01.430 18:07:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:13:01.430 18:07:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:01.430 18:07:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:01.430 18:07:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.430 18:07:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=3353754 00:13:01.430 18:07:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 3353754 00:13:01.430 18:07:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:01.430 18:07:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 3353754 ']' 00:13:01.430 18:07:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:01.430 18:07:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:01.430 18:07:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:01.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:01.430 18:07:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:01.430 18:07:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.430 [2024-07-24 18:07:54.359699] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 00:13:01.430 [2024-07-24 18:07:54.359740] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:01.430 EAL: No free 2048 kB hugepages reported on node 1 00:13:01.430 [2024-07-24 18:07:54.416725] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:01.430 [2024-07-24 18:07:54.495992] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:01.430 [2024-07-24 18:07:54.496031] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:01.430 [2024-07-24 18:07:54.496037] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:01.430 [2024-07-24 18:07:54.496043] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:01.431 [2024-07-24 18:07:54.496048] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
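[Editor's note, not part of the captured log] The trace above launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace and then blocks in waitforlisten until the app answers on its UNIX-domain RPC socket. A minimal sketch of that launch-and-wait pattern, using the namespace, flags, and socket path from this run; the polling loop is an illustrative stand-in for the harness's waitforlisten helper, not its actual implementation:

    # Launch the target in the server-side namespace, as the trace does.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Poll the default RPC socket until the target responds; bail out if it died.
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || exit 1
        sleep 0.5
    done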
00:13:01.431 [2024-07-24 18:07:54.496084] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:01.431 [2024-07-24 18:07:54.496182] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:01.431 [2024-07-24 18:07:54.496288] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:01.431 [2024-07-24 18:07:54.496289] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:02.367 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:02.367 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0 00:13:02.367 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:02.367 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:02.367 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.367 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:02.367 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:13:02.367 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.367 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.367 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.367 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:13:02.367 "tick_rate": 2100000000, 00:13:02.367 "poll_groups": [ 00:13:02.367 { 00:13:02.367 "name": "nvmf_tgt_poll_group_000", 00:13:02.367 "admin_qpairs": 0, 00:13:02.367 "io_qpairs": 0, 00:13:02.367 "current_admin_qpairs": 0, 00:13:02.367 "current_io_qpairs": 0, 00:13:02.367 "pending_bdev_io": 0, 00:13:02.367 "completed_nvme_io": 0, 00:13:02.367 "transports": [] 00:13:02.367 }, 00:13:02.367 { 00:13:02.367 "name": "nvmf_tgt_poll_group_001", 00:13:02.367 "admin_qpairs": 0, 00:13:02.367 "io_qpairs": 0, 00:13:02.367 "current_admin_qpairs": 0, 00:13:02.367 "current_io_qpairs": 0, 00:13:02.367 "pending_bdev_io": 0, 00:13:02.367 "completed_nvme_io": 0, 00:13:02.367 "transports": [] 00:13:02.367 }, 00:13:02.367 { 00:13:02.367 "name": "nvmf_tgt_poll_group_002", 00:13:02.367 "admin_qpairs": 0, 00:13:02.367 "io_qpairs": 0, 00:13:02.367 "current_admin_qpairs": 0, 00:13:02.367 "current_io_qpairs": 0, 00:13:02.367 "pending_bdev_io": 0, 00:13:02.367 "completed_nvme_io": 0, 00:13:02.367 "transports": [] 00:13:02.367 }, 00:13:02.367 { 00:13:02.367 "name": "nvmf_tgt_poll_group_003", 00:13:02.367 "admin_qpairs": 0, 00:13:02.367 "io_qpairs": 0, 00:13:02.367 "current_admin_qpairs": 0, 00:13:02.367 "current_io_qpairs": 0, 00:13:02.367 "pending_bdev_io": 0, 00:13:02.368 "completed_nvme_io": 0, 00:13:02.368 "transports": [] 00:13:02.368 } 00:13:02.368 ] 00:13:02.368 }' 00:13:02.368 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:13:02.368 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:13:02.368 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:13:02.368 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:13:02.368 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 
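[Editor's note, not part of the captured log] The jcount check just above and the jsum checks that follow reduce the nvmf_get_stats JSON with the jq pipelines the trace itself shows (rpc.sh lines 14-20): jcount counts matching fields with wc -l, jsum totals them with awk. Restated as standalone commands against the same RPC:

    stats=$(./scripts/rpc.py nvmf_get_stats)              # same RPC the test issues
    echo "$stats" | jq '.poll_groups[].name' | wc -l      # jcount: 4 poll groups for -m 0xF
    echo "$stats" | jq '.poll_groups[].admin_qpairs' \
        | awk '{s+=$1} END {print s}'                     # jsum: totals 0 before any connects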
00:13:02.368 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:13:02.368 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:13:02.368 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:02.368 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.368 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.368 [2024-07-24 18:07:55.313136] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:02.368 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.368 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:13:02.368 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.368 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.368 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.368 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:13:02.368 "tick_rate": 2100000000, 00:13:02.368 "poll_groups": [ 00:13:02.368 { 00:13:02.368 "name": "nvmf_tgt_poll_group_000", 00:13:02.368 "admin_qpairs": 0, 00:13:02.368 "io_qpairs": 0, 00:13:02.368 "current_admin_qpairs": 0, 00:13:02.368 "current_io_qpairs": 0, 00:13:02.368 "pending_bdev_io": 0, 00:13:02.368 "completed_nvme_io": 0, 00:13:02.368 "transports": [ 00:13:02.368 { 00:13:02.368 "trtype": "TCP" 00:13:02.368 } 00:13:02.368 ] 00:13:02.368 }, 00:13:02.368 { 00:13:02.368 "name": "nvmf_tgt_poll_group_001", 00:13:02.368 "admin_qpairs": 0, 00:13:02.368 "io_qpairs": 0, 00:13:02.368 "current_admin_qpairs": 0, 00:13:02.368 "current_io_qpairs": 0, 00:13:02.368 "pending_bdev_io": 0, 00:13:02.368 "completed_nvme_io": 0, 00:13:02.368 "transports": [ 00:13:02.368 { 00:13:02.368 "trtype": "TCP" 00:13:02.368 } 00:13:02.368 ] 00:13:02.368 }, 00:13:02.368 { 00:13:02.368 "name": "nvmf_tgt_poll_group_002", 00:13:02.368 "admin_qpairs": 0, 00:13:02.368 "io_qpairs": 0, 00:13:02.368 "current_admin_qpairs": 0, 00:13:02.368 "current_io_qpairs": 0, 00:13:02.368 "pending_bdev_io": 0, 00:13:02.368 "completed_nvme_io": 0, 00:13:02.368 "transports": [ 00:13:02.368 { 00:13:02.368 "trtype": "TCP" 00:13:02.368 } 00:13:02.368 ] 00:13:02.368 }, 00:13:02.368 { 00:13:02.368 "name": "nvmf_tgt_poll_group_003", 00:13:02.368 "admin_qpairs": 0, 00:13:02.368 "io_qpairs": 0, 00:13:02.368 "current_admin_qpairs": 0, 00:13:02.368 "current_io_qpairs": 0, 00:13:02.368 "pending_bdev_io": 0, 00:13:02.368 "completed_nvme_io": 0, 00:13:02.368 "transports": [ 00:13:02.368 { 00:13:02.368 "trtype": "TCP" 00:13:02.368 } 00:13:02.368 ] 00:13:02.368 } 00:13:02.368 ] 00:13:02.368 }' 00:13:02.368 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:13:02.368 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:02.368 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:02.368 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:02.368 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:13:02.368 18:07:55 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:13:02.368 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:02.368 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:02.368 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:02.368 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:13:02.368 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:13:02.368 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:13:02.368 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:13:02.368 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:02.368 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.368 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.368 Malloc1 00:13:02.368 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.628 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:02.628 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.628 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.628 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.628 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:02.628 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.628 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.628 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.628 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:13:02.628 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.628 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.628 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.628 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:02.628 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.628 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.628 [2024-07-24 18:07:55.484990] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:02.628 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.628 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:13:02.628 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:13:02.629 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:13:02.629 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:13:02.629 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:02.629 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:13:02.629 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:02.629 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:13:02.629 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:02.629 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:13:02.629 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:13:02.629 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:13:02.629 [2024-07-24 18:07:55.509559] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562' 00:13:02.629 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:02.629 could not add new controller: failed to write to nvme-fabrics device 00:13:02.629 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:13:02.629 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:02.629 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:02.629 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:02.629 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:13:02.629 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.629 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.629 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.629 18:07:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 
--hostid=803833e2-2ada-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:03.565 18:07:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:13:03.565 18:07:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:03.566 18:07:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:03.566 18:07:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:03.566 18:07:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:06.099 18:07:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:06.099 18:07:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:06.099 18:07:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:06.099 18:07:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:06.099 18:07:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:06.099 18:07:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:06.099 18:07:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:06.099 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:06.099 18:07:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:06.099 18:07:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:06.099 18:07:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:06.099 18:07:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:06.099 18:07:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:06.099 18:07:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:06.099 18:07:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:06.099 18:07:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:13:06.099 18:07:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.099 18:07:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:06.099 18:07:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.099 18:07:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:06.099 18:07:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:13:06.099 18:07:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:06.099 18:07:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:13:06.099 18:07:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:06.099 18:07:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:13:06.099 18:07:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:06.099 18:07:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:13:06.099 18:07:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:06.099 18:07:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:13:06.099 18:07:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:13:06.099 18:07:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:06.099 [2024-07-24 18:07:58.812844] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562' 00:13:06.099 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:06.099 could not add new controller: failed to write to nvme-fabrics device 00:13:06.099 18:07:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:13:06.099 18:07:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:06.099 18:07:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:06.099 18:07:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:06.099 18:07:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:13:06.099 18:07:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.099 18:07:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:06.099 18:07:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.099 18:07:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:07.035 18:08:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:13:07.035 18:08:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:07.035 18:08:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:07.035 18:08:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:07.035 18:08:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 
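[Editor's note, not part of the captured log] The two rejected connects above ("Subsystem ... does not allow host") and the successful ones around them are the host access-control check: with allow_any_host disabled, a connect from an unlisted host NQN fails, while listing the host NQN via nvmf_subsystem_add_host, or re-enabling allow_any_host, lets the identical connect through. Condensed from the trace, with the subsystem, host NQN variables, and addresses as in this run:

    rpc.py nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1   # deny unlisted hosts
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"                # rejected: host not allowed
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 "$NVME_HOSTNQN"
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"                # accepted
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1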
00:13:09.570 18:08:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:09.570 18:08:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:09.570 18:08:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:09.570 18:08:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:09.570 18:08:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:09.570 18:08:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:09.570 18:08:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:09.570 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:09.570 18:08:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:09.570 18:08:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:09.570 18:08:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:09.570 18:08:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:09.570 18:08:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:09.570 18:08:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:09.570 18:08:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:09.570 18:08:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:09.570 18:08:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.570 18:08:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.570 18:08:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.570 18:08:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:13:09.570 18:08:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:09.570 18:08:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:09.570 18:08:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.570 18:08:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.570 18:08:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.570 18:08:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:09.570 18:08:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.570 18:08:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.570 [2024-07-24 18:08:02.165056] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:09.570 18:08:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.570 
18:08:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:09.570 18:08:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.570 18:08:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.570 18:08:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.570 18:08:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:09.570 18:08:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.570 18:08:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.570 18:08:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.570 18:08:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:10.504 18:08:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:10.504 18:08:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:10.504 18:08:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:10.504 18:08:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:10.504 18:08:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:12.406 18:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:12.406 18:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:12.406 18:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:12.406 18:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:12.406 18:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:12.406 18:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:12.406 18:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:12.406 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:12.406 18:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:12.406 18:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:12.406 18:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:12.406 18:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:12.406 18:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:12.406 18:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:12.406 18:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 
00:13:12.406 18:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:12.406 18:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.406 18:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.406 18:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.406 18:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:12.406 18:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.406 18:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.406 18:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.406 18:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:12.406 18:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:12.406 18:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.406 18:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.406 18:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.406 18:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:12.406 18:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.406 18:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.406 [2024-07-24 18:08:05.485740] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:12.664 18:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.664 18:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:12.664 18:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.664 18:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.664 18:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.664 18:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:12.664 18:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.664 18:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.664 18:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.664 18:08:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:13.601 18:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:13.601 18:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- common/autotest_common.sh@1198 -- # local i=0 00:13:13.601 18:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:13.601 18:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:13.601 18:08:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:16.132 18:08:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:16.132 18:08:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:16.132 18:08:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:16.132 18:08:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:16.132 18:08:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:16.132 18:08:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:16.132 18:08:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:16.132 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:16.132 18:08:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:16.132 18:08:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:16.132 18:08:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:16.132 18:08:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:16.132 18:08:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:16.132 18:08:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:16.132 18:08:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:16.132 18:08:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:16.132 18:08:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.132 18:08:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.132 18:08:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.132 18:08:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:16.132 18:08:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.132 18:08:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.132 18:08:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.132 18:08:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:16.132 18:08:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:16.132 18:08:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.132 18:08:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:13:16.132 18:08:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.132 18:08:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:16.132 18:08:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.132 18:08:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.132 [2024-07-24 18:08:08.777522] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:16.132 18:08:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.132 18:08:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:16.132 18:08:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.132 18:08:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.133 18:08:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.133 18:08:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:16.133 18:08:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.133 18:08:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:16.133 18:08:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.133 18:08:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:17.142 18:08:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:17.142 18:08:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:17.142 18:08:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:17.142 18:08:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:17.142 18:08:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:19.045 18:08:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:19.045 18:08:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:19.045 18:08:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:19.045 18:08:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:19.045 18:08:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:19.045 18:08:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:19.045 18:08:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:19.045 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:19.045 18:08:12 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:19.045 18:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:19.045 18:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:19.045 18:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:19.045 18:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:19.045 18:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:19.045 18:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:19.045 18:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:19.045 18:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.045 18:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.045 18:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.045 18:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:19.045 18:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.045 18:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.305 18:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.305 18:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:19.305 18:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:19.305 18:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.305 18:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.305 18:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.305 18:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:19.305 18:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.305 18:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.305 [2024-07-24 18:08:12.146651] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:19.305 18:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.305 18:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:19.305 18:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.305 18:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.305 18:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.305 18:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:19.305 18:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.305 18:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.305 18:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.305 18:08:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:20.682 18:08:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:20.682 18:08:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:20.683 18:08:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:20.683 18:08:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:20.683 18:08:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:22.587 18:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:22.587 18:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:22.587 18:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:22.587 18:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:22.587 18:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:22.587 18:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:22.587 18:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:22.587 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:22.587 18:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:22.587 18:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:22.587 18:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:22.587 18:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:22.587 18:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:22.587 18:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:22.587 18:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:22.587 18:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:22.587 18:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.587 18:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.587 18:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.587 18:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:22.587 18:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.587 18:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.587 18:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.587 18:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:22.587 18:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:22.587 18:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.587 18:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.587 18:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.587 18:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:22.587 18:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.587 18:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.587 [2024-07-24 18:08:15.462027] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:22.587 18:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.587 18:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:22.587 18:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.587 18:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.587 18:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.587 18:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:22.587 18:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.587 18:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.587 18:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.587 18:08:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:23.524 18:08:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:23.524 18:08:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:23.524 18:08:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:23.524 18:08:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:23.524 18:08:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:26.060 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:26.060 18:08:18 
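The waitforserial helper being traced here polls lsblk until a block device with the expected serial appears; the retry bound, 2-second settle delay, and grep -c counting are all visible in the common/autotest_common.sh xtrace (@1198-@1208). A sketch reconstructed from that trace:

    # Poll until the expected number of devices with this serial are visible.
    waitforserial() {
        local serial=$1 i=0
        local nvme_device_counter=${2:-1} nvme_devices=0
        while (( i++ <= 15 )); do
            sleep 2                       # 2s settle delay, as in the trace
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            (( nvme_devices == nvme_device_counter )) && return 0
        done
        return 1
    }
    waitforserial SPDKISFASTANDAWESOME    # serial set by nvmf_create_subsystem -s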
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:26.060 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:26.060 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:26.060 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:26.060 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:26.060 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:26.060 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:26.060 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:26.060 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:26.060 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:26.060 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:26.060 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:26.060 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:26.060 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:26.060 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:26.060 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.060 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.060 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.060 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:26.060 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.060 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.060 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.060 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:13:26.060 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:26.060 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:26.060 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.060 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.060 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.060 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:26.060 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.060 18:08:18 
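waitforserial_disconnect, traced just above (@1219-@1231), is the inverse: it loops until the serial disappears from lsblk, using grep's exit status rather than a count. A sketch; the retry delay is an assumption, since in this run the device is already gone on the first probe:

    # Wait until no block device with the given serial remains visible.
    waitforserial_disconnect() {
        local serial=$1 i=0
        while (( i++ <= 15 )); do
            lsblk -l -o NAME,SERIAL | grep -q -w "$serial" || return 0
            sleep 1                       # delay assumed; trace returns immediately
        done
        return 1
    }
    waitforserial_disconnect SPDKISFASTANDAWESOME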
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.060 [2024-07-24 18:08:18.726525] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:26.060 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.060 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:26.060 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.060 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.060 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.060 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:26.060 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.060 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.060 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.060 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:26.060 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.060 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.060 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.060 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:26.060 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.060 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.060 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.060 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:26.060 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:26.060 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.060 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.060 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.060 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:26.060 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.060 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.060 [2024-07-24 18:08:18.774629] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:26.060 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.061 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:26.061 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.061 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.061 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.061 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:26.061 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.061 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.061 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.061 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:26.061 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.061 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.061 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.061 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:26.061 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.061 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.061 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.061 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:26.061 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:26.061 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.061 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.061 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.061 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:26.061 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.061 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.061 [2024-07-24 18:08:18.826779] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:26.061 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.061 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:26.061 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.061 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.061 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.061 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:26.061 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.061 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.061 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.061 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:26.061 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.061 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.061 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.061 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:26.061 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.061 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.061 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.061 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:26.061 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:26.061 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.061 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.061 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.061 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:26.061 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.061 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.061 [2024-07-24 18:08:18.874956] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:26.061 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.061 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:26.061 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.061 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.061 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.061 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:26.061 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.061 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.061 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.061 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:26.061 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.061 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.061 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.061 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:26.061 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.061 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.061 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.061 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:26.061 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:26.061 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.061 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.061 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.061 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:26.061 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.061 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.061 [2024-07-24 18:08:18.923102] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:26.061 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.061 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:26.061 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.061 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.061 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.061 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:26.061 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.061 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.061 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.061 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:26.061 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.061 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.061 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.061 18:08:18 
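This second loop (target/rpc.sh@99-@107) is pure namespace churn: no host ever connects, and nvmf_subsystem_add_ns is invoked without -n, so the target auto-assigns the first free nsid (1), which @105 then removes explicitly. One pass, under the same rpc.py assumptions as before:

    # Namespace add/remove cycle with auto-assigned nsid, no host I/O.
    NQN=nqn.2016-06.io.spdk:cnode1
    rpc.py nvmf_create_subsystem "$NQN" -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_ns "$NQN" Malloc1    # no -n: target picks nsid 1
    rpc.py nvmf_subsystem_allow_any_host "$NQN"
    rpc.py nvmf_subsystem_remove_ns "$NQN" 1
    rpc.py nvmf_delete_subsystem "$NQN"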
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:26.061 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.061 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.061 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.061 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:13:26.061 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.061 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:26.061 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.061 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:13:26.061 "tick_rate": 2100000000, 00:13:26.061 "poll_groups": [ 00:13:26.061 { 00:13:26.061 "name": "nvmf_tgt_poll_group_000", 00:13:26.061 "admin_qpairs": 2, 00:13:26.061 "io_qpairs": 168, 00:13:26.062 "current_admin_qpairs": 0, 00:13:26.062 "current_io_qpairs": 0, 00:13:26.062 "pending_bdev_io": 0, 00:13:26.062 "completed_nvme_io": 268, 00:13:26.062 "transports": [ 00:13:26.062 { 00:13:26.062 "trtype": "TCP" 00:13:26.062 } 00:13:26.062 ] 00:13:26.062 }, 00:13:26.062 { 00:13:26.062 "name": "nvmf_tgt_poll_group_001", 00:13:26.062 "admin_qpairs": 2, 00:13:26.062 "io_qpairs": 168, 00:13:26.062 "current_admin_qpairs": 0, 00:13:26.062 "current_io_qpairs": 0, 00:13:26.062 "pending_bdev_io": 0, 00:13:26.062 "completed_nvme_io": 295, 00:13:26.062 "transports": [ 00:13:26.062 { 00:13:26.062 "trtype": "TCP" 00:13:26.062 } 00:13:26.062 ] 00:13:26.062 }, 00:13:26.062 { 00:13:26.062 "name": "nvmf_tgt_poll_group_002", 00:13:26.062 "admin_qpairs": 1, 00:13:26.062 "io_qpairs": 168, 00:13:26.062 "current_admin_qpairs": 0, 00:13:26.062 "current_io_qpairs": 0, 00:13:26.062 "pending_bdev_io": 0, 00:13:26.062 "completed_nvme_io": 240, 00:13:26.062 "transports": [ 00:13:26.062 { 00:13:26.062 "trtype": "TCP" 00:13:26.062 } 00:13:26.062 ] 00:13:26.062 }, 00:13:26.062 { 00:13:26.062 "name": "nvmf_tgt_poll_group_003", 00:13:26.062 "admin_qpairs": 2, 00:13:26.062 "io_qpairs": 168, 00:13:26.062 "current_admin_qpairs": 0, 00:13:26.062 "current_io_qpairs": 0, 00:13:26.062 "pending_bdev_io": 0, 00:13:26.062 "completed_nvme_io": 219, 00:13:26.062 "transports": [ 00:13:26.062 { 00:13:26.062 "trtype": "TCP" 00:13:26.062 } 00:13:26.062 ] 00:13:26.062 } 00:13:26.062 ] 00:13:26.062 }' 00:13:26.062 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:26.062 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:26.062 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:26.062 18:08:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:26.062 18:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:26.062 18:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:26.062 18:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:26.062 18:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq 
'.poll_groups[].io_qpairs' 00:13:26.062 18:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:26.062 18:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:13:26.062 18:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:26.062 18:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:26.062 18:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:13:26.062 18:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:26.062 18:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:13:26.062 18:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:26.062 18:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:13:26.062 18:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:26.062 18:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:26.062 rmmod nvme_tcp 00:13:26.062 rmmod nvme_fabrics 00:13:26.062 rmmod nvme_keyring 00:13:26.062 18:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:26.321 18:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:13:26.321 18:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:13:26.321 18:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 3353754 ']' 00:13:26.321 18:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 3353754 00:13:26.321 18:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 3353754 ']' 00:13:26.321 18:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 3353754 00:13:26.321 18:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname 00:13:26.321 18:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:26.321 18:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3353754 00:13:26.321 18:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:26.321 18:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:26.321 18:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3353754' 00:13:26.321 killing process with pid 3353754 00:13:26.321 18:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # kill 3353754 00:13:26.321 18:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@974 -- # wait 3353754 00:13:26.580 18:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:26.580 18:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:26.580 18:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:26.580 18:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:26.580 18:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:26.580 18:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:26.580 18:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:26.580 18:08:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:28.485 18:08:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:28.485 00:13:28.485 real 0m32.969s 00:13:28.485 user 1m41.057s 00:13:28.485 sys 0m5.936s 00:13:28.485 18:08:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:28.485 18:08:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.485 ************************************ 00:13:28.485 END TEST nvmf_rpc 00:13:28.485 ************************************ 00:13:28.485 18:08:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:28.485 18:08:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:28.485 18:08:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:28.485 18:08:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:28.485 ************************************ 00:13:28.485 START TEST nvmf_invalid 00:13:28.485 ************************************ 00:13:28.485 18:08:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:28.743 * Looking for test storage... 00:13:28.743 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:28.743 18:08:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:28.743 18:08:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:13:28.743 18:08:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:28.743 18:08:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:28.743 18:08:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:28.743 18:08:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:28.743 18:08:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:28.743 18:08:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:28.743 18:08:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:28.743 18:08:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:28.743 18:08:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:28.743 18:08:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:28.743 18:08:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:13:28.743 18:08:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:13:28.743 18:08:21 
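The jsum checks that closed the nvmf_rpc run above (target/rpc.sh@112-@113) sum one numeric field across every poll group in the nvmf_get_stats JSON and only assert the totals are positive (7 admin qpairs and 672 I/O qpairs in this run). A sketch of that helper per the trace at target/rpc.sh@19-@20; the script captures the JSON once into a variable, while this version re-queries for brevity:

    # Sum a per-poll-group counter from nvmf_get_stats via jq + awk.
    jsum() {
        local filter=$1
        rpc.py nvmf_get_stats | jq "$filter" | awk '{s+=$1} END {print s}'
    }
    admin_total=$(jsum '.poll_groups[].admin_qpairs')   # 7 in this run
    io_total=$(jsum '.poll_groups[].io_qpairs')         # 672 in this run
    (( admin_total > 0 && io_total > 0 ))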
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:28.743 18:08:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:28.743 18:08:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:28.743 18:08:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:28.743 18:08:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:28.743 18:08:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:28.743 18:08:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:28.743 18:08:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:28.743 18:08:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.743 18:08:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.743 18:08:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.743 18:08:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:13:28.743 18:08:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.744 18:08:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:13:28.744 18:08:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:28.744 18:08:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:28.744 18:08:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:28.744 18:08:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:28.744 18:08:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:28.744 18:08:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:28.744 18:08:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:28.744 18:08:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:28.744 18:08:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:28.744 18:08:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:28.744 18:08:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:28.744 18:08:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:13:28.744 18:08:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:13:28.744 18:08:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:13:28.744 18:08:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:28.744 18:08:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:28.744 18:08:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:28.744 18:08:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:28.744 18:08:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:28.744 18:08:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:28.744 18:08:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:28.744 18:08:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:28.744 18:08:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:28.744 18:08:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:28.744 18:08:21 
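The NVME_HOST setup traced a few entries up (nvmf/common.sh@17-@19) generates a uuid-based host NQN once and reuses it for every nvme connect in these tests; the hostid is the uuid portion of that NQN. A sketch, where the suffix-stripping is an assumption consistent with the values in this log:

    NVME_HOSTNQN=$(nvme gen-hostnqn)      # nqn.2014-08.org.nvmexpress:uuid:...
    NVME_HOSTID=${NVME_HOSTNQN##*:}       # uuid part only, per the traced values
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
    nvme connect "${NVME_HOST[@]}" -t tcp \
        -n nqn.2016-06.io.spdk:testnqn -a 10.0.0.2 -s 4420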
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:13:28.744 18:08:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:34.016 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:34.016 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:13:34.016 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:34.016 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:34.016 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:34.016 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:34.016 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:34.016 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:13:34.016 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:34.016 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:13:34.016 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:13:34.016 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:13:34.016 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:13:34.016 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:13:34.016 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:13:34.016 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:34.016 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:34.016 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:34.016 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:34.016 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:34.016 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:34.016 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:34.016 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:34.016 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:34.016 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:34.016 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:34.016 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:34.016 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:34.016 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 
]] 00:13:34.017 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:34.017 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:34.017 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:34.017 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:34.017 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:34.017 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:34.017 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:34.017 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:34.017 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:34.017 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:34.017 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:34.017 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:34.017 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:34.017 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:34.017 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:34.017 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:34.017 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:34.017 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:34.017 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:34.017 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:34.017 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:34.017 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:34.017 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:34.017 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:34.017 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:34.017 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:34.017 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:34.017 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:34.017 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:34.017 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:34.017 Found net devices under 0000:86:00.0: cvl_0_0 00:13:34.017 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:34.017 18:08:26 
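The device-discovery phase above classifies NICs by PCI ID (0x8086:0x159b is an Intel E810, bound to the ice driver selected by SPDK_TEST_NVMF_NICS=e810) and then maps each PCI function to its kernel net device through sysfs. A minimal sketch of that mapping, using the addresses from this run:

    # Map each NVMF-capable PCI function to its net device name via sysfs.
    for pci in 0000:86:00.0 0000:86:00.1; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
        pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done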
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:34.017 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:34.017 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:34.017 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:34.017 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:34.017 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:34.017 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:34.017 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:34.017 Found net devices under 0000:86:00.1: cvl_0_1 00:13:34.017 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:34.017 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:34.017 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:13:34.017 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:34.017 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:34.017 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:34.017 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:34.017 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:34.017 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:34.017 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:34.017 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:34.017 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:34.017 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:34.017 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:34.017 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:34.017 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:34.017 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:34.017 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:34.017 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:34.017 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:34.017 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:34.017 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:34.017 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:34.017 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:34.017 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:34.017 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:34.017 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:34.017 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:13:34.017 00:13:34.017 --- 10.0.0.2 ping statistics --- 00:13:34.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:34.017 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:13:34.017 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:34.017 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:34.017 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:13:34.017 00:13:34.017 --- 10.0.0.1 ping statistics --- 00:13:34.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:34.017 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:13:34.017 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:34.017 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:13:34.017 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:34.017 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:34.017 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:34.017 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:34.017 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:34.017 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:34.017 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:34.017 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:34.017 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:34.017 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:34.017 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:34.017 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=3361354 00:13:34.017 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 3361354 00:13:34.017 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:34.017 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 3361354 ']' 00:13:34.017 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:34.017 18:08:26 
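The nvmf_tcp_init sequence traced above splits the two E810 ports across a network namespace, so the initiator (cvl_0_1, 10.0.0.1, root namespace) and the target (cvl_0_0, 10.0.0.2, inside cvl_0_0_ns_spdk) talk over a real link, verified by the two pings. Condensed from the commands in the log:

    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"          # target-side port moves into the ns
    ip addr add 10.0.0.1/24 dev cvl_0_1      # initiator side stays in the root ns
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                       # sanity-check both directions
    ip netns exec "$NS" ping -c 1 10.0.0.1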
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:34.017 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:34.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:34.017 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:34.017 18:08:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:34.017 [2024-07-24 18:08:26.984621] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 00:13:34.017 [2024-07-24 18:08:26.984667] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:34.017 EAL: No free 2048 kB hugepages reported on node 1 00:13:34.017 [2024-07-24 18:08:27.045364] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:34.276 [2024-07-24 18:08:27.127074] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:34.276 [2024-07-24 18:08:27.127108] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:34.276 [2024-07-24 18:08:27.127115] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:34.276 [2024-07-24 18:08:27.127121] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:34.276 [2024-07-24 18:08:27.127126] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
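nvmfappstart then launches nvmf_tgt inside that namespace with all tracepoint groups enabled (-e 0xFFFF, confirmed by the "Tracepoint Group Mask 0xFFFF" notice) and a 4-core mask (-m 0xF, the four reactors above), and blocks until the app answers on its RPC socket. A sketch of that startup-and-wait pattern, assuming the default /var/tmp/spdk.sock path shown in the log; the real waitforlisten helper is more elaborate:

    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &   # -i 0: shm id for this instance
    nvmfpid=$!
    # Poll the RPC socket until the target is ready to serve requests.
    while ! rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || { echo "nvmf_tgt exited during startup" >&2; exit 1; }
        sleep 0.5
    done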
00:13:34.276 [2024-07-24 18:08:27.127166] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:34.276 [2024-07-24 18:08:27.127261] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:34.276 [2024-07-24 18:08:27.127351] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:34.276 [2024-07-24 18:08:27.127353] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:34.844 18:08:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:34.844 18:08:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0 00:13:34.844 18:08:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:34.844 18:08:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:34.844 18:08:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:34.844 18:08:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:34.844 18:08:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:34.844 18:08:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode30677 00:13:35.103 [2024-07-24 18:08:27.990365] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:35.103 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:13:35.103 { 00:13:35.103 "nqn": "nqn.2016-06.io.spdk:cnode30677", 00:13:35.103 "tgt_name": "foobar", 00:13:35.103 "method": "nvmf_create_subsystem", 00:13:35.103 "req_id": 1 00:13:35.103 } 00:13:35.103 Got JSON-RPC error response 00:13:35.103 response: 00:13:35.103 { 00:13:35.103 "code": -32603, 00:13:35.103 "message": "Unable to find target foobar" 00:13:35.103 }' 00:13:35.103 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:13:35.103 { 00:13:35.103 "nqn": "nqn.2016-06.io.spdk:cnode30677", 00:13:35.103 "tgt_name": "foobar", 00:13:35.103 "method": "nvmf_create_subsystem", 00:13:35.103 "req_id": 1 00:13:35.103 } 00:13:35.103 Got JSON-RPC error response 00:13:35.103 response: 00:13:35.103 { 00:13:35.103 "code": -32603, 00:13:35.103 "message": "Unable to find target foobar" 00:13:35.103 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:35.103 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:35.103 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode3704 00:13:35.103 [2024-07-24 18:08:28.179044] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3704: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:35.363 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:13:35.363 { 00:13:35.363 "nqn": "nqn.2016-06.io.spdk:cnode3704", 00:13:35.363 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:35.363 "method": "nvmf_create_subsystem", 00:13:35.363 "req_id": 1 00:13:35.363 } 00:13:35.363 Got JSON-RPC error 
response 00:13:35.363 response: 00:13:35.363 { 00:13:35.363 "code": -32602, 00:13:35.363 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:35.363 }' 00:13:35.363 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:13:35.363 { 00:13:35.363 "nqn": "nqn.2016-06.io.spdk:cnode3704", 00:13:35.363 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:35.363 "method": "nvmf_create_subsystem", 00:13:35.363 "req_id": 1 00:13:35.363 } 00:13:35.363 Got JSON-RPC error response 00:13:35.363 response: 00:13:35.363 { 00:13:35.363 "code": -32602, 00:13:35.363 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:35.363 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:35.363 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:35.363 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode19043 00:13:35.363 [2024-07-24 18:08:28.363611] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19043: invalid model number 'SPDK_Controller' 00:13:35.363 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:13:35.363 { 00:13:35.363 "nqn": "nqn.2016-06.io.spdk:cnode19043", 00:13:35.363 "model_number": "SPDK_Controller\u001f", 00:13:35.363 "method": "nvmf_create_subsystem", 00:13:35.363 "req_id": 1 00:13:35.363 } 00:13:35.363 Got JSON-RPC error response 00:13:35.363 response: 00:13:35.363 { 00:13:35.363 "code": -32602, 00:13:35.363 "message": "Invalid MN SPDK_Controller\u001f" 00:13:35.363 }' 00:13:35.363 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:13:35.363 { 00:13:35.363 "nqn": "nqn.2016-06.io.spdk:cnode19043", 00:13:35.363 "model_number": "SPDK_Controller\u001f", 00:13:35.363 "method": "nvmf_create_subsystem", 00:13:35.363 "req_id": 1 00:13:35.363 } 00:13:35.363 Got JSON-RPC error response 00:13:35.363 response: 00:13:35.363 { 00:13:35.363 "code": -32602, 00:13:35.363 "message": "Invalid MN SPDK_Controller\u001f" 00:13:35.363 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:35.363 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:13:35.363 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:13:35.363 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:35.363 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:35.363 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:35.363 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:35.363 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.363 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 78 00:13:35.363 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:13:35.363 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:13:35.363 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.363 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.363 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:13:35.363 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:13:35.363 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:13:35.363 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.363 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.363 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:13:35.363 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:13:35.363 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:13:35.363 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.363 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.363 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:13:35.363 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:13:35.363 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
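The per-character trace running through this stretch is invalid.sh's gen_random_s helper assembling a 21-byte serial number: for each position it picks a byte value from its chars table (codes 32 through 127), converts it with printf %x, and appends it via echo -e '\xNN'. A behaviorally similar condensed sketch (the random index selection and variable handling are assumptions; the real helper enumerates an explicit chars array, as the trace shows) is:

  gen_random_s() {
      local length=$1 string= ll code fmt ch
      for ((ll = 0; ll < length; ll++)); do
          code=$((RANDOM % 96 + 32))      # byte values 32..127, matching the chars table above
          fmt=$(printf '\\x%x' "$code")   # e.g. 78 -> \x4e, the "printf %x" step in the trace
          printf -v ch "$fmt"             # expand \x4e to N; -v keeps the byte even when it is a space
          string+=$ch
      done
      echo "$string"
  }

Called as gen_random_s 21, it yields strings like the 'NN?!~>Wy4>%W-pi`:cMIB' serial tested below.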
00:13:35.363 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.363 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.363 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:13:35.363 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:13:35.363 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:13:35.363 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.363 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.363 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:13:35.363 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:13:35.363 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:13:35.363 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.363 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.363 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:13:35.363 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:13:35.363 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:13:35.363 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.363 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.622 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:13:35.622 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:13:35.622 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:13:35.622 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.622 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.622 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:13:35.622 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:13:35.622 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:13:35.622 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.622 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.622 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:13:35.622 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:13:35.622 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:13:35.622 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.622 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.623 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:13:35.623 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x25' 00:13:35.623 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:13:35.623 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.623 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.623 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:13:35.623 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:13:35.623 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:13:35.623 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.623 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.623 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:13:35.623 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:13:35.623 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:13:35.623 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.623 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.623 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:13:35.623 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:13:35.623 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:13:35.623 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.623 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.623 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:13:35.623 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:13:35.623 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:13:35.623 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.623 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.623 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:13:35.623 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:13:35.623 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:13:35.623 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.623 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.623 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:13:35.623 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:13:35.623 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:13:35.623 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.623 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.623 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf 
%x 99 00:13:35.623 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:13:35.623 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:13:35.623 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.623 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.623 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:13:35.623 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:13:35.623 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:13:35.623 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.623 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.623 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:13:35.623 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:13:35.623 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:13:35.623 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.623 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.623 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:13:35.623 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:13:35.623 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:13:35.623 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.623 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.623 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ N == \- ]] 00:13:35.623 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'NN?!~>Wy4>%W-pi`:cMIB' 00:13:35.623 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'NN?!~>Wy4>%W-pi`:cMIB' nqn.2016-06.io.spdk:cnode7470 00:13:35.623 [2024-07-24 18:08:28.696713] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7470: invalid serial number 'NN?!~>Wy4>%W-pi`:cMIB' 00:13:35.883 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:13:35.883 { 00:13:35.883 "nqn": "nqn.2016-06.io.spdk:cnode7470", 00:13:35.883 "serial_number": "NN?!~>Wy4>%W-pi`:cMIB", 00:13:35.883 "method": "nvmf_create_subsystem", 00:13:35.883 "req_id": 1 00:13:35.883 } 00:13:35.883 Got JSON-RPC error response 00:13:35.883 response: 00:13:35.883 { 00:13:35.883 "code": -32602, 00:13:35.883 "message": "Invalid SN NN?!~>Wy4>%W-pi`:cMIB" 00:13:35.883 }' 00:13:35.883 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:13:35.883 { 00:13:35.883 "nqn": "nqn.2016-06.io.spdk:cnode7470", 00:13:35.883 "serial_number": "NN?!~>Wy4>%W-pi`:cMIB", 00:13:35.883 "method": "nvmf_create_subsystem", 00:13:35.883 "req_id": 1 00:13:35.883 } 00:13:35.883 Got JSON-RPC error response 00:13:35.883 response: 00:13:35.883 { 00:13:35.883 "code": -32602, 
00:13:35.883 "message": "Invalid SN NN?!~>Wy4>%W-pi`:cMIB" 00:13:35.883 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:35.883 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:13:35.883 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:13:35.883 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:35.883 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:35.883 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:35.883 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:35.883 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.883 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:13:35.883 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:13:35.883 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:13:35.883 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.883 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.883 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:13:35.883 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:13:35.883 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:13:35.883 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.883 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.883 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:13:35.883 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:13:35.883 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:13:35.883 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.883 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.883 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:13:35.883 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:13:35.883 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:13:35.883 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.883 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.883 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:13:35.883 
18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:13:35.883 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:13:35.883 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.883 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.883 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:13:35.883 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:13:35.883 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:13:35.883 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.883 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.883 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:13:35.883 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:13:35.883 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:13:35.883 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.883 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.883 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:13:35.883 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:13:35.883 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
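Each negative test in this file follows the same pattern visible in the SN and MN cases above: drive scripts/rpc.py with one deliberately bad parameter, capture the JSON-RPC error body into out, then glob-match the expected message (the *\I\n\v\a\l\i\d\ \S\N* patterns in the trace). A minimal sketch of that check, with the exit-status handling an assumption since invalid.sh's internals are not fully shown here:

  out=$(scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' \
        nqn.2016-06.io.spdk:cnode3704 2>&1 || true)   # the 0x1f control character makes the SN invalid
  [[ $out == *"Invalid SN"* ]]                        # target answers with JSON-RPC code -32602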
00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
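The random model number this loop is still building feeds the same validation path as the cntlid-range probes traced further below; from those errors, nvmf_create_subsystem accepts controller IDs only when 1 <= min_cntlid <= max_cntlid <= 65519. A sketch of the probes as they appear below (rpc.py path shortened; -i is min_cntlid and -I is max_cntlid, matching the trace):

  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9794 -i 0        # Invalid cntlid range [0-65519]
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2458 -I 65520    # Invalid cntlid range [1-65520]
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode17575 -i 6 -I 5  # Invalid cntlid range [6-5]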
00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
string+='^' 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.884 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.885 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:13:35.885 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
echo -e '\x74' 00:13:35.885 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:13:35.885 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.885 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.885 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:13:35.885 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:13:35.885 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:13:35.885 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.885 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.885 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:13:35.885 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:13:35.885 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:13:35.885 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.885 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.885 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:13:35.885 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:13:35.885 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:13:35.885 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.885 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.885 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:13:35.885 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:13:35.885 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:13:35.885 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.885 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.885 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:13:35.885 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:13:35.885 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:13:35.885 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.885 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.145 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:13:36.145 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:13:36.145 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:13:36.145 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.145 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.145 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- 
# printf %x 105 00:13:36.145 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:13:36.145 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:13:36.145 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.145 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.145 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:13:36.145 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:13:36.145 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:13:36.145 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.145 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.145 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:13:36.145 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:13:36.145 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:13:36.145 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.145 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.145 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ H == \- ]] 00:13:36.145 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'Haf1A`CWb~}[C?,AOM,{vIsD^7=|M6Et?lg=xqie;' 00:13:36.145 18:08:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'Haf1A`CWb~}[C?,AOM,{vIsD^7=|M6Et?lg=xqie;' nqn.2016-06.io.spdk:cnode22414 00:13:36.145 [2024-07-24 18:08:29.146231] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22414: invalid model number 'Haf1A`CWb~}[C?,AOM,{vIsD^7=|M6Et?lg=xqie;' 00:13:36.145 18:08:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:13:36.145 { 00:13:36.145 "nqn": "nqn.2016-06.io.spdk:cnode22414", 00:13:36.145 "model_number": "Haf1A`CWb~}[C?,AOM,{vIsD^7=|M6Et?lg=xqie;", 00:13:36.145 "method": "nvmf_create_subsystem", 00:13:36.145 "req_id": 1 00:13:36.145 } 00:13:36.145 Got JSON-RPC error response 00:13:36.145 response: 00:13:36.145 { 00:13:36.145 "code": -32602, 00:13:36.145 "message": "Invalid MN Haf1A`CWb~}[C?,AOM,{vIsD^7=|M6Et?lg=xqie;" 00:13:36.145 }' 00:13:36.145 18:08:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:13:36.145 { 00:13:36.145 "nqn": "nqn.2016-06.io.spdk:cnode22414", 00:13:36.145 "model_number": "Haf1A`CWb~}[C?,AOM,{vIsD^7=|M6Et?lg=xqie;", 00:13:36.145 "method": "nvmf_create_subsystem", 00:13:36.145 "req_id": 1 00:13:36.145 } 00:13:36.145 Got JSON-RPC error response 00:13:36.145 response: 00:13:36.145 { 00:13:36.145 "code": -32602, 00:13:36.145 "message": "Invalid MN Haf1A`CWb~}[C?,AOM,{vIsD^7=|M6Et?lg=xqie;" 00:13:36.145 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:36.145 18:08:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:13:36.405 [2024-07-24 18:08:29.326895] tcp.c: 677:nvmf_tcp_create: 
*NOTICE*: *** TCP Transport Init *** 00:13:36.405 18:08:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:13:36.663 18:08:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:13:36.663 18:08:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:13:36.663 18:08:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:13:36.663 18:08:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:13:36.663 18:08:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:13:36.663 [2024-07-24 18:08:29.689317] nvmf_rpc.c: 809:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:13:36.663 18:08:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:13:36.663 { 00:13:36.663 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:36.663 "listen_address": { 00:13:36.663 "trtype": "tcp", 00:13:36.663 "traddr": "", 00:13:36.663 "trsvcid": "4421" 00:13:36.663 }, 00:13:36.663 "method": "nvmf_subsystem_remove_listener", 00:13:36.663 "req_id": 1 00:13:36.663 } 00:13:36.663 Got JSON-RPC error response 00:13:36.663 response: 00:13:36.663 { 00:13:36.663 "code": -32602, 00:13:36.663 "message": "Invalid parameters" 00:13:36.663 }' 00:13:36.663 18:08:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:13:36.663 { 00:13:36.663 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:36.663 "listen_address": { 00:13:36.663 "trtype": "tcp", 00:13:36.663 "traddr": "", 00:13:36.663 "trsvcid": "4421" 00:13:36.663 }, 00:13:36.663 "method": "nvmf_subsystem_remove_listener", 00:13:36.663 "req_id": 1 00:13:36.663 } 00:13:36.663 Got JSON-RPC error response 00:13:36.663 response: 00:13:36.663 { 00:13:36.663 "code": -32602, 00:13:36.663 "message": "Invalid parameters" 00:13:36.663 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:13:36.663 18:08:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9794 -i 0 00:13:36.922 [2024-07-24 18:08:29.869906] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9794: invalid cntlid range [0-65519] 00:13:36.922 18:08:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:13:36.922 { 00:13:36.922 "nqn": "nqn.2016-06.io.spdk:cnode9794", 00:13:36.922 "min_cntlid": 0, 00:13:36.922 "method": "nvmf_create_subsystem", 00:13:36.922 "req_id": 1 00:13:36.922 } 00:13:36.922 Got JSON-RPC error response 00:13:36.922 response: 00:13:36.922 { 00:13:36.922 "code": -32602, 00:13:36.922 "message": "Invalid cntlid range [0-65519]" 00:13:36.922 }' 00:13:36.922 18:08:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:13:36.922 { 00:13:36.922 "nqn": "nqn.2016-06.io.spdk:cnode9794", 00:13:36.922 "min_cntlid": 0, 00:13:36.922 "method": "nvmf_create_subsystem", 00:13:36.922 "req_id": 1 00:13:36.922 } 00:13:36.922 Got JSON-RPC error response 00:13:36.922 response: 00:13:36.922 { 00:13:36.922 "code": -32602, 00:13:36.922 "message": "Invalid cntlid range [0-65519]" 00:13:36.922 } == 
*\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:36.922 18:08:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode25181 -i 65520 00:13:37.181 [2024-07-24 18:08:30.066574] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25181: invalid cntlid range [65520-65519] 00:13:37.181 18:08:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:13:37.181 { 00:13:37.181 "nqn": "nqn.2016-06.io.spdk:cnode25181", 00:13:37.181 "min_cntlid": 65520, 00:13:37.181 "method": "nvmf_create_subsystem", 00:13:37.181 "req_id": 1 00:13:37.181 } 00:13:37.181 Got JSON-RPC error response 00:13:37.181 response: 00:13:37.181 { 00:13:37.181 "code": -32602, 00:13:37.181 "message": "Invalid cntlid range [65520-65519]" 00:13:37.181 }' 00:13:37.181 18:08:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:13:37.181 { 00:13:37.181 "nqn": "nqn.2016-06.io.spdk:cnode25181", 00:13:37.181 "min_cntlid": 65520, 00:13:37.181 "method": "nvmf_create_subsystem", 00:13:37.181 "req_id": 1 00:13:37.181 } 00:13:37.181 Got JSON-RPC error response 00:13:37.181 response: 00:13:37.181 { 00:13:37.181 "code": -32602, 00:13:37.181 "message": "Invalid cntlid range [65520-65519]" 00:13:37.181 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:37.181 18:08:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7389 -I 0 00:13:37.181 [2024-07-24 18:08:30.255221] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7389: invalid cntlid range [1-0] 00:13:37.440 18:08:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:13:37.440 { 00:13:37.440 "nqn": "nqn.2016-06.io.spdk:cnode7389", 00:13:37.440 "max_cntlid": 0, 00:13:37.440 "method": "nvmf_create_subsystem", 00:13:37.440 "req_id": 1 00:13:37.440 } 00:13:37.440 Got JSON-RPC error response 00:13:37.440 response: 00:13:37.440 { 00:13:37.440 "code": -32602, 00:13:37.440 "message": "Invalid cntlid range [1-0]" 00:13:37.440 }' 00:13:37.440 18:08:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:13:37.440 { 00:13:37.440 "nqn": "nqn.2016-06.io.spdk:cnode7389", 00:13:37.440 "max_cntlid": 0, 00:13:37.440 "method": "nvmf_create_subsystem", 00:13:37.440 "req_id": 1 00:13:37.440 } 00:13:37.440 Got JSON-RPC error response 00:13:37.440 response: 00:13:37.440 { 00:13:37.440 "code": -32602, 00:13:37.440 "message": "Invalid cntlid range [1-0]" 00:13:37.440 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:37.440 18:08:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2458 -I 65520 00:13:37.440 [2024-07-24 18:08:30.427766] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2458: invalid cntlid range [1-65520] 00:13:37.440 18:08:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:13:37.440 { 00:13:37.440 "nqn": "nqn.2016-06.io.spdk:cnode2458", 00:13:37.440 "max_cntlid": 65520, 00:13:37.440 "method": "nvmf_create_subsystem", 00:13:37.440 "req_id": 1 00:13:37.440 } 00:13:37.440 Got JSON-RPC error response 00:13:37.440 response: 00:13:37.440 
{ 00:13:37.440 "code": -32602, 00:13:37.440 "message": "Invalid cntlid range [1-65520]" 00:13:37.440 }' 00:13:37.440 18:08:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:13:37.440 { 00:13:37.440 "nqn": "nqn.2016-06.io.spdk:cnode2458", 00:13:37.440 "max_cntlid": 65520, 00:13:37.440 "method": "nvmf_create_subsystem", 00:13:37.440 "req_id": 1 00:13:37.440 } 00:13:37.440 Got JSON-RPC error response 00:13:37.440 response: 00:13:37.440 { 00:13:37.440 "code": -32602, 00:13:37.440 "message": "Invalid cntlid range [1-65520]" 00:13:37.440 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:37.440 18:08:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode17575 -i 6 -I 5 00:13:37.699 [2024-07-24 18:08:30.608352] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17575: invalid cntlid range [6-5] 00:13:37.699 18:08:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:13:37.699 { 00:13:37.699 "nqn": "nqn.2016-06.io.spdk:cnode17575", 00:13:37.699 "min_cntlid": 6, 00:13:37.699 "max_cntlid": 5, 00:13:37.699 "method": "nvmf_create_subsystem", 00:13:37.699 "req_id": 1 00:13:37.699 } 00:13:37.699 Got JSON-RPC error response 00:13:37.699 response: 00:13:37.699 { 00:13:37.699 "code": -32602, 00:13:37.699 "message": "Invalid cntlid range [6-5]" 00:13:37.699 }' 00:13:37.699 18:08:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:13:37.699 { 00:13:37.699 "nqn": "nqn.2016-06.io.spdk:cnode17575", 00:13:37.699 "min_cntlid": 6, 00:13:37.699 "max_cntlid": 5, 00:13:37.699 "method": "nvmf_create_subsystem", 00:13:37.699 "req_id": 1 00:13:37.699 } 00:13:37.699 Got JSON-RPC error response 00:13:37.699 response: 00:13:37.699 { 00:13:37.699 "code": -32602, 00:13:37.699 "message": "Invalid cntlid range [6-5]" 00:13:37.699 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:37.699 18:08:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:13:37.699 18:08:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:13:37.699 { 00:13:37.699 "name": "foobar", 00:13:37.699 "method": "nvmf_delete_target", 00:13:37.699 "req_id": 1 00:13:37.699 } 00:13:37.699 Got JSON-RPC error response 00:13:37.699 response: 00:13:37.699 { 00:13:37.699 "code": -32602, 00:13:37.699 "message": "The specified target doesn'\''t exist, cannot delete it." 00:13:37.699 }' 00:13:37.699 18:08:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:13:37.699 { 00:13:37.699 "name": "foobar", 00:13:37.699 "method": "nvmf_delete_target", 00:13:37.699 "req_id": 1 00:13:37.699 } 00:13:37.699 Got JSON-RPC error response 00:13:37.699 response: 00:13:37.699 { 00:13:37.699 "code": -32602, 00:13:37.699 "message": "The specified target doesn't exist, cannot delete it." 
00:13:37.699 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:13:37.699 18:08:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:13:37.699 18:08:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:13:37.699 18:08:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:37.699 18:08:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:13:37.699 18:08:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:37.699 18:08:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:13:37.699 18:08:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:37.699 18:08:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:37.699 rmmod nvme_tcp 00:13:37.699 rmmod nvme_fabrics 00:13:37.964 rmmod nvme_keyring 00:13:37.964 18:08:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:37.964 18:08:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:13:37.964 18:08:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:13:37.964 18:08:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 3361354 ']' 00:13:37.964 18:08:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 3361354 00:13:37.964 18:08:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@950 -- # '[' -z 3361354 ']' 00:13:37.964 18:08:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # kill -0 3361354 00:13:37.964 18:08:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # uname 00:13:37.964 18:08:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:37.964 18:08:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3361354 00:13:37.964 18:08:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:37.964 18:08:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:37.964 18:08:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3361354' 00:13:37.964 killing process with pid 3361354 00:13:37.964 18:08:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@969 -- # kill 3361354 00:13:37.965 18:08:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@974 -- # wait 3361354 00:13:37.965 18:08:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:37.965 18:08:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:37.965 18:08:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:37.965 18:08:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:37.965 18:08:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:37.965 18:08:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:37.965 
18:08:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:37.965 18:08:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:40.569 18:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:40.569 00:13:40.569 real 0m11.556s 00:13:40.569 user 0m19.322s 00:13:40.569 sys 0m4.925s 00:13:40.569 18:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:40.569 18:08:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:40.569 ************************************ 00:13:40.569 END TEST nvmf_invalid 00:13:40.569 ************************************ 00:13:40.569 18:08:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:40.569 18:08:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:40.569 18:08:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:40.569 18:08:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:40.569 ************************************ 00:13:40.569 START TEST nvmf_connect_stress 00:13:40.569 ************************************ 00:13:40.569 18:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:40.569 * Looking for test storage... 00:13:40.569 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:40.569 18:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:40.569 18:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:13:40.569 18:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:40.569 18:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:40.569 18:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:40.569 18:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:40.569 18:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:40.569 18:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:40.569 18:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:40.569 18:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:40.569 18:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:40.569 18:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:40.569 18:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:13:40.569 18:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # 
NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:13:40.569 18:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:40.569 18:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:40.569 18:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:40.569 18:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:40.569 18:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:40.569 18:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:40.569 18:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:40.569 18:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:40.569 18:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.569 18:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.569 18:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.569 18:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:13:40.569 18:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.569 18:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:13:40.569 18:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:40.569 18:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:40.569 18:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:40.569 18:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:40.569 18:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:40.569 18:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:40.569 18:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:40.569 18:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:40.569 18:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:40.569 18:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:40.570 18:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:40.570 18:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:40.570 18:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:40.570 18:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:40.570 18:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:40.570 18:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:40.570 18:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:40.570 18:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:40.570 18:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:40.570 18:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:13:40.570 18:08:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:45.841 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:45.841 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:13:45.841 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # 
local -a pci_devs 00:13:45.841 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:45.841 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:45.841 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:45.841 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:45.841 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:13:45.841 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:45.841 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:13:45.841 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:13:45.841 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:13:45.841 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:13:45.841 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:13:45.841 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:13:45.841 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:45.841 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:45.841 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:45.841 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:45.841 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:45.841 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:45.841 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:45.841 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:45.841 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:45.841 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:45.841 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:45.841 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:45.841 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:45.841 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:45.841 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:45.841 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:45.841 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@335 -- 
# (( 2 == 0 )) 00:13:45.841 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:45.841 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:45.841 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:45.841 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:45.841 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:45.841 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:45.841 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:45.841 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:45.841 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:45.841 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:45.841 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:45.841 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:45.841 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:45.841 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:45.841 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:45.841 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:45.841 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:45.841 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:45.841 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:45.841 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:45.841 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:45.841 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:45.841 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:45.841 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:45.841 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:45.841 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:45.841 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:45.841 Found net devices under 0000:86:00.0: cvl_0_0 00:13:45.841 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:45.841 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:45.841 18:08:38 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:45.841 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:45.841 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:45.841 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:45.841 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:45.841 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:45.841 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:45.841 Found net devices under 0000:86:00.1: cvl_0_1 00:13:45.841 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:45.841 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:45.841 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:13:45.841 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:45.841 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:45.841 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:45.841 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:45.841 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:45.841 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:45.841 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:45.841 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:45.841 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:45.841 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:45.841 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:45.841 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:45.841 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:45.841 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:45.841 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:45.841 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:45.841 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:45.841 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 
dev cvl_0_0 00:13:45.841 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:45.842 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:45.842 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:45.842 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:45.842 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:45.842 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:45.842 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms 00:13:45.842 00:13:45.842 --- 10.0.0.2 ping statistics --- 00:13:45.842 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:45.842 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:13:45.842 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:45.842 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:45.842 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.080 ms 00:13:45.842 00:13:45.842 --- 10.0.0.1 ping statistics --- 00:13:45.842 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:45.842 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:13:45.842 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:45.842 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:13:45.842 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:45.842 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:45.842 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:45.842 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:45.842 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:45.842 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:45.842 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:45.842 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:45.842 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:45.842 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:45.842 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:45.842 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=3365718 00:13:45.842 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 3365718 00:13:45.842 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 3365718 ']' 00:13:45.842 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:13:45.842 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:45.842 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:45.842 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:45.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:45.842 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:45.842 18:08:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:45.842 [2024-07-24 18:08:38.675597] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 00:13:45.842 [2024-07-24 18:08:38.675639] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:45.842 EAL: No free 2048 kB hugepages reported on node 1 00:13:45.842 [2024-07-24 18:08:38.732738] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:45.842 [2024-07-24 18:08:38.811046] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:45.842 [2024-07-24 18:08:38.811081] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:45.842 [2024-07-24 18:08:38.811088] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:45.842 [2024-07-24 18:08:38.811094] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:45.842 [2024-07-24 18:08:38.811099] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
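[Editor's note] The trace above is nvmf_tcp_init (test/nvmf/common.sh) bringing up the TCP test bed: the first e810 port (cvl_0_0) is moved into a fresh network namespace as the target interface, the second (cvl_0_1) stays on the host as the initiator, reachability is verified with one ping in each direction, and nvmf_tgt is then launched inside the namespace ("ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xE"). A minimal sketch of the same bring-up, distilled from the trace (interface names, addresses, and the 4420 port are taken from the log; the real helper also covers teardown and error paths):

  # split the two ports: cvl_0_0 becomes the target inside a namespace,
  # cvl_0_1 stays on the host as the initiator
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # let NVMe/TCP traffic in on the initiator side, then verify both directions
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1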
00:13:45.842 [2024-07-24 18:08:38.811211] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:45.842 [2024-07-24 18:08:38.811309] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:45.842 [2024-07-24 18:08:38.811310] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:46.409 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:46.409 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0 00:13:46.409 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:46.409 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:46.409 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:46.668 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:46.668 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:46.668 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.668 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:46.668 [2024-07-24 18:08:39.514013] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:46.668 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.668 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:46.668 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.668 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:46.668 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.668 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:46.668 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.668 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:46.668 [2024-07-24 18:08:39.547165] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:46.668 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.668 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:46.668 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.668 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:46.668 NULL1 00:13:46.668 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.668 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@21 -- # PERF_PID=3365793 00:13:46.668 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:46.668 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:46.668 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:46.668 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:13:46.668 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:46.668 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:46.668 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:46.668 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:46.668 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:46.668 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:46.668 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:46.668 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:46.668 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:46.668 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:46.668 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:46.668 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:46.668 EAL: No free 2048 kB hugepages reported on node 1 00:13:46.668 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:46.668 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:46.668 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:46.668 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:46.668 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:46.668 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:46.668 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:46.668 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:46.668 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:46.668 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:46.668 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:46.668 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:46.668 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:46.668 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:46.668 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:46.668 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:46.668 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:46.668 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:46.668 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:46.668 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:46.668 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:46.668 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:46.668 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:46.668 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:46.668 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:46.668 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:46.668 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:46.668 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:46.668 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3365793 00:13:46.668 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:46.669 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.669 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:46.928 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.928 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3365793 00:13:46.928 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:46.928 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.928 18:08:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:47.495 18:08:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.495 18:08:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3365793 00:13:47.495 18:08:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:47.495 18:08:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.495 18:08:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:47.754 18:08:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.754 18:08:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3365793 00:13:47.754 18:08:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:47.754 18:08:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.754 18:08:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:48.012 18:08:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.012 18:08:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3365793 00:13:48.012 18:08:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:48.012 18:08:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.012 18:08:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:48.271 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.271 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3365793 00:13:48.271 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:48.271 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.271 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:48.530 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.530 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3365793 00:13:48.530 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:48.530 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.530 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:49.098 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.098 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3365793 00:13:49.098 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.098 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.098 18:08:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:49.357 18:08:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.357 18:08:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3365793 00:13:49.357 18:08:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.357 18:08:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.357 18:08:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:49.616 18:08:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.616 18:08:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3365793 00:13:49.616 18:08:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.616 18:08:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.616 18:08:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:49.874 18:08:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.875 18:08:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3365793 00:13:49.875 18:08:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.875 18:08:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.875 18:08:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:50.441 18:08:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.441 18:08:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3365793 00:13:50.441 18:08:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:50.441 18:08:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.441 18:08:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:50.699 18:08:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.699 18:08:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3365793 00:13:50.699 18:08:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:50.699 18:08:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.699 18:08:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:50.958 18:08:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.958 18:08:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3365793 00:13:50.958 18:08:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:50.958 18:08:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.958 18:08:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:51.216 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.216 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3365793 00:13:51.216 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:51.216 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.216 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:51.474 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.474 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3365793 00:13:51.474 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:51.474 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.474 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:52.041 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.041 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3365793 00:13:52.041 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:52.041 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.041 18:08:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:52.299 18:08:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.299 18:08:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3365793 00:13:52.299 18:08:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:52.299 18:08:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.299 18:08:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:52.558 18:08:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.558 18:08:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3365793 00:13:52.558 18:08:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:52.558 18:08:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.558 18:08:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:52.817 18:08:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.817 18:08:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3365793 00:13:52.817 18:08:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:52.817 18:08:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.817 18:08:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:53.076 18:08:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.076 18:08:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3365793 00:13:53.076 18:08:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:53.076 18:08:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.076 18:08:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:53.643 18:08:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.643 18:08:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3365793 00:13:53.643 18:08:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:53.643 18:08:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.643 18:08:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:53.901 18:08:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.901 18:08:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3365793 00:13:53.901 18:08:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:53.901 18:08:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.901 18:08:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:54.160 18:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.160 18:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3365793 00:13:54.160 18:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:54.160 18:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.160 18:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:54.418 18:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.418 18:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3365793 00:13:54.418 18:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:54.418 18:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.418 18:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:54.985 18:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.985 18:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3365793 00:13:54.985 18:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:54.985 18:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.985 18:08:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:55.243 18:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.243 18:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3365793 00:13:55.243 18:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:55.243 18:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.243 18:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:55.501 18:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.501 18:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3365793 00:13:55.501 18:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:55.501 18:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.501 18:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:55.759 18:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.760 18:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3365793 00:13:55.760 18:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:55.760 18:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.760 18:08:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:56.018 18:08:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.018 18:08:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3365793 00:13:56.018 18:08:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:56.018 18:08:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.018 18:08:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:56.585 18:08:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.585 18:08:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3365793 00:13:56.585 18:08:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:56.585 18:08:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.585 18:08:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:56.843 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:56.843 18:08:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.844 18:08:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3365793 00:13:56.844 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3365793) - No such process 00:13:56.844 18:08:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3365793 00:13:56.844 18:08:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:56.844 18:08:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:56.844 18:08:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 
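[Editor's note] connect_stress.sh above follows a fixed pattern: create the TCP transport, a subsystem with a listener on 10.0.0.2:4420, and a null bdev; start the connect_stress binary with a 10-second limit (-t 10); then keep replaying the canned batch of twenty RPCs built into rpc.txt by the "seq 1 20" loop for as long as "kill -0 $PERF_PID" reports the binary alive. The "line 34: kill: (3365793) - No such process" message is the expected end of the run, not a failure. A hedged sketch of that loop follows; rpc_cmd in the trace wraps scripts/rpc.py, $SPDK_DIR stands in for the Jenkins checkout path, and the line-by-line replay of rpc.txt is a simplification (the real script feeds the whole batch to rpc_cmd at once):

  rpc="$SPDK_DIR/scripts/rpc.py"
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_null_create NULL1 1000 512

  # time-limited stress run against the listener, in the background
  "$SPDK_DIR/test/nvme/connect_stress/connect_stress" -c 0x1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 &
  PERF_PID=$!

  # replay the canned RPC batch until the timed run exits on its own
  while kill -0 "$PERF_PID" 2> /dev/null; do
      while read -r cmd; do $rpc $cmd; done < rpc.txt
  done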
00:13:56.844 18:08:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:56.844 18:08:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:13:56.844 18:08:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:56.844 18:08:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:13:56.844 18:08:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:56.844 18:08:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:56.844 rmmod nvme_tcp 00:13:56.844 rmmod nvme_fabrics 00:13:56.844 rmmod nvme_keyring 00:13:56.844 18:08:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:56.844 18:08:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:13:56.844 18:08:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:13:56.844 18:08:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 3365718 ']' 00:13:56.844 18:08:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 3365718 00:13:56.844 18:08:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 3365718 ']' 00:13:56.844 18:08:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 3365718 00:13:56.844 18:08:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # uname 00:13:56.844 18:08:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:56.844 18:08:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3365718 00:13:56.844 18:08:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:13:56.844 18:08:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:56.844 18:08:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3365718' 00:13:56.844 killing process with pid 3365718 00:13:56.844 18:08:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 3365718 00:13:56.844 18:08:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 3365718 00:13:57.103 18:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:57.103 18:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:57.103 18:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:57.103 18:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:57.103 18:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:57.103 18:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:57.103 18:08:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:57.103 18:08:50 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:59.636 18:08:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:59.636 00:13:59.636 real 0m18.937s 00:13:59.636 user 0m41.093s 00:13:59.636 sys 0m7.982s 00:13:59.636 18:08:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:59.636 18:08:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:59.636 ************************************ 00:13:59.636 END TEST nvmf_connect_stress 00:13:59.636 ************************************ 00:13:59.636 18:08:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:59.636 18:08:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:59.636 18:08:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:59.636 18:08:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:59.636 ************************************ 00:13:59.636 START TEST nvmf_fused_ordering 00:13:59.636 ************************************ 00:13:59.636 18:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:59.636 * Looking for test storage... 00:13:59.636 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:59.636 18:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:59.636 18:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:13:59.636 18:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:59.636 18:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:59.636 18:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:59.636 18:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:59.636 18:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:59.636 18:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:59.636 18:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:59.636 18:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:59.636 18:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:59.636 18:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:59.636 18:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:13:59.636 18:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:13:59.636 18:08:52 
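Every suite here goes through run_test, which brackets the script in the START/END banners visible above and times it, producing the real/user/sys triple printed at the end of nvmf_connect_stress. A simplified sketch of that wrapper's shape (the real helper also threads xtrace state and performs argument checks like the '[' 3 -le 1 ']' test traced above):

  run_test() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"                            # emits the real/user/sys lines seen above
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
  }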
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:59.636 18:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:59.636 18:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:59.636 18:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:59.636 18:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:59.636 18:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:59.636 18:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:59.636 18:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:59.636 18:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.636 18:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.636 18:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.636 18:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:13:59.636 18:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.636 18:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:13:59.637 18:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:59.637 18:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:59.637 18:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:59.637 18:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:59.637 18:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:59.637 18:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:59.637 18:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:59.637 18:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:59.637 18:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:13:59.637 18:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:59.637 18:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:59.637 18:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:59.637 18:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:59.637 18:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:59.637 18:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:59.637 18:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:59.637 18:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:59.637 18:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:59.637 18:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:59.637 18:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:13:59.637 18:08:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:04.940 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:04.940 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:14:04.940 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # 
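nvmf/common.sh pulls in scripts/common.sh and, through the pkgdep exports, paths/export.sh; export.sh prepends the same three toolchain directories on every source, which is why the PATH echoed above repeats each /opt segment many times over. A guard like the following (an illustrative sketch, not code from the tree) would keep re-sourcing idempotent:

  prepend_path() {
      case ":$PATH:" in
          *":$1:"*) ;;                     # already on PATH, skip
          *) PATH="$1:$PATH" ;;
      esac
  }
  prepend_path /opt/golangci/1.54.2/bin
  prepend_path /opt/protoc/21.7/bin
  prepend_path /opt/go/1.21.1/bin
  export PATH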
local -a pci_devs 00:14:04.940 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:04.940 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:04.940 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:04.940 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:04.940 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:14:04.940 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:04.940 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:14:04.940 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:14:04.940 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:14:04.940 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:14:04.940 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:14:04.940 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:14:04.940 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:04.940 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:04.940 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:04.940 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:04.940 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:04.940 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:04.940 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:04.940 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:04.940 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:04.940 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:04.940 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:04.940 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:04.940 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:04.940 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:04.940 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:04.940 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:04.940 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@335 -- 
# (( 2 == 0 )) 00:14:04.940 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:04.940 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:04.940 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:04.940 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:04.940 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:04.940 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:04.940 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:04.940 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:04.940 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:04.940 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:04.940 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:04.940 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:04.940 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:04.940 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:04.940 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:04.940 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:04.940 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:04.940 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:04.940 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:04.940 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:04.940 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:04.940 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:04.940 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:04.940 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:04.940 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:04.940 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:04.940 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:04.940 Found net devices under 0000:86:00.0: cvl_0_0 00:14:04.940 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:04.940 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:04.940 18:08:57 
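gather_supported_nvmf_pci_devs whitelists NICs by vendor:device ID (0x8086:0x159b is the Intel E810 'ice' part matched twice above) and resolves each PCI function to its kernel net device through sysfs, which is where cvl_0_0 and cvl_0_1 come from. A standalone sketch of that sysfs walk, assuming the same E810 IDs as this run:

  # Print the net devices behind each Intel E810 (0x8086:0x159b) function.
  for pci in /sys/bus/pci/devices/*; do
      [[ $(cat "$pci/vendor") == 0x8086 && $(cat "$pci/device") == 0x159b ]] || continue
      for net in "$pci"/net/*; do
          [[ -e $net ]] && echo "Found net device under ${pci##*/}: ${net##*/}"
      done
  done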
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:04.940 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:04.940 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:04.940 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:04.940 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:04.940 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:04.940 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:04.940 Found net devices under 0000:86:00.1: cvl_0_1 00:14:04.940 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:04.940 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:04.940 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:14:04.940 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:04.940 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:04.940 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:04.940 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:04.940 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:04.940 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:04.940 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:04.940 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:04.940 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:04.940 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:04.940 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:04.940 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:04.940 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:04.940 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:04.940 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:04.940 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:04.940 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:04.940 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 
dev cvl_0_0 00:14:04.940 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:04.940 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:04.941 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:04.941 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:04.941 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:04.941 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:04.941 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:14:04.941 00:14:04.941 --- 10.0.0.2 ping statistics --- 00:14:04.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:04.941 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:14:04.941 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:04.941 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:04.941 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:14:04.941 00:14:04.941 --- 10.0.0.1 ping statistics --- 00:14:04.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:04.941 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:14:04.941 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:04.941 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:14:04.941 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:04.941 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:04.941 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:04.941 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:04.941 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:04.941 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:04.941 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:04.941 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:04.941 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:04.941 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:04.941 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:04.941 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=3370899 00:14:04.941 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 3370899 00:14:04.941 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 3370899 ']' 00:14:04.941 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local 
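nvmf_tcp_init splits the port pair across a network namespace: cvl_0_0 becomes the target interface at 10.0.0.2 inside cvl_0_0_ns_spdk, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, an iptables rule opens TCP/4420 on the initiator interface, and the two pings prove reachability in both directions before any NVMe traffic flows. The same plumbing, gathered from the trace above into one block (root required):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port leaves the root ns
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # root ns -> namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # namespace -> root ns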
rpc_addr=/var/tmp/spdk.sock 00:14:04.941 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:04.941 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:04.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:04.941 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:04.941 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:04.941 18:08:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:04.941 [2024-07-24 18:08:57.434818] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 00:14:04.941 [2024-07-24 18:08:57.434860] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:04.941 EAL: No free 2048 kB hugepages reported on node 1 00:14:04.941 [2024-07-24 18:08:57.490512] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:04.941 [2024-07-24 18:08:57.568232] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:04.941 [2024-07-24 18:08:57.568265] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:04.941 [2024-07-24 18:08:57.568272] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:04.941 [2024-07-24 18:08:57.568278] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:04.941 [2024-07-24 18:08:57.568282] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
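nvmfappstart launches nvmf_tgt inside the namespace and parks in waitforlisten until the RPC socket answers, which is why the 'Waiting for process to start up...' line appears before the DPDK/EAL banner settles. A minimal equivalent; SPDK_DIR stands for the checkout used above, and the until-loop only approximates what waitforlisten does, using spdk_get_version as a cheap probe RPC:

  ip netns exec cvl_0_0_ns_spdk \
      "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!
  # Poll the UNIX-domain RPC socket until the target is ready for requests.
  until "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
      kill -0 "$nvmfpid"                   # fail fast (under set -e) if the target died
      sleep 0.5
  done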
00:14:04.941 [2024-07-24 18:08:57.568302] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:05.200 18:08:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:05.200 18:08:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0 00:14:05.200 18:08:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:05.200 18:08:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:05.200 18:08:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:05.200 18:08:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:05.200 18:08:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:05.200 18:08:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.200 18:08:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:05.200 [2024-07-24 18:08:58.270651] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:05.200 18:08:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.200 18:08:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:05.200 18:08:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.200 18:08:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:05.200 18:08:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.200 18:08:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:05.459 18:08:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.459 18:08:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:05.459 [2024-07-24 18:08:58.286816] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:05.459 18:08:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.459 18:08:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:05.459 18:08:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.459 18:08:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:05.459 NULL1 00:14:05.459 18:08:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.459 18:08:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:05.459 18:08:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.459 18:08:58 
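With the reactor up on core 1, the fused-ordering fixture is assembled purely over RPC: a TCP transport (the -o and -u 8192 flags mirror the traced command), subsystem cnode1 allowing any host (-a) with at most 10 namespaces (-m 10), a listener on 10.0.0.2:4420, and a 1000 MiB, 512-byte-block null bdev exported as namespace 1, matching the 'Namespace ID: 1 size: 1GB' line below. The rpc_cmd helper ultimately drives scripts/rpc.py, so the same setup as direct calls looks like:

  RPC="$SPDK_DIR/scripts/rpc.py"           # SPDK_DIR: placeholder for the checkout above
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC bdev_null_create NULL1 1000 512     # name, size in MiB, block size in bytes
  $RPC bdev_wait_for_examine
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1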
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:05.459 18:08:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.459 18:08:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:05.459 18:08:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.459 18:08:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:05.459 18:08:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.459 18:08:58 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:05.459 [2024-07-24 18:08:58.338864] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 00:14:05.459 [2024-07-24 18:08:58.338897] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3371141 ] 00:14:05.459 EAL: No free 2048 kB hugepages reported on node 1 00:14:05.717 Attached to nqn.2016-06.io.spdk:cnode1 00:14:05.717 Namespace ID: 1 size: 1GB 00:14:05.717 fused_ordering(0) 00:14:05.717 fused_ordering(1) 00:14:05.717 fused_ordering(2) 00:14:05.717 fused_ordering(3) 00:14:05.717 fused_ordering(4) 00:14:05.717 fused_ordering(5) 00:14:05.717 fused_ordering(6) 00:14:05.717 fused_ordering(7) 00:14:05.717 fused_ordering(8) 00:14:05.717 fused_ordering(9) 00:14:05.717 fused_ordering(10) 00:14:05.717 fused_ordering(11) 00:14:05.717 fused_ordering(12) 00:14:05.717 fused_ordering(13) 00:14:05.717 fused_ordering(14) 00:14:05.717 fused_ordering(15) 00:14:05.717 fused_ordering(16) 00:14:05.717 fused_ordering(17) 00:14:05.717 fused_ordering(18) 00:14:05.717 fused_ordering(19) 00:14:05.718 fused_ordering(20) 00:14:05.718 fused_ordering(21) 00:14:05.718 fused_ordering(22) 00:14:05.718 fused_ordering(23) 00:14:05.718 fused_ordering(24) 00:14:05.718 fused_ordering(25) 00:14:05.718 fused_ordering(26) 00:14:05.718 fused_ordering(27) 00:14:05.718 fused_ordering(28) 00:14:05.718 fused_ordering(29) 00:14:05.718 fused_ordering(30) 00:14:05.718 fused_ordering(31) 00:14:05.718 fused_ordering(32) 00:14:05.718 fused_ordering(33) 00:14:05.718 fused_ordering(34) 00:14:05.718 fused_ordering(35) 00:14:05.718 fused_ordering(36) 00:14:05.718 fused_ordering(37) 00:14:05.718 fused_ordering(38) 00:14:05.718 fused_ordering(39) 00:14:05.718 fused_ordering(40) 00:14:05.718 fused_ordering(41) 00:14:05.718 fused_ordering(42) 00:14:05.718 fused_ordering(43) 00:14:05.718 fused_ordering(44) 00:14:05.718 fused_ordering(45) 00:14:05.718 fused_ordering(46) 00:14:05.718 fused_ordering(47) 00:14:05.718 fused_ordering(48) 00:14:05.718 fused_ordering(49) 00:14:05.718 fused_ordering(50) 00:14:05.718 fused_ordering(51) 00:14:05.718 fused_ordering(52) 00:14:05.718 fused_ordering(53) 00:14:05.718 fused_ordering(54) 00:14:05.718 fused_ordering(55) 00:14:05.718 fused_ordering(56) 00:14:05.718 fused_ordering(57) 00:14:05.718 fused_ordering(58) 00:14:05.718 fused_ordering(59) 00:14:05.718 fused_ordering(60) 00:14:05.718 
fused_ordering(61) … fused_ordering(1023) [output condensed: the counter runs unbroken from fused_ordering(61) through fused_ordering(1023), 963 lines in the same form as above, with timestamps stepping from 00:14:05.718 to 00:14:07.064 in bursts of roughly 205 entries] 00:14:07.064 18:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:14:07.064 18:09:00 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:14:07.064 18:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:07.064 18:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:14:07.064 18:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:07.064 18:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:14:07.064 18:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:07.064 18:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:07.064 rmmod nvme_tcp 00:14:07.064 rmmod nvme_fabrics 00:14:07.064 rmmod nvme_keyring 00:14:07.064 18:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:07.064 18:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:14:07.064 18:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:14:07.064 18:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 3370899 ']' 00:14:07.064 18:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 3370899 00:14:07.064 18:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 3370899 ']' 00:14:07.064 18:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # kill -0 3370899 00:14:07.064 18:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname 00:14:07.064 18:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:07.064 18:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3370899 00:14:07.323 18:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:07.323 18:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:07.323 18:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3370899' 00:14:07.323 killing process with pid 3370899 00:14:07.323 18:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 3370899 00:14:07.323 18:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 3370899 00:14:07.323 18:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:07.323 18:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:07.323 18:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:07.323 18:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:07.323 18:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:07.323 18:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:07.323 18:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:07.323 18:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:09.858 18:09:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:09.858 00:14:09.858 real 0m10.252s 00:14:09.858 user 0m5.081s 00:14:09.858 sys 0m5.233s 00:14:09.858 18:09:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:09.858 18:09:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:09.858 ************************************ 00:14:09.858 END TEST nvmf_fused_ordering 00:14:09.858 ************************************ 00:14:09.858 18:09:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:14:09.858 18:09:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:09.858 18:09:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:09.858 18:09:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:09.858 ************************************ 00:14:09.858 START TEST nvmf_ns_masking 00:14:09.858 ************************************ 00:14:09.858 18:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:14:09.858 * Looking for test storage... 00:14:09.858 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:09.858 18:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:09.858 18:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:14:09.858 18:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:09.858 18:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:09.858 18:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:09.858 18:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:09.858 18:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:09.858 18:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:09.858 18:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:09.858 18:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:09.858 18:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:09.858 18:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:09.858 18:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:14:09.858 18:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:14:09.858 18:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:14:09.859 18:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:09.859 18:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:09.859 18:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:09.859 18:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:09.859 18:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:09.859 18:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:09.859 18:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:09.859 18:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.859 18:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.859 18:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.859 18:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:14:09.859 18:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.859 18:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:14:09.859 18:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:09.859 18:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:09.859 18:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:09.859 18:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:09.859 18:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:09.859 18:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:09.859 18:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:09.859 18:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:09.859 18:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:09.859 18:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:14:09.859 18:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:14:09.859 18:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:14:09.859 18:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=5b17be65-0da9-4362-84ad-e67e917eb880 00:14:09.859 18:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:14:09.859 18:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=7a40de9c-09cc-48d6-80ed-828ef89d27a0 00:14:09.859 18:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:14:09.859 18:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:14:09.859 18:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:14:09.859 18:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:14:09.859 18:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=0eccdc90-d98b-4de9-8642-160a295a7c9e 00:14:09.859 18:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:14:09.859 18:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:09.859 18:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:09.859 18:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@448 -- # prepare_net_devs 00:14:09.859 18:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:09.859 18:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:09.859 18:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:09.859 18:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:09.859 18:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:09.859 18:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:09.859 18:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:09.859 18:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:14:09.859 18:09:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:15.133 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:15.133 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:14:15.133 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:15.133 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:15.133 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:15.133 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:15.133 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:15.133 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:14:15.133 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:15.133 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:14:15.133 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:14:15.133 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:14:15.133 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:14:15.133 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:14:15.133 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:14:15.133 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:15.133 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:15.133 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:15.133 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:15.133 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:15.133 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:15.133 
18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:15.133 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:15.133 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:15.133 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:15.133 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:15.133 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:15.133 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:15.133 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:15.133 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:15.133 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:15.133 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:15.133 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:15.133 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:15.133 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:15.133 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:15.133 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:15.133 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:15.133 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:15.133 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:15.133 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:15.133 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:15.133 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:15.133 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:15.133 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:15.133 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:15.133 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:15.133 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:15.134 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:15.134 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:15.134 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:15.134 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
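The loop that follows resolves each PCI function to its kernel interface purely through sysfs, with no vendor tooling. A minimal standalone version of that lookup (the 0000:86:00.0 address is one of the two e810 ports probed in this run):

# map a PCI function to the net device(s) the kernel created for it
pci=0000:86:00.0
for dev in /sys/bus/pci/devices/$pci/net/*; do
    echo "${dev##*/}"    # prints cvl_0_0 on this host
done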
00:14:15.134 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:15.134 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:15.134 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:15.134 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:15.134 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:15.134 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:15.134 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:15.134 Found net devices under 0000:86:00.0: cvl_0_0 00:14:15.134 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:15.134 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:15.134 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:15.134 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:15.134 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:15.134 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:15.134 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:15.134 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:15.134 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:15.134 Found net devices under 0000:86:00.1: cvl_0_1 00:14:15.134 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:15.134 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:15.134 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:14:15.134 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:15.134 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:15.134 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:15.134 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:15.134 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:15.134 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:15.134 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:15.134 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:15.134 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:15.134 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:15.134 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:15.134 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:15.134 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:15.134 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:15.134 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:15.134 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:15.134 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:15.134 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:15.134 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:15.134 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:15.134 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:15.134 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:15.134 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:15.134 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:15.134 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:14:15.134 00:14:15.134 --- 10.0.0.2 ping statistics --- 00:14:15.134 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:15.134 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:14:15.134 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:15.134 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:15.134 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:14:15.134 00:14:15.134 --- 10.0.0.1 ping statistics --- 00:14:15.134 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:15.134 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:14:15.134 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:15.134 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:14:15.134 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:15.134 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:15.134 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:15.134 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:15.134 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:15.134 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:15.134 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:15.134 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:14:15.134 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:15.134 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:15.134 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:15.134 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=3374909 00:14:15.134 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 3374909 00:14:15.134 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 3374909 ']' 00:14:15.134 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:15.134 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:15.134 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:15.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:15.134 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:15.134 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:15.134 18:09:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:15.134 [2024-07-24 18:09:07.753348] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 
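nvmfappstart launches the target inside the freshly created namespace, and waitforlisten blocks until the app's RPC socket answers. A stripped-down sketch of that startup handshake, assuming the namespace name, binary path, and default /var/tmp/spdk.sock socket seen in this run:

sudo ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
nvmfpid=$!
# poll the UNIX-domain RPC socket until the target is ready to accept commands
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done
echo "nvmf_tgt up as pid $nvmfpid"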
00:14:15.134 [2024-07-24 18:09:07.753390] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:15.134 EAL: No free 2048 kB hugepages reported on node 1 00:14:15.134 [2024-07-24 18:09:07.810478] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:15.134 [2024-07-24 18:09:07.888673] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:15.134 [2024-07-24 18:09:07.888710] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:15.134 [2024-07-24 18:09:07.888717] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:15.134 [2024-07-24 18:09:07.888723] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:15.134 [2024-07-24 18:09:07.888727] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:15.134 [2024-07-24 18:09:07.888742] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:15.703 18:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:15.703 18:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:14:15.703 18:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:15.703 18:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:15.703 18:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:15.703 18:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:15.703 18:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:15.703 [2024-07-24 18:09:08.727214] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:15.703 18:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:14:15.703 18:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:14:15.703 18:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:15.962 Malloc1 00:14:15.962 18:09:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:16.220 Malloc2 00:14:16.220 18:09:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:16.220 18:09:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:14:16.479 18:09:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:16.737 [2024-07-24 18:09:09.601890] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:16.737 18:09:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:14:16.738 18:09:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 0eccdc90-d98b-4de9-8642-160a295a7c9e -a 10.0.0.2 -s 4420 -i 4 00:14:16.996 18:09:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:14:16.996 18:09:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:14:16.996 18:09:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:16.996 18:09:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:16.996 18:09:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:14:18.898 18:09:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:18.898 18:09:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:18.898 18:09:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:18.898 18:09:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:18.898 18:09:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:18.898 18:09:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:14:18.898 18:09:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:18.898 18:09:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:18.898 18:09:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:18.898 18:09:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:18.898 18:09:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:14:18.898 18:09:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:18.898 18:09:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:18.898 [ 0]:0x1 00:14:18.898 18:09:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:18.898 18:09:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:19.157 18:09:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=88faf5688c5c43be9c52964482f14c01 00:14:19.157 18:09:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 88faf5688c5c43be9c52964482f14c01 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:19.157 18:09:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:14:19.157 18:09:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:14:19.157 18:09:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:19.157 18:09:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:19.157 [ 0]:0x1 00:14:19.157 18:09:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:19.157 18:09:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:19.157 18:09:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=88faf5688c5c43be9c52964482f14c01 00:14:19.157 18:09:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 88faf5688c5c43be9c52964482f14c01 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:19.157 18:09:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:14:19.157 18:09:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:19.157 18:09:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:19.157 [ 1]:0x2 00:14:19.157 18:09:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:19.157 18:09:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:19.415 18:09:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fb3414c863f447c4b9e4ee85af9b0933 00:14:19.415 18:09:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fb3414c863f447c4b9e4ee85af9b0933 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:19.415 18:09:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:14:19.415 18:09:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:19.415 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:19.415 18:09:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:19.673 18:09:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:14:19.673 18:09:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:14:19.673 18:09:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 0eccdc90-d98b-4de9-8642-160a295a7c9e -a 10.0.0.2 -s 4420 -i 4 00:14:19.931 18:09:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:14:19.931 18:09:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:14:19.931 18:09:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local 
nvme_device_counter=1 nvme_devices=0 00:14:19.931 18:09:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:14:19.931 18:09:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:14:19.931 18:09:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:14:21.833 18:09:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:21.833 18:09:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:21.833 18:09:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:21.833 18:09:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:21.833 18:09:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:21.833 18:09:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:14:21.833 18:09:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:21.833 18:09:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:22.091 18:09:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:22.091 18:09:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:22.091 18:09:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:14:22.091 18:09:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:22.091 18:09:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:22.091 18:09:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:22.091 18:09:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:22.091 18:09:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:22.091 18:09:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:22.091 18:09:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:22.091 18:09:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:22.091 18:09:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:22.091 18:09:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:22.091 18:09:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:22.091 18:09:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:22.091 18:09:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:22.091 18:09:14 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:22.091 18:09:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:22.091 18:09:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:22.091 18:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:22.091 18:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:14:22.091 18:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:22.091 18:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:22.091 [ 0]:0x2 00:14:22.091 18:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:22.091 18:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:22.091 18:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fb3414c863f447c4b9e4ee85af9b0933 00:14:22.091 18:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fb3414c863f447c4b9e4ee85af9b0933 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:22.091 18:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:22.350 18:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:14:22.350 18:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:22.350 18:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:22.350 [ 0]:0x1 00:14:22.350 18:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:22.350 18:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:22.350 18:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=88faf5688c5c43be9c52964482f14c01 00:14:22.350 18:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 88faf5688c5c43be9c52964482f14c01 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:22.350 18:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:14:22.350 18:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:22.350 18:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:22.350 [ 1]:0x2 00:14:22.350 18:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:22.350 18:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:22.350 18:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fb3414c863f447c4b9e4ee85af9b0933 00:14:22.350 18:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fb3414c863f447c4b9e4ee85af9b0933 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:22.350 18:09:15 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:22.608 18:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:14:22.608 18:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:22.608 18:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:22.608 18:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:22.608 18:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:22.608 18:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:22.608 18:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:22.608 18:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:22.608 18:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:22.608 18:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:22.608 18:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:22.608 18:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:22.608 18:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:22.608 18:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:22.608 18:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:22.608 18:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:22.608 18:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:22.608 18:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:22.608 18:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:14:22.608 18:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:22.609 18:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:22.609 [ 0]:0x2 00:14:22.609 18:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:22.609 18:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:22.609 18:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fb3414c863f447c4b9e4ee85af9b0933 00:14:22.609 18:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fb3414c863f447c4b9e4ee85af9b0933 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:22.609 18:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 
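The visibility flips exercised above are driven entirely from the target side; the host only re-reads the namespace list. Condensed to the three RPCs involved (NQNs as used throughout this run, rpc.py path shortened):

# attach ns 1 without auto-visibility, then grant and revoke it per host
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1     # ns 1 appears to host1
rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1  # ns 1 masked again

On the host side, each check boils down to nvme list-ns /dev/nvme0 plus comparing the nguid from nvme id-ns against the all-zero value an inactive namespace reports.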
00:14:22.609 18:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:22.609 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:22.609 18:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:22.867 18:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:14:22.867 18:09:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 0eccdc90-d98b-4de9-8642-160a295a7c9e -a 10.0.0.2 -s 4420 -i 4 00:14:23.125 18:09:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:23.125 18:09:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:14:23.125 18:09:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:23.125 18:09:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:14:23.125 18:09:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:14:23.125 18:09:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:14:25.024 18:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:25.024 18:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:25.024 18:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:25.024 18:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:14:25.024 18:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:25.024 18:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:14:25.024 18:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:25.024 18:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:25.282 18:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:25.282 18:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:25.282 18:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:14:25.282 18:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:25.282 18:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:25.282 [ 0]:0x1 00:14:25.282 18:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:25.282 18:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:25.282 18:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=88faf5688c5c43be9c52964482f14c01 00:14:25.282 18:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 88faf5688c5c43be9c52964482f14c01 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:25.282 18:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:14:25.282 18:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:25.282 18:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:25.282 [ 1]:0x2 00:14:25.282 18:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:25.282 18:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:25.282 18:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fb3414c863f447c4b9e4ee85af9b0933 00:14:25.282 18:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fb3414c863f447c4b9e4ee85af9b0933 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:25.282 18:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:25.542 18:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:14:25.542 18:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:25.542 18:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:25.543 18:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:25.543 18:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:25.543 18:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:25.543 18:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:25.543 18:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:25.543 18:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:25.543 18:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:25.543 18:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:25.543 18:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:25.543 18:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:25.543 18:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:25.543 18:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:25.543 18:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:25.543 18:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:25.543 18:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:25.543 18:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:14:25.543 18:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:25.543 18:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:25.543 [ 0]:0x2 00:14:25.543 18:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:25.543 18:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:25.543 18:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fb3414c863f447c4b9e4ee85af9b0933 00:14:25.543 18:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fb3414c863f447c4b9e4ee85af9b0933 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:25.543 18:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:25.543 18:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:25.543 18:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:25.543 18:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:25.543 18:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:25.543 18:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:25.543 18:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:25.543 18:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:25.543 18:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:25.543 18:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:25.543 18:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:25.543 18:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:25.838 [2024-07-24 18:09:18.643540] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:14:25.838 request: 00:14:25.838 { 00:14:25.838 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:25.838 "nsid": 2, 00:14:25.838 "host": 
"nqn.2016-06.io.spdk:host1", 00:14:25.838 "method": "nvmf_ns_remove_host", 00:14:25.838 "req_id": 1 00:14:25.838 } 00:14:25.838 Got JSON-RPC error response 00:14:25.838 response: 00:14:25.838 { 00:14:25.838 "code": -32602, 00:14:25.838 "message": "Invalid parameters" 00:14:25.838 } 00:14:25.838 18:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:25.838 18:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:25.838 18:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:25.838 18:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:25.838 18:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:14:25.838 18:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:25.838 18:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:25.838 18:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:25.838 18:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:25.838 18:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:25.838 18:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:25.838 18:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:25.838 18:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:25.838 18:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:25.838 18:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:25.838 18:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:25.838 18:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:25.838 18:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:25.838 18:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:25.838 18:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:25.838 18:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:25.838 18:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:25.838 18:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:14:25.838 18:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:25.838 18:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:25.838 [ 0]:0x2 00:14:25.838 18:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:25.838 18:09:18 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:25.838 18:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fb3414c863f447c4b9e4ee85af9b0933 00:14:25.838 18:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fb3414c863f447c4b9e4ee85af9b0933 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:25.838 18:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:14:25.838 18:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:25.839 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:25.839 18:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=3377216 00:14:25.839 18:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:14:25.839 18:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:14:25.839 18:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 3377216 /var/tmp/host.sock 00:14:25.839 18:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 3377216 ']' 00:14:25.839 18:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:14:25.839 18:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:25.839 18:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:25.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:14:25.839 18:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:25.839 18:09:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:25.839 [2024-07-24 18:09:18.898057] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 
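[annotation] What the trace above boots, spelled out: a second SPDK process (spdk_tgt) acting purely as the NVMe-oF host side, with its JSON-RPC server on its own UNIX socket so it cannot collide with the target's default /var/tmp/spdk.sock. A sketch using only the paths and flags shown in the trace (workspace prefix shortened); hostrpc is the test's wrapper at ns_masking.sh@48:

    # start spdk_tgt as the host application on core 1 (-m 2), RPC on host.sock
    ./build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 &
    hostpid=$!

    # hostrpc == rpc.py aimed at the host app's socket instead of the target's
    hostrpc() {
        ./scripts/rpc.py -s /var/tmp/host.sock "$@"
    }

    # once waitforlisten sees the socket, the test drives the host app over it,
    # e.g. attaching one controller per allowed host NQN (ns_masking.sh@129/@131):
    hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0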
00:14:25.839 [2024-07-24 18:09:18.898108] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3377216 ] 00:14:26.097 EAL: No free 2048 kB hugepages reported on node 1 00:14:26.098 [2024-07-24 18:09:18.955190] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:26.098 [2024-07-24 18:09:19.033794] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:26.665 18:09:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:26.666 18:09:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:14:26.666 18:09:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:26.924 18:09:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:27.183 18:09:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 5b17be65-0da9-4362-84ad-e67e917eb880 00:14:27.183 18:09:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:14:27.183 18:09:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 5B17BE650DA9436284ADE67E917EB880 -i 00:14:27.183 18:09:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 7a40de9c-09cc-48d6-80ed-828ef89d27a0 00:14:27.183 18:09:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:14:27.183 18:09:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 7A40DE9C09CC48D680ED828EF89D27A0 -i 00:14:27.441 18:09:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:27.700 18:09:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:14:27.700 18:09:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:27.700 18:09:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:28.265 nvme0n1 00:14:28.265 18:09:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:28.265 18:09:21 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:28.523 nvme1n2 00:14:28.523 18:09:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:14:28.523 18:09:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:14:28.523 18:09:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:28.523 18:09:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:14:28.523 18:09:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:14:28.781 18:09:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:14:28.781 18:09:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:14:28.781 18:09:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:14:28.781 18:09:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:14:29.040 18:09:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 5b17be65-0da9-4362-84ad-e67e917eb880 == \5\b\1\7\b\e\6\5\-\0\d\a\9\-\4\3\6\2\-\8\4\a\d\-\e\6\7\e\9\1\7\e\b\8\8\0 ]] 00:14:29.040 18:09:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:14:29.040 18:09:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:14:29.040 18:09:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:14:29.040 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 7a40de9c-09cc-48d6-80ed-828ef89d27a0 == \7\a\4\0\d\e\9\c\-\0\9\c\c\-\4\8\d\6\-\8\0\e\d\-\8\2\8\e\f\8\9\d\2\7\a\0 ]] 00:14:29.040 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 3377216 00:14:29.040 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 3377216 ']' 00:14:29.040 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 3377216 00:14:29.040 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:14:29.040 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:29.040 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3377216 00:14:29.298 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:29.298 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:29.298 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 
'killing process with pid 3377216' 00:14:29.298 killing process with pid 3377216 00:14:29.298 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 3377216 00:14:29.298 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 3377216 00:14:29.557 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:29.557 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:14:29.557 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:14:29.557 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:29.557 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:14:29.557 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:29.557 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:14:29.557 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:29.557 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:29.557 rmmod nvme_tcp 00:14:29.815 rmmod nvme_fabrics 00:14:29.815 rmmod nvme_keyring 00:14:29.815 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:29.815 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:14:29.815 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:14:29.816 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 3374909 ']' 00:14:29.816 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 3374909 00:14:29.816 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 3374909 ']' 00:14:29.816 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 3374909 00:14:29.816 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:14:29.816 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:29.816 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3374909 00:14:29.816 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:29.816 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:29.816 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3374909' 00:14:29.816 killing process with pid 3374909 00:14:29.816 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 3374909 00:14:29.816 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 3374909 00:14:30.075 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:30.075 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:30.075 
18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:30.075 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:30.075 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:30.075 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:30.075 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:30.075 18:09:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:31.982 18:09:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:31.982 00:14:31.982 real 0m22.521s 00:14:31.982 user 0m24.393s 00:14:31.982 sys 0m6.008s 00:14:31.982 18:09:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:31.982 18:09:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:31.982 ************************************ 00:14:31.982 END TEST nvmf_ns_masking 00:14:31.982 ************************************ 00:14:31.982 18:09:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:14:31.982 18:09:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:31.982 18:09:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:31.982 18:09:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:31.982 18:09:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:32.243 ************************************ 00:14:32.243 START TEST nvmf_nvme_cli 00:14:32.243 ************************************ 00:14:32.243 18:09:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:32.243 * Looking for test storage... 
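[annotation] Before the next test's trace continues: the teardown just replayed rests on killprocess, whose internals are visible at common/autotest_common.sh@950-@974 above, plus the module and interface cleanup from nvmftestfini. A sketch reconstructed from the trace; anything not shown there (notably the body of _remove_spdk_ns) is an assumption:

    killprocess() {
        local pid=$1
        kill -0 "$pid"                                       # @954: pid must still be alive
        local process_name=unknown
        if [[ $(uname) == Linux ]]; then                     # @955
            process_name=$(ps --no-headers -o comm= "$pid")  # @956: e.g. reactor_0
        fi
        if [[ $process_name != sudo ]]; then                 # @960: never kill a sudo wrapper
            echo "killing process with pid $pid"             # @968
            kill "$pid"                                      # @969
            wait "$pid"                                      # @974: reap before moving on
        fi
        # (the sudo branch is never taken in the trace above, so it is omitted)
    }

    # the rest of nvmftestfini, in trace order:
    modprobe -v -r nvme-tcp             # rmmod nvme_tcp/_fabrics/_keyring output above
    modprobe -v -r nvme-fabrics
    killprocess "$nvmfpid"              # the nvmf_tgt for this test (pid 3374909 above)
    ip netns delete cvl_0_0_ns_spdk     # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1            # nvmf/common.sh@279, as traced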
00:14:32.243 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:32.243 18:09:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:32.243 18:09:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:14:32.243 18:09:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:32.243 18:09:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:32.243 18:09:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:32.243 18:09:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:32.243 18:09:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:32.243 18:09:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:32.243 18:09:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:32.243 18:09:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:32.243 18:09:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:32.243 18:09:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:32.243 18:09:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:14:32.243 18:09:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:14:32.243 18:09:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:32.243 18:09:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:32.243 18:09:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:32.243 18:09:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:32.243 18:09:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:32.243 18:09:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:32.243 18:09:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:32.243 18:09:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:32.243 18:09:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:32.243 18:09:25 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:32.243 18:09:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:32.243 18:09:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:14:32.243 18:09:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:32.243 18:09:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:14:32.243 18:09:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:32.243 18:09:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:32.243 18:09:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:32.243 18:09:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:32.243 18:09:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:32.243 18:09:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:32.243 18:09:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:32.243 18:09:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:32.243 18:09:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:32.243 18:09:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:32.243 18:09:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:14:32.243 18:09:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # 
nvmftestinit 00:14:32.243 18:09:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:32.243 18:09:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:32.243 18:09:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:32.243 18:09:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:32.243 18:09:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:32.243 18:09:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:32.243 18:09:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:32.243 18:09:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:32.243 18:09:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:32.243 18:09:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:32.243 18:09:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:14:32.243 18:09:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:37.518 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:37.518 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:14:37.518 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:37.518 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:37.518 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:37.518 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:37.518 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:37.518 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:14:37.518 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:37.518 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:14:37.518 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:14:37.518 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:14:37.518 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:14:37.518 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:14:37.518 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:14:37.518 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:37.518 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:37.518 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:37.518 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:37.518 18:09:30 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:37.518 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:37.518 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:37.518 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:37.518 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:37.518 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:37.518 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:37.518 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:37.518 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:37.518 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:37.518 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:37.518 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:37.518 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:37.518 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:37.518 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:37.518 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:37.518 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:37.518 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:37.518 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:37.518 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:37.518 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:37.518 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:37.518 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:37.518 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:37.518 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:37.518 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:37.518 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:37.518 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:37.518 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:37.518 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:37.518 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:37.518 18:09:30 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:37.518 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:37.518 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:37.518 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:37.518 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:37.518 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:37.518 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:37.518 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:37.519 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:37.519 Found net devices under 0000:86:00.0: cvl_0_0 00:14:37.519 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:37.519 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:37.519 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:37.519 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:37.519 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:37.519 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:37.519 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:37.519 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:37.519 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:37.519 Found net devices under 0000:86:00.1: cvl_0_1 00:14:37.519 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:37.519 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:37.519 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:14:37.519 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:37.519 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:37.519 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:37.519 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:37.519 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:37.519 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:37.519 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:37.519 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:37.519 18:09:30 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:37.519 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:37.519 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:37.519 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:37.519 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:37.519 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:37.519 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:37.519 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:37.519 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:37.519 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:37.519 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:37.519 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:37.778 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:37.778 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:37.778 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:37.778 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:37.778 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.154 ms 00:14:37.778 00:14:37.778 --- 10.0.0.2 ping statistics --- 00:14:37.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:37.778 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:14:37.778 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:37.778 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:37.778 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.071 ms 00:14:37.778 00:14:37.778 --- 10.0.0.1 ping statistics --- 00:14:37.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:37.778 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:14:37.778 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:37.778 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:14:37.778 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:37.778 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:37.778 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:37.778 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:37.778 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:37.778 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:37.778 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:37.778 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:37.778 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:37.778 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:37.778 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:37.778 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=3381417 00:14:37.778 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 3381417 00:14:37.778 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:37.778 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # '[' -z 3381417 ']' 00:14:37.778 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:37.778 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:37.778 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:37.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:37.778 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:37.778 18:09:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:37.778 [2024-07-24 18:09:30.777589] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 
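[annotation] The network plumbing those two pings just verified, replayed as plain commands. Every line is lifted from the nvmf_tcp_init trace above (workspace prefix shortened); only the comments are added:

    ip netns add cvl_0_0_ns_spdk                         # target gets its own netns
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # first e810 port -> target side
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator keeps the second port
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

    # with both directions answering, nvmfappstart launches the target inside
    # the namespace, exactly as the trace shows next:
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF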
00:14:37.778 [2024-07-24 18:09:30.777631] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:37.778 EAL: No free 2048 kB hugepages reported on node 1 00:14:37.778 [2024-07-24 18:09:30.838063] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:38.036 [2024-07-24 18:09:30.918511] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:38.036 [2024-07-24 18:09:30.918560] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:38.036 [2024-07-24 18:09:30.918567] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:38.036 [2024-07-24 18:09:30.918572] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:38.036 [2024-07-24 18:09:30.918577] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:38.036 [2024-07-24 18:09:30.918612] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:38.036 [2024-07-24 18:09:30.918713] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:38.036 [2024-07-24 18:09:30.918729] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:38.036 [2024-07-24 18:09:30.918735] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:38.603 18:09:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:38.603 18:09:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # return 0 00:14:38.603 18:09:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:38.603 18:09:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:38.603 18:09:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:38.603 18:09:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:38.603 18:09:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:38.603 18:09:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.603 18:09:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:38.603 [2024-07-24 18:09:31.633743] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:38.603 18:09:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.603 18:09:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:38.603 18:09:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.603 18:09:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:38.603 Malloc0 00:14:38.603 18:09:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.603 18:09:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:38.603 18:09:31 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.603 18:09:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:38.603 Malloc1 00:14:38.603 18:09:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.603 18:09:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:14:38.603 18:09:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.603 18:09:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:38.862 18:09:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.862 18:09:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:38.862 18:09:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.862 18:09:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:38.862 18:09:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.862 18:09:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:38.862 18:09:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.862 18:09:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:38.862 18:09:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.862 18:09:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:38.862 18:09:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.862 18:09:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:38.862 [2024-07-24 18:09:31.715364] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:38.862 18:09:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.862 18:09:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:38.862 18:09:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.862 18:09:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:38.862 18:09:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.862 18:09:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:14:38.862 00:14:38.862 Discovery Log Number of Records 2, Generation counter 2 00:14:38.862 =====Discovery Log Entry 0====== 00:14:38.862 trtype: tcp 00:14:38.862 adrfam: ipv4 00:14:38.862 subtype: current discovery subsystem 00:14:38.862 treq: not required 
00:14:38.862 portid: 0 00:14:38.862 trsvcid: 4420 00:14:38.862 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:38.862 traddr: 10.0.0.2 00:14:38.862 eflags: explicit discovery connections, duplicate discovery information 00:14:38.862 sectype: none 00:14:38.862 =====Discovery Log Entry 1====== 00:14:38.862 trtype: tcp 00:14:38.862 adrfam: ipv4 00:14:38.862 subtype: nvme subsystem 00:14:38.862 treq: not required 00:14:38.862 portid: 0 00:14:38.862 trsvcid: 4420 00:14:38.862 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:38.862 traddr: 10.0.0.2 00:14:38.862 eflags: none 00:14:38.862 sectype: none 00:14:38.862 18:09:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:14:38.862 18:09:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:14:38.862 18:09:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:14:38.862 18:09:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:38.862 18:09:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:38.862 18:09:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:38.862 18:09:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:38.862 18:09:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:38.862 18:09:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:38.862 18:09:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:14:38.862 18:09:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:40.238 18:09:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:40.238 18:09:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:14:40.238 18:09:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:40.238 18:09:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:14:40.238 18:09:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:14:40.238 18:09:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:14:42.137 18:09:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:42.137 18:09:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:42.137 18:09:34 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:42.137 18:09:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:14:42.137 18:09:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:42.137 18:09:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:14:42.137 18:09:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
target/nvme_cli.sh@35 -- # get_nvme_devs 00:14:42.137 18:09:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:14:42.137 18:09:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:42.137 18:09:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:42.137 18:09:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:42.137 18:09:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:42.137 18:09:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:42.137 18:09:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:42.137 18:09:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:42.137 18:09:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:14:42.137 18:09:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:42.137 18:09:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:42.137 18:09:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:14:42.137 18:09:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:42.137 18:09:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:14:42.137 /dev/nvme0n1 ]] 00:14:42.137 18:09:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:14:42.137 18:09:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:14:42.137 18:09:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:14:42.137 18:09:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:42.137 18:09:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:42.395 18:09:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:42.395 18:09:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:42.395 18:09:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:42.395 18:09:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:42.395 18:09:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:42.395 18:09:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:14:42.395 18:09:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:42.395 18:09:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:42.395 18:09:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:14:42.395 18:09:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:42.395 18:09:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:14:42.395 18:09:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:42.653 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:14:42.653 18:09:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:42.653 18:09:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:14:42.653 18:09:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:42.653 18:09:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:42.653 18:09:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:42.653 18:09:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:42.653 18:09:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:14:42.653 18:09:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:14:42.653 18:09:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:42.653 18:09:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.653 18:09:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:42.653 18:09:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.653 18:09:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:42.653 18:09:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:14:42.653 18:09:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:42.653 18:09:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:14:42.653 18:09:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:42.653 18:09:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:14:42.653 18:09:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:42.653 18:09:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:42.653 rmmod nvme_tcp 00:14:42.653 rmmod nvme_fabrics 00:14:42.653 rmmod nvme_keyring 00:14:42.653 18:09:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:42.653 18:09:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:14:42.653 18:09:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:14:42.653 18:09:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 3381417 ']' 00:14:42.653 18:09:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 3381417 00:14:42.653 18:09:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # '[' -z 3381417 ']' 00:14:42.653 18:09:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # kill -0 3381417 00:14:42.653 18:09:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # uname 00:14:42.653 18:09:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:42.653 18:09:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3381417 00:14:42.653 18:09:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:42.653 18:09:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:42.653 18:09:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3381417' 00:14:42.653 killing process with pid 3381417 00:14:42.653 18:09:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@969 -- # kill 3381417 00:14:42.653 18:09:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@974 -- # wait 3381417 00:14:42.912 18:09:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:42.912 18:09:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:42.912 18:09:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:42.912 18:09:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:42.912 18:09:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:42.912 18:09:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:42.912 18:09:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:42.912 18:09:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:45.448 18:09:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:45.448 00:14:45.448 real 0m12.935s 00:14:45.448 user 0m21.548s 00:14:45.448 sys 0m4.764s 00:14:45.448 18:09:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:45.448 18:09:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:45.448 ************************************ 00:14:45.448 END TEST nvmf_nvme_cli 00:14:45.448 ************************************ 00:14:45.448 18:09:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:14:45.448 18:09:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:45.448 18:09:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:45.448 18:09:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:45.448 18:09:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:45.448 ************************************ 00:14:45.448 START TEST nvmf_vfio_user 00:14:45.448 ************************************ 00:14:45.448 18:09:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:45.448 * Looking for test storage... 
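Recapping the nvmf_nvme_cli test that just finished before the vfio-user run begins: the host side is driven entirely by nvme-cli against the TCP listener. A condensed sketch of that flow (not the verbatim script; NVME_HOST carries the --hostnqn/--hostid pair traced earlier, and the polling loop is a simplification of waitforserial from autotest_common.sh):

  # discover, connect, wait for both namespaces, disconnect -- condensed from the trace above
  nvme discover "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 4420      # lists the discovery subsystem plus nqn.2016-06.io.spdk:cnode1
  nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  while [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -lt 2 ]; do
    sleep 2                                                       # both Malloc namespaces carry the subsystem serial
  done
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1                   # prints "disconnected 1 controller(s)"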
00:14:45.448 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:45.448 18:09:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:45.448 18:09:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:14:45.448 18:09:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:45.448 18:09:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:45.448 18:09:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:45.448 18:09:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:45.448 18:09:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:45.448 18:09:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:45.448 18:09:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:45.448 18:09:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:45.448 18:09:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:45.448 18:09:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:45.449 18:09:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:14:45.449 18:09:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:14:45.449 18:09:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:45.449 18:09:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:45.449 18:09:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:45.449 18:09:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:45.449 18:09:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:45.449 18:09:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:45.449 18:09:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:45.449 18:09:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:45.449 18:09:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
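The nvmf/common.sh lines traced above pin down the host identity that every later nvme command reuses; a minimal sketch of that setup (the parameter expansion for NVME_HOSTID is an assumption, the trace only shows the resulting value):

  # host identity derivation as traced in nvmf/common.sh (sketch)
  NVMF_SERIAL=SPDKISFASTANDAWESOME
  NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # assumed expansion: keep only the uuid suffix
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
  NVME_CONNECT="nvme connect"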
00:14:45.449 18:09:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.449 18:09:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.449 18:09:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:14:45.449 18:09:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.449 18:09:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:14:45.449 18:09:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:45.449 18:09:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:45.449 18:09:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:45.449 18:09:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:45.449 18:09:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:45.449 18:09:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:45.449 18:09:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:45.449 18:09:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:45.449 18:09:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:45.449 18:09:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:45.449 18:09:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:14:45.449 18:09:38 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:45.449 18:09:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:45.449 18:09:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:45.449 18:09:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:14:45.449 18:09:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:14:45.449 18:09:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:14:45.449 18:09:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:14:45.449 18:09:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3382704 00:14:45.449 18:09:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3382704' 00:14:45.449 Process pid: 3382704 00:14:45.449 18:09:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:45.449 18:09:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3382704 00:14:45.449 18:09:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:14:45.449 18:09:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 3382704 ']' 00:14:45.449 18:09:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:45.449 18:09:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:45.449 18:09:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:45.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:45.449 18:09:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:45.449 18:09:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:45.449 [2024-07-24 18:09:38.265400] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 00:14:45.449 [2024-07-24 18:09:38.265447] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:45.449 EAL: No free 2048 kB hugepages reported on node 1 00:14:45.449 [2024-07-24 18:09:38.317182] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:45.449 [2024-07-24 18:09:38.398366] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:45.449 [2024-07-24 18:09:38.398404] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:45.449 [2024-07-24 18:09:38.398410] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:45.449 [2024-07-24 18:09:38.398417] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:45.449 [2024-07-24 18:09:38.398421] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:45.449 [2024-07-24 18:09:38.398494] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:45.449 [2024-07-24 18:09:38.398514] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:45.449 [2024-07-24 18:09:38.398599] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:45.449 [2024-07-24 18:09:38.398599] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:46.016 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:46.016 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:14:46.016 18:09:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:47.388 18:09:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:14:47.388 18:09:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:47.388 18:09:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:47.388 18:09:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:47.388 18:09:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:47.388 18:09:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:47.388 Malloc1 00:14:47.388 18:09:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:47.646 18:09:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:47.946 18:09:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:48.209 18:09:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:48.209 18:09:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:48.209 18:09:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:48.209 Malloc2 00:14:48.209 18:09:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
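Condensing the rpc.py trace above: each vfio-user device is a malloc bdev exported through its own subsystem, and the listener address is a socket directory rather than an IP/port pair. Device 1 end to end (device 2 repeats the same steps with Malloc2/cnode2/SPDK2, continuing below):

  # vfio-user target bring-up for device 1, condensed from the trace
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t VFIOUSER                  # one-time transport registration
  mkdir -p /var/run/vfio-user/domain/vfio-user1/1         # the directory itself is the listen address
  $rpc bdev_malloc_create 64 512 -b Malloc1               # 64 MB bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
  $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
  $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0

A host process then attaches with -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1', as the spdk_nvme_identify invocation below shows.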
00:14:48.468 18:09:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:48.727 18:09:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:48.727 18:09:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:14:48.727 18:09:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:14:48.727 18:09:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:48.727 18:09:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:48.727 18:09:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:14:48.727 18:09:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:48.727 [2024-07-24 18:09:41.782485] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 00:14:48.727 [2024-07-24 18:09:41.782532] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3383410 ] 00:14:48.727 EAL: No free 2048 kB hugepages reported on node 1 00:14:48.988 [2024-07-24 18:09:41.813418] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:14:48.988 [2024-07-24 18:09:41.822821] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:48.988 [2024-07-24 18:09:41.822839] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fed5bf2d000 00:14:48.988 [2024-07-24 18:09:41.823823] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:48.988 [2024-07-24 18:09:41.824823] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:48.988 [2024-07-24 18:09:41.825823] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:48.988 [2024-07-24 18:09:41.826835] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:48.988 [2024-07-24 18:09:41.827845] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:48.988 [2024-07-24 18:09:41.828850] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:48.988 [2024-07-24 18:09:41.829852] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:48.988 [2024-07-24 18:09:41.830861] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:48.988 [2024-07-24 18:09:41.831863] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:48.988 [2024-07-24 18:09:41.831871] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fed5bf22000 00:14:48.988 [2024-07-24 18:09:41.832787] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:48.988 [2024-07-24 18:09:41.845244] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:14:48.988 [2024-07-24 18:09:41.845270] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:14:48.988 [2024-07-24 18:09:41.847962] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:48.988 [2024-07-24 18:09:41.848004] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:48.988 [2024-07-24 18:09:41.848077] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:14:48.988 [2024-07-24 18:09:41.848093] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:14:48.988 [2024-07-24 18:09:41.848098] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:14:48.988 [2024-07-24 18:09:41.848954] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:14:48.988 [2024-07-24 18:09:41.848964] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:14:48.988 [2024-07-24 18:09:41.848971] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:14:48.988 [2024-07-24 18:09:41.849966] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:48.988 [2024-07-24 18:09:41.849974] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:14:48.988 [2024-07-24 18:09:41.849983] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:14:48.988 [2024-07-24 18:09:41.850968] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:14:48.989 [2024-07-24 18:09:41.850975] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:48.989 [2024-07-24 18:09:41.851978] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr 
/var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:14:48.989 [2024-07-24 18:09:41.851985] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:14:48.989 [2024-07-24 18:09:41.851990] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:14:48.989 [2024-07-24 18:09:41.851995] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:48.989 [2024-07-24 18:09:41.852100] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:14:48.989 [2024-07-24 18:09:41.852105] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:48.989 [2024-07-24 18:09:41.852109] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:14:48.989 [2024-07-24 18:09:41.852982] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:14:48.989 [2024-07-24 18:09:41.853987] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:14:48.989 [2024-07-24 18:09:41.854996] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:48.989 [2024-07-24 18:09:41.855999] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:48.989 [2024-07-24 18:09:41.856076] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:48.989 [2024-07-24 18:09:41.857015] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:14:48.989 [2024-07-24 18:09:41.857022] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:48.989 [2024-07-24 18:09:41.857027] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:14:48.989 [2024-07-24 18:09:41.857044] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:14:48.989 [2024-07-24 18:09:41.857051] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:14:48.989 [2024-07-24 18:09:41.857064] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:48.989 [2024-07-24 18:09:41.857069] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:48.989 [2024-07-24 18:09:41.857072] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:48.989 [2024-07-24 18:09:41.857084] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 
cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:48.989 [2024-07-24 18:09:41.857127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:48.989 [2024-07-24 18:09:41.857138] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:14:48.989 [2024-07-24 18:09:41.857142] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:14:48.989 [2024-07-24 18:09:41.857146] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:14:48.989 [2024-07-24 18:09:41.857150] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:48.989 [2024-07-24 18:09:41.857154] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:14:48.989 [2024-07-24 18:09:41.857158] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:14:48.989 [2024-07-24 18:09:41.857162] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:14:48.989 [2024-07-24 18:09:41.857169] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:14:48.989 [2024-07-24 18:09:41.857181] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:48.989 [2024-07-24 18:09:41.857196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:48.989 [2024-07-24 18:09:41.857207] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:48.989 [2024-07-24 18:09:41.857215] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:48.989 [2024-07-24 18:09:41.857221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:48.989 [2024-07-24 18:09:41.857228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:48.989 [2024-07-24 18:09:41.857232] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:14:48.989 [2024-07-24 18:09:41.857240] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:48.989 [2024-07-24 18:09:41.857248] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:48.989 [2024-07-24 18:09:41.857257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:48.989 [2024-07-24 18:09:41.857262] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:14:48.989 
[2024-07-24 18:09:41.857267] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:48.989 [2024-07-24 18:09:41.857274] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:14:48.989 [2024-07-24 18:09:41.857279] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:14:48.989 [2024-07-24 18:09:41.857286] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:48.989 [2024-07-24 18:09:41.857296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:48.989 [2024-07-24 18:09:41.857344] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:14:48.989 [2024-07-24 18:09:41.857354] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:14:48.989 [2024-07-24 18:09:41.857361] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:48.989 [2024-07-24 18:09:41.857364] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:48.989 [2024-07-24 18:09:41.857367] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:48.989 [2024-07-24 18:09:41.857373] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:48.989 [2024-07-24 18:09:41.857389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:48.989 [2024-07-24 18:09:41.857397] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:14:48.989 [2024-07-24 18:09:41.857407] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:14:48.989 [2024-07-24 18:09:41.857414] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:14:48.989 [2024-07-24 18:09:41.857420] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:48.989 [2024-07-24 18:09:41.857424] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:48.989 [2024-07-24 18:09:41.857427] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:48.989 [2024-07-24 18:09:41.857432] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:48.989 [2024-07-24 18:09:41.857452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:48.989 [2024-07-24 18:09:41.857464] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 
30000 ms) 00:14:48.989 [2024-07-24 18:09:41.857470] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:48.989 [2024-07-24 18:09:41.857476] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:48.989 [2024-07-24 18:09:41.857480] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:48.989 [2024-07-24 18:09:41.857483] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:48.989 [2024-07-24 18:09:41.857488] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:48.989 [2024-07-24 18:09:41.857500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:48.989 [2024-07-24 18:09:41.857507] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:48.989 [2024-07-24 18:09:41.857513] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:14:48.989 [2024-07-24 18:09:41.857520] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:14:48.989 [2024-07-24 18:09:41.857527] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:14:48.989 [2024-07-24 18:09:41.857531] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:48.989 [2024-07-24 18:09:41.857537] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:14:48.989 [2024-07-24 18:09:41.857542] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:14:48.989 [2024-07-24 18:09:41.857546] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:14:48.990 [2024-07-24 18:09:41.857550] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:14:48.990 [2024-07-24 18:09:41.857567] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:48.990 [2024-07-24 18:09:41.857577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:48.990 [2024-07-24 18:09:41.857588] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:48.990 [2024-07-24 18:09:41.857595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:48.990 [2024-07-24 18:09:41.857605] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:48.990 [2024-07-24 
18:09:41.857616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:48.990 [2024-07-24 18:09:41.857626] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:48.990 [2024-07-24 18:09:41.857634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:48.990 [2024-07-24 18:09:41.857645] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:48.990 [2024-07-24 18:09:41.857650] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:48.990 [2024-07-24 18:09:41.857653] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:48.990 [2024-07-24 18:09:41.857656] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:48.990 [2024-07-24 18:09:41.857659] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:48.990 [2024-07-24 18:09:41.857664] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:48.990 [2024-07-24 18:09:41.857671] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:48.990 [2024-07-24 18:09:41.857675] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:48.990 [2024-07-24 18:09:41.857678] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:48.990 [2024-07-24 18:09:41.857683] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:48.990 [2024-07-24 18:09:41.857689] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:48.990 [2024-07-24 18:09:41.857693] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:48.990 [2024-07-24 18:09:41.857696] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:48.990 [2024-07-24 18:09:41.857701] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:48.990 [2024-07-24 18:09:41.857707] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:48.990 [2024-07-24 18:09:41.857713] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:48.990 [2024-07-24 18:09:41.857716] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:48.990 [2024-07-24 18:09:41.857721] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:48.990 [2024-07-24 18:09:41.857728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:48.990 [2024-07-24 18:09:41.857740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:48.990 [2024-07-24 
18:09:41.857750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:48.990 [2024-07-24 18:09:41.857756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:48.990 ===================================================== 00:14:48.990 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:48.990 ===================================================== 00:14:48.990 Controller Capabilities/Features 00:14:48.990 ================================ 00:14:48.990 Vendor ID: 4e58 00:14:48.990 Subsystem Vendor ID: 4e58 00:14:48.990 Serial Number: SPDK1 00:14:48.990 Model Number: SPDK bdev Controller 00:14:48.990 Firmware Version: 24.09 00:14:48.990 Recommended Arb Burst: 6 00:14:48.990 IEEE OUI Identifier: 8d 6b 50 00:14:48.990 Multi-path I/O 00:14:48.990 May have multiple subsystem ports: Yes 00:14:48.990 May have multiple controllers: Yes 00:14:48.990 Associated with SR-IOV VF: No 00:14:48.990 Max Data Transfer Size: 131072 00:14:48.990 Max Number of Namespaces: 32 00:14:48.990 Max Number of I/O Queues: 127 00:14:48.990 NVMe Specification Version (VS): 1.3 00:14:48.990 NVMe Specification Version (Identify): 1.3 00:14:48.990 Maximum Queue Entries: 256 00:14:48.990 Contiguous Queues Required: Yes 00:14:48.990 Arbitration Mechanisms Supported 00:14:48.990 Weighted Round Robin: Not Supported 00:14:48.990 Vendor Specific: Not Supported 00:14:48.990 Reset Timeout: 15000 ms 00:14:48.990 Doorbell Stride: 4 bytes 00:14:48.990 NVM Subsystem Reset: Not Supported 00:14:48.990 Command Sets Supported 00:14:48.990 NVM Command Set: Supported 00:14:48.990 Boot Partition: Not Supported 00:14:48.990 Memory Page Size Minimum: 4096 bytes 00:14:48.990 Memory Page Size Maximum: 4096 bytes 00:14:48.990 Persistent Memory Region: Not Supported 00:14:48.990 Optional Asynchronous Events Supported 00:14:48.990 Namespace Attribute Notices: Supported 00:14:48.990 Firmware Activation Notices: Not Supported 00:14:48.990 ANA Change Notices: Not Supported 00:14:48.990 PLE Aggregate Log Change Notices: Not Supported 00:14:48.990 LBA Status Info Alert Notices: Not Supported 00:14:48.990 EGE Aggregate Log Change Notices: Not Supported 00:14:48.990 Normal NVM Subsystem Shutdown event: Not Supported 00:14:48.990 Zone Descriptor Change Notices: Not Supported 00:14:48.990 Discovery Log Change Notices: Not Supported 00:14:48.990 Controller Attributes 00:14:48.990 128-bit Host Identifier: Supported 00:14:48.990 Non-Operational Permissive Mode: Not Supported 00:14:48.990 NVM Sets: Not Supported 00:14:48.990 Read Recovery Levels: Not Supported 00:14:48.990 Endurance Groups: Not Supported 00:14:48.990 Predictable Latency Mode: Not Supported 00:14:48.990 Traffic Based Keep ALive: Not Supported 00:14:48.990 Namespace Granularity: Not Supported 00:14:48.990 SQ Associations: Not Supported 00:14:48.990 UUID List: Not Supported 00:14:48.990 Multi-Domain Subsystem: Not Supported 00:14:48.990 Fixed Capacity Management: Not Supported 00:14:48.990 Variable Capacity Management: Not Supported 00:14:48.990 Delete Endurance Group: Not Supported 00:14:48.990 Delete NVM Set: Not Supported 00:14:48.990 Extended LBA Formats Supported: Not Supported 00:14:48.990 Flexible Data Placement Supported: Not Supported 00:14:48.990 00:14:48.990 Controller Memory Buffer Support 00:14:48.990 ================================ 00:14:48.990 Supported: No 00:14:48.990 00:14:48.990 Persistent 
Memory Region Support 00:14:48.990 ================================ 00:14:48.990 Supported: No 00:14:48.990 00:14:48.990 Admin Command Set Attributes 00:14:48.990 ============================ 00:14:48.990 Security Send/Receive: Not Supported 00:14:48.990 Format NVM: Not Supported 00:14:48.990 Firmware Activate/Download: Not Supported 00:14:48.990 Namespace Management: Not Supported 00:14:48.990 Device Self-Test: Not Supported 00:14:48.990 Directives: Not Supported 00:14:48.990 NVMe-MI: Not Supported 00:14:48.990 Virtualization Management: Not Supported 00:14:48.990 Doorbell Buffer Config: Not Supported 00:14:48.990 Get LBA Status Capability: Not Supported 00:14:48.990 Command & Feature Lockdown Capability: Not Supported 00:14:48.990 Abort Command Limit: 4 00:14:48.990 Async Event Request Limit: 4 00:14:48.990 Number of Firmware Slots: N/A 00:14:48.990 Firmware Slot 1 Read-Only: N/A 00:14:48.990 Firmware Activation Without Reset: N/A 00:14:48.990 Multiple Update Detection Support: N/A 00:14:48.990 Firmware Update Granularity: No Information Provided 00:14:48.990 Per-Namespace SMART Log: No 00:14:48.990 Asymmetric Namespace Access Log Page: Not Supported 00:14:48.990 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:14:48.990 Command Effects Log Page: Supported 00:14:48.990 Get Log Page Extended Data: Supported 00:14:48.990 Telemetry Log Pages: Not Supported 00:14:48.990 Persistent Event Log Pages: Not Supported 00:14:48.990 Supported Log Pages Log Page: May Support 00:14:48.990 Commands Supported & Effects Log Page: Not Supported 00:14:48.990 Feature Identifiers & Effects Log Page:May Support 00:14:48.990 NVMe-MI Commands & Effects Log Page: May Support 00:14:48.990 Data Area 4 for Telemetry Log: Not Supported 00:14:48.990 Error Log Page Entries Supported: 128 00:14:48.990 Keep Alive: Supported 00:14:48.990 Keep Alive Granularity: 10000 ms 00:14:48.990 00:14:48.990 NVM Command Set Attributes 00:14:48.990 ========================== 00:14:48.990 Submission Queue Entry Size 00:14:48.990 Max: 64 00:14:48.990 Min: 64 00:14:48.990 Completion Queue Entry Size 00:14:48.990 Max: 16 00:14:48.990 Min: 16 00:14:48.990 Number of Namespaces: 32 00:14:48.990 Compare Command: Supported 00:14:48.990 Write Uncorrectable Command: Not Supported 00:14:48.990 Dataset Management Command: Supported 00:14:48.990 Write Zeroes Command: Supported 00:14:48.991 Set Features Save Field: Not Supported 00:14:48.991 Reservations: Not Supported 00:14:48.991 Timestamp: Not Supported 00:14:48.991 Copy: Supported 00:14:48.991 Volatile Write Cache: Present 00:14:48.991 Atomic Write Unit (Normal): 1 00:14:48.991 Atomic Write Unit (PFail): 1 00:14:48.991 Atomic Compare & Write Unit: 1 00:14:48.991 Fused Compare & Write: Supported 00:14:48.991 Scatter-Gather List 00:14:48.991 SGL Command Set: Supported (Dword aligned) 00:14:48.991 SGL Keyed: Not Supported 00:14:48.991 SGL Bit Bucket Descriptor: Not Supported 00:14:48.991 SGL Metadata Pointer: Not Supported 00:14:48.991 Oversized SGL: Not Supported 00:14:48.991 SGL Metadata Address: Not Supported 00:14:48.991 SGL Offset: Not Supported 00:14:48.991 Transport SGL Data Block: Not Supported 00:14:48.991 Replay Protected Memory Block: Not Supported 00:14:48.991 00:14:48.991 Firmware Slot Information 00:14:48.991 ========================= 00:14:48.991 Active slot: 1 00:14:48.991 Slot 1 Firmware Revision: 24.09 00:14:48.991 00:14:48.991 00:14:48.991 Commands Supported and Effects 00:14:48.991 ============================== 00:14:48.991 Admin Commands 00:14:48.991 -------------- 00:14:48.991 Get 
Log Page (02h): Supported 00:14:48.991 Identify (06h): Supported 00:14:48.991 Abort (08h): Supported 00:14:48.991 Set Features (09h): Supported 00:14:48.991 Get Features (0Ah): Supported 00:14:48.991 Asynchronous Event Request (0Ch): Supported 00:14:48.991 Keep Alive (18h): Supported 00:14:48.991 I/O Commands 00:14:48.991 ------------ 00:14:48.991 Flush (00h): Supported LBA-Change 00:14:48.991 Write (01h): Supported LBA-Change 00:14:48.991 Read (02h): Supported 00:14:48.991 Compare (05h): Supported 00:14:48.991 Write Zeroes (08h): Supported LBA-Change 00:14:48.991 Dataset Management (09h): Supported LBA-Change 00:14:48.991 Copy (19h): Supported LBA-Change 00:14:48.991 00:14:48.991 Error Log 00:14:48.991 ========= 00:14:48.991 00:14:48.991 Arbitration 00:14:48.991 =========== 00:14:48.991 Arbitration Burst: 1 00:14:48.991 00:14:48.991 Power Management 00:14:48.991 ================ 00:14:48.991 Number of Power States: 1 00:14:48.991 Current Power State: Power State #0 00:14:48.991 Power State #0: 00:14:48.991 Max Power: 0.00 W 00:14:48.991 Non-Operational State: Operational 00:14:48.991 Entry Latency: Not Reported 00:14:48.991 Exit Latency: Not Reported 00:14:48.991 Relative Read Throughput: 0 00:14:48.991 Relative Read Latency: 0 00:14:48.991 Relative Write Throughput: 0 00:14:48.991 Relative Write Latency: 0 00:14:48.991 Idle Power: Not Reported 00:14:48.991 Active Power: Not Reported 00:14:48.991 Non-Operational Permissive Mode: Not Supported 00:14:48.991 00:14:48.991 Health Information 00:14:48.991 ================== 00:14:48.991 Critical Warnings: 00:14:48.991 Available Spare Space: OK 00:14:48.991 Temperature: OK 00:14:48.991 Device Reliability: OK 00:14:48.991 Read Only: No 00:14:48.991 Volatile Memory Backup: OK 00:14:48.991 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:48.991 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:48.991 Available Spare: 0% 00:14:48.991 Available Spare Threshold: 0% 00:14:48.991 Life Percentage Used: 0% 00:14:48.991 Data Units Read: 0 00:14:48.991 Data Units Written: 0 00:14:48.991 Host Read Commands: 0 00:14:48.991 Host Write Commands: 0 00:14:48.991 Controller Busy Time: 0 minutes 00:14:48.991 Power Cycles: 0 00:14:48.991 Power On Hours: 0 hours 00:14:48.991 Unsafe Shutdowns: 0 00:14:48.991 Unrecoverable Media Errors: 0 00:14:48.991 Lifetime Error Log Entries: 0 00:14:48.991 Warning Temperature Time: 0 minutes 00:14:48.991 Critical Temperature Time: 0 minutes 00:14:48.991
[2024-07-24 18:09:41.857841] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:48.991 [2024-07-24 18:09:41.857848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:48.991 [2024-07-24 18:09:41.857871] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:14:48.991 [2024-07-24 18:09:41.857880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.991 [2024-07-24 18:09:41.857885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.991 [2024-07-24 18:09:41.857891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.991 [2024-07-24 18:09:41.857896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:48.991 [2024-07-24 18:09:41.859500] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:48.991 [2024-07-24 18:09:41.859511] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:14:48.991 [2024-07-24 18:09:41.860030] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:48.991 [2024-07-24 18:09:41.860076] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:14:48.991 [2024-07-24 18:09:41.860082] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:14:48.991 [2024-07-24 18:09:41.861039] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:14:48.991 [2024-07-24 18:09:41.861049] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:14:48.991 [2024-07-24 18:09:41.861098] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:14:48.991 [2024-07-24 18:09:41.862062] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:48.991
00:14:48.991 Number of Queues 00:14:48.991 ================ 00:14:48.991 Number of I/O Submission Queues: 127 00:14:48.991 Number of I/O Completion Queues: 127 00:14:48.991 00:14:48.991 Active Namespaces 00:14:48.991 ================= 00:14:48.991 Namespace ID:1 00:14:48.991 Error Recovery Timeout: Unlimited 00:14:48.991 Command Set Identifier: NVM (00h) 00:14:48.991 Deallocate: Supported 00:14:48.991 Deallocated/Unwritten Error: Not Supported 00:14:48.991 Deallocated Read Value: Unknown 00:14:48.991 Deallocate in Write Zeroes: Not Supported 00:14:48.991 Deallocated Guard Field: 0xFFFF 00:14:48.991 Flush: Supported 00:14:48.991 Reservation: Supported 00:14:48.991 Namespace Sharing Capabilities: Multiple Controllers 00:14:48.991 Size (in LBAs): 131072 (0GiB) 00:14:48.991 Capacity (in LBAs): 131072 (0GiB) 00:14:48.991 Utilization (in LBAs): 131072 (0GiB) 00:14:48.991 NGUID: 0DD4689A07CE46F4B1875348BF43883B 00:14:48.991 UUID: 0dd4689a-07ce-46f4-b187-5348bf43883b 00:14:48.991 Thin Provisioning: Not Supported 00:14:48.991 Per-NS Atomic Units: Yes 00:14:48.991 Atomic Boundary Size (Normal): 0 00:14:48.991 Atomic Boundary Size (PFail): 0 00:14:48.991 Atomic Boundary Offset: 0 00:14:48.991 Maximum Single Source Range Length: 65535 00:14:48.991 Maximum Copy Length: 65535 00:14:48.991 Maximum Source Range Count: 1 00:14:48.991 NGUID/EUI64 Never Reused: No 00:14:48.991 Namespace Write Protected: No 00:14:48.991 Number of LBA Formats: 1 00:14:48.991 Current LBA Format: LBA Format #00 00:14:48.991 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:48.991 00:14:48.991 18:09:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:48.991 EAL: No free 2048 kB hugepages reported
on node 1 00:14:49.250 [2024-07-24 18:09:42.078252] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:54.521 Initializing NVMe Controllers 00:14:54.521 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:54.521 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:54.521 Initialization complete. Launching workers. 00:14:54.521 ======================================================== 00:14:54.521 Latency(us) 00:14:54.521 Device Information : IOPS MiB/s Average min max 00:14:54.521 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39943.29 156.03 3204.36 933.78 6677.22 00:14:54.521 ======================================================== 00:14:54.521 Total : 39943.29 156.03 3204.36 933.78 6677.22 00:14:54.521 00:14:54.521 [2024-07-24 18:09:47.099146] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:54.521 18:09:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:54.521 EAL: No free 2048 kB hugepages reported on node 1 00:14:54.521 [2024-07-24 18:09:47.316193] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:59.790 Initializing NVMe Controllers 00:14:59.790 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:59.790 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:59.790 Initialization complete. Launching workers. 
00:14:59.790 ======================================================== 00:14:59.790 Latency(us) 00:14:59.790 Device Information : IOPS MiB/s Average min max 00:14:59.790 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16051.20 62.70 7984.50 7592.29 10991.84 00:14:59.790 ======================================================== 00:14:59.790 Total : 16051.20 62.70 7984.50 7592.29 10991.84 00:14:59.790 00:14:59.790 [2024-07-24 18:09:52.357334] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:59.790 18:09:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:59.790 EAL: No free 2048 kB hugepages reported on node 1 00:14:59.790 [2024-07-24 18:09:52.553305] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:05.058 [2024-07-24 18:09:57.626814] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:05.058 Initializing NVMe Controllers 00:15:05.058 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:05.058 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:05.058 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:15:05.058 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:15:05.058 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:15:05.058 Initialization complete. Launching workers. 00:15:05.058 Starting thread on core 2 00:15:05.058 Starting thread on core 3 00:15:05.058 Starting thread on core 1 00:15:05.058 18:09:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:15:05.058 EAL: No free 2048 kB hugepages reported on node 1 00:15:05.058 [2024-07-24 18:09:57.909926] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:08.346 [2024-07-24 18:10:00.975527] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:08.346 Initializing NVMe Controllers 00:15:08.346 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:08.346 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:08.346 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:15:08.346 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:15:08.346 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:15:08.346 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:15:08.346 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:08.346 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:08.346 Initialization complete. Launching workers. 
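The two spdk_nvme_perf runs above (nvmf_vfio_user.sh@84 read, @85 write) drive the same vfio-user controller with 4 KiB I/O at queue depth 128 on core mask 0x2 for 5 seconds; the target is selected entirely by the -r transport ID string (trtype/traddr/subnqn), and -g corresponds to the --single-file-segments EAL option visible in the identify run further down. A minimal standalone sketch under the same assumptions (the target from this run still up, same build tree); the -w randrw -M 50 mix mirrors the reconnect run above, and the urgent-priority arbitration results continue directly below:
PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
# -q queue depth, -o I/O size in bytes, -w pattern, -M read percentage, -t seconds, -c core mask
"$PERF" -r "$TRID" -s 256 -g -q 128 -o 4096 -w randrw -M 50 -t 5 -c 0x2
As a quick consistency check on the read table above: 39943.29 IOPS x 4096 B is 39943.29*4096/1048576 ≈ 156.03 MiB/s, matching the MiB/s column (echo '39943.29 * 4096 / 1048576' | bc -l).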
00:15:08.346 Starting thread on core 1 with urgent priority queue 00:15:08.346 Starting thread on core 2 with urgent priority queue 00:15:08.346 Starting thread on core 3 with urgent priority queue 00:15:08.346 Starting thread on core 0 with urgent priority queue 00:15:08.346 SPDK bdev Controller (SPDK1 ) core 0: 6569.33 IO/s 15.22 secs/100000 ios 00:15:08.346 SPDK bdev Controller (SPDK1 ) core 1: 6355.67 IO/s 15.73 secs/100000 ios 00:15:08.346 SPDK bdev Controller (SPDK1 ) core 2: 8695.67 IO/s 11.50 secs/100000 ios 00:15:08.346 SPDK bdev Controller (SPDK1 ) core 3: 8005.00 IO/s 12.49 secs/100000 ios 00:15:08.346 ======================================================== 00:15:08.346 00:15:08.346 18:10:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:08.346 EAL: No free 2048 kB hugepages reported on node 1 00:15:08.346 [2024-07-24 18:10:01.239528] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:08.346 Initializing NVMe Controllers 00:15:08.346 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:08.346 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:08.346 Namespace ID: 1 size: 0GB 00:15:08.346 Initialization complete. 00:15:08.346 INFO: using host memory buffer for IO 00:15:08.346 Hello world! 00:15:08.346 [2024-07-24 18:10:01.274758] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:08.346 18:10:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:08.346 EAL: No free 2048 kB hugepages reported on node 1 00:15:08.605 [2024-07-24 18:10:01.538555] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:09.542 Initializing NVMe Controllers 00:15:09.542 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:09.542 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:09.542 Initialization complete. Launching workers. 
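The overhead run just launched prints submit and complete latency histograms, which follow below: each bucket row appears to give the bucket range in microseconds, the cumulative percentage of operations up to that bucket, and the per-bucket operation count in parentheses, with the avg/min/max summary reported in nanoseconds on the first two lines. A hypothetical post-processing sketch (the saved filename overhead_run.log is an assumption for illustration):
# Sum the parenthesized per-bucket counts to recover the total number of sampled operations.
awk -F'[()]' '/% \(/ { total += $2 } END { print total, "operations" }' overhead_run.log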
00:15:09.542 submit (in ns) avg, min, max = 6760.9, 3159.0, 4002678.1 00:15:09.542 complete (in ns) avg, min, max = 20738.0, 1705.7, 4001421.0 00:15:09.542 00:15:09.542 Submit histogram 00:15:09.542 ================ 00:15:09.542 Range in us Cumulative Count 00:15:09.542 3.154 - 3.170: 0.0060% ( 1) 00:15:09.542 3.170 - 3.185: 0.0240% ( 3) 00:15:09.542 3.185 - 3.200: 0.0480% ( 4) 00:15:09.542 3.200 - 3.215: 0.0960% ( 8) 00:15:09.542 3.215 - 3.230: 0.2821% ( 31) 00:15:09.542 3.230 - 3.246: 1.5547% ( 212) 00:15:09.542 3.246 - 3.261: 4.8742% ( 553) 00:15:09.542 3.261 - 3.276: 9.8745% ( 833) 00:15:09.542 3.276 - 3.291: 16.1114% ( 1039) 00:15:09.542 3.291 - 3.307: 23.4468% ( 1222) 00:15:09.542 3.307 - 3.322: 29.9538% ( 1084) 00:15:09.542 3.322 - 3.337: 35.2842% ( 888) 00:15:09.542 3.337 - 3.352: 40.3025% ( 836) 00:15:09.542 3.352 - 3.368: 45.5970% ( 882) 00:15:09.542 3.368 - 3.383: 51.3536% ( 959) 00:15:09.542 3.383 - 3.398: 58.2028% ( 1141) 00:15:09.542 3.398 - 3.413: 64.8298% ( 1104) 00:15:09.542 3.413 - 3.429: 70.4424% ( 935) 00:15:09.542 3.429 - 3.444: 76.1270% ( 947) 00:15:09.542 3.444 - 3.459: 80.2329% ( 684) 00:15:09.542 3.459 - 3.474: 83.4084% ( 529) 00:15:09.542 3.474 - 3.490: 85.1552% ( 291) 00:15:09.542 3.490 - 3.505: 86.4158% ( 210) 00:15:09.542 3.505 - 3.520: 87.3102% ( 149) 00:15:09.542 3.520 - 3.535: 87.8624% ( 92) 00:15:09.542 3.535 - 3.550: 88.4507% ( 98) 00:15:09.542 3.550 - 3.566: 89.2611% ( 135) 00:15:09.542 3.566 - 3.581: 90.0834% ( 137) 00:15:09.542 3.581 - 3.596: 91.0379% ( 159) 00:15:09.542 3.596 - 3.611: 91.8783% ( 140) 00:15:09.542 3.611 - 3.627: 92.8327% ( 159) 00:15:09.542 3.627 - 3.642: 93.7331% ( 150) 00:15:09.542 3.642 - 3.657: 94.8496% ( 186) 00:15:09.542 3.657 - 3.672: 95.7681% ( 153) 00:15:09.542 3.672 - 3.688: 96.5304% ( 127) 00:15:09.542 3.688 - 3.703: 97.2928% ( 127) 00:15:09.542 3.703 - 3.718: 97.9170% ( 104) 00:15:09.542 3.718 - 3.733: 98.3853% ( 78) 00:15:09.542 3.733 - 3.749: 98.7034% ( 53) 00:15:09.542 3.749 - 3.764: 98.9915% ( 48) 00:15:09.542 3.764 - 3.779: 99.2376% ( 41) 00:15:09.542 3.779 - 3.794: 99.4477% ( 35) 00:15:09.542 3.794 - 3.810: 99.5438% ( 16) 00:15:09.542 3.810 - 3.825: 99.6038% ( 10) 00:15:09.542 3.825 - 3.840: 99.6458% ( 7) 00:15:09.542 3.840 - 3.855: 99.6759% ( 5) 00:15:09.542 3.855 - 3.870: 99.6999% ( 4) 00:15:09.542 3.870 - 3.886: 99.7119% ( 2) 00:15:09.542 3.886 - 3.901: 99.7179% ( 1) 00:15:09.542 4.328 - 4.358: 99.7239% ( 1) 00:15:09.542 5.150 - 5.181: 99.7299% ( 1) 00:15:09.542 5.394 - 5.425: 99.7359% ( 1) 00:15:09.542 5.486 - 5.516: 99.7479% ( 2) 00:15:09.542 5.516 - 5.547: 99.7539% ( 1) 00:15:09.542 5.608 - 5.638: 99.7599% ( 1) 00:15:09.542 5.790 - 5.821: 99.7659% ( 1) 00:15:09.542 5.821 - 5.851: 99.7719% ( 1) 00:15:09.542 5.882 - 5.912: 99.7779% ( 1) 00:15:09.542 5.912 - 5.943: 99.7839% ( 1) 00:15:09.542 5.973 - 6.004: 99.7899% ( 1) 00:15:09.542 6.217 - 6.248: 99.7959% ( 1) 00:15:09.542 6.309 - 6.339: 99.8019% ( 1) 00:15:09.542 6.552 - 6.583: 99.8079% ( 1) 00:15:09.542 6.735 - 6.766: 99.8139% ( 1) 00:15:09.542 6.979 - 7.010: 99.8319% ( 3) 00:15:09.542 7.040 - 7.070: 99.8379% ( 1) 00:15:09.542 7.162 - 7.192: 99.8439% ( 1) 00:15:09.542 7.253 - 7.284: 99.8559% ( 2) 00:15:09.542 7.467 - 7.497: 99.8619% ( 1) 00:15:09.542 7.497 - 7.528: 99.8679% ( 1) 00:15:09.542 7.528 - 7.558: 99.8739% ( 1) 00:15:09.542 7.619 - 7.650: 99.8799% ( 1) 00:15:09.542 7.650 - 7.680: 99.8859% ( 1) 00:15:09.542 8.290 - 8.350: 99.8920% ( 1) 00:15:09.542 8.350 - 8.411: 99.8980% ( 1) 00:15:09.542 8.411 - 8.472: 99.9040% ( 1) 00:15:09.542 13.531 - 13.592: 
99.9100% ( 1) 00:15:09.542 19.505 - 19.627: 99.9160% ( 1) 00:15:09.542 3994.575 - 4025.783: 100.0000% ( 14) 00:15:09.542 00:15:09.542 [2024-07-24 18:10:02.558382] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:09.542 Complete histogram 00:15:09.542 ================== 00:15:09.542 Range in us Cumulative Count 00:15:09.542 1.699 - 1.707: 0.0060% ( 1) 00:15:09.542 1.707 - 1.714: 0.0600% ( 9) 00:15:09.542 1.714 - 1.722: 0.2221% ( 27) 00:15:09.543 1.722 - 1.730: 0.3482% ( 21) 00:15:09.543 1.730 - 1.737: 0.4442% ( 16) 00:15:09.543 1.737 - 1.745: 0.4802% ( 6) 00:15:09.543 1.745 - 1.752: 0.5402% ( 10) 00:15:09.543 1.752 - 1.760: 2.5092% ( 328) 00:15:09.543 1.760 - 1.768: 16.5976% ( 2347) 00:15:09.543 1.768 - 1.775: 48.0401% ( 5238) 00:15:09.543 1.775 - 1.783: 72.6934% ( 4107) 00:15:09.543 1.783 - 1.790: 80.7672% ( 1345) 00:15:09.543 1.790 - 1.798: 83.0542% ( 381) 00:15:09.543 1.798 - 1.806: 84.6149% ( 260) 00:15:09.543 1.806 - 1.813: 85.4313% ( 136) 00:15:09.543 1.813 - 1.821: 86.7399% ( 218) 00:15:09.543 1.821 - 1.829: 89.9454% ( 534) 00:15:09.543 1.829 - 1.836: 93.6011% ( 609) 00:15:09.543 1.836 - 1.844: 96.0502% ( 408) 00:15:09.543 1.844 - 1.851: 97.3948% ( 224) 00:15:09.543 1.851 - 1.859: 98.2952% ( 150) 00:15:09.543 1.859 - 1.867: 98.7934% ( 83) 00:15:09.543 1.867 - 1.874: 99.0276% ( 39) 00:15:09.543 1.874 - 1.882: 99.0936% ( 11) 00:15:09.543 1.882 - 1.890: 99.1176% ( 4) 00:15:09.543 1.890 - 1.897: 99.1536% ( 6) 00:15:09.543 1.897 - 1.905: 99.1896% ( 6) 00:15:09.543 1.905 - 1.912: 99.2316% ( 7) 00:15:09.543 1.912 - 1.920: 99.2437% ( 2) 00:15:09.543 1.920 - 1.928: 99.2497% ( 1) 00:15:09.543 1.928 - 1.935: 99.2677% ( 3) 00:15:09.543 1.935 - 1.943: 99.2797% ( 2) 00:15:09.543 1.943 - 1.950: 99.2857% ( 1) 00:15:09.543 1.950 - 1.966: 99.2917% ( 1) 00:15:09.543 1.966 - 1.981: 99.3037% ( 2) 00:15:09.543 1.981 - 1.996: 99.3097% ( 1) 00:15:09.543 2.103 - 2.118: 99.3157% ( 1) 00:15:09.543 2.164 - 2.179: 99.3217% ( 1) 00:15:09.543 2.179 - 2.194: 99.3277% ( 1) 00:15:09.543 2.194 - 2.210: 99.3337% ( 1) 00:15:09.543 2.270 - 2.286: 99.3397% ( 1) 00:15:09.543 2.408 - 2.423: 99.3457% ( 1) 00:15:09.543 3.246 - 3.261: 99.3517% ( 1) 00:15:09.543 3.794 - 3.810: 99.3577% ( 1) 00:15:09.543 3.886 - 3.901: 99.3637% ( 1) 00:15:09.543 3.992 - 4.023: 99.3697% ( 1) 00:15:09.543 4.693 - 4.724: 99.3757% ( 1) 00:15:09.543 4.876 - 4.907: 99.3817% ( 1) 00:15:09.543 4.907 - 4.937: 99.3877% ( 1) 00:15:09.543 5.090 - 5.120: 99.3937% ( 1) 00:15:09.543 5.211 - 5.242: 99.3997% ( 1) 00:15:09.543 5.394 - 5.425: 99.4057% ( 1) 00:15:09.543 5.486 - 5.516: 99.4117% ( 1) 00:15:09.543 5.547 - 5.577: 99.4237% ( 2) 00:15:09.543 5.608 - 5.638: 99.4297% ( 1) 00:15:09.543 5.790 - 5.821: 99.4357% ( 1) 00:15:09.543 6.034 - 6.065: 99.4477% ( 2) 00:15:09.543 6.370 - 6.400: 99.4537% ( 1) 00:15:09.543 6.461 - 6.491: 99.4598% ( 1) 00:15:09.543 6.491 - 6.522: 99.4658% ( 1) 00:15:09.543 6.857 - 6.888: 99.4718% ( 1) 00:15:09.543 7.131 - 7.162: 99.4778% ( 1) 00:15:09.543 7.162 - 7.192: 99.4838% ( 1) 00:15:09.543 7.406 - 7.436: 99.4898% ( 1) 00:15:09.543 8.168 - 8.229: 99.4958% ( 1) 00:15:09.543 8.594 - 8.655: 99.5018% ( 1) 00:15:09.543 11.825 - 11.886: 99.5078% ( 1) 00:15:09.543 12.130 - 12.190: 99.5138% ( 1) 00:15:09.543 34.377 - 34.621: 99.5198% ( 1) 00:15:09.543 203.825 - 204.800: 99.5258% ( 1) 00:15:09.543 3838.537 - 3854.141: 99.5318% ( 1) 00:15:09.543 3947.764 - 3963.368: 99.5378% ( 1) 00:15:09.543 3994.575 - 4025.783: 100.0000% ( 77) 00:15:09.543 00:15:09.543 18:10:02 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:15:09.543 18:10:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:09.543 18:10:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:15:09.543 18:10:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:15:09.543 18:10:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:09.801 [ 00:15:09.801 { 00:15:09.801 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:09.801 "subtype": "Discovery", 00:15:09.801 "listen_addresses": [], 00:15:09.801 "allow_any_host": true, 00:15:09.801 "hosts": [] 00:15:09.801 }, 00:15:09.801 { 00:15:09.801 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:09.801 "subtype": "NVMe", 00:15:09.801 "listen_addresses": [ 00:15:09.801 { 00:15:09.801 "trtype": "VFIOUSER", 00:15:09.801 "adrfam": "IPv4", 00:15:09.801 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:09.801 "trsvcid": "0" 00:15:09.801 } 00:15:09.801 ], 00:15:09.801 "allow_any_host": true, 00:15:09.801 "hosts": [], 00:15:09.801 "serial_number": "SPDK1", 00:15:09.801 "model_number": "SPDK bdev Controller", 00:15:09.801 "max_namespaces": 32, 00:15:09.801 "min_cntlid": 1, 00:15:09.801 "max_cntlid": 65519, 00:15:09.801 "namespaces": [ 00:15:09.801 { 00:15:09.801 "nsid": 1, 00:15:09.801 "bdev_name": "Malloc1", 00:15:09.801 "name": "Malloc1", 00:15:09.801 "nguid": "0DD4689A07CE46F4B1875348BF43883B", 00:15:09.801 "uuid": "0dd4689a-07ce-46f4-b187-5348bf43883b" 00:15:09.801 } 00:15:09.801 ] 00:15:09.801 }, 00:15:09.801 { 00:15:09.801 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:09.801 "subtype": "NVMe", 00:15:09.801 "listen_addresses": [ 00:15:09.801 { 00:15:09.801 "trtype": "VFIOUSER", 00:15:09.801 "adrfam": "IPv4", 00:15:09.801 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:09.801 "trsvcid": "0" 00:15:09.801 } 00:15:09.801 ], 00:15:09.801 "allow_any_host": true, 00:15:09.801 "hosts": [], 00:15:09.801 "serial_number": "SPDK2", 00:15:09.801 "model_number": "SPDK bdev Controller", 00:15:09.801 "max_namespaces": 32, 00:15:09.801 "min_cntlid": 1, 00:15:09.801 "max_cntlid": 65519, 00:15:09.801 "namespaces": [ 00:15:09.801 { 00:15:09.801 "nsid": 1, 00:15:09.801 "bdev_name": "Malloc2", 00:15:09.801 "name": "Malloc2", 00:15:09.801 "nguid": "1EC5F1FAFF924C538729FF77E1F45325", 00:15:09.801 "uuid": "1ec5f1fa-ff92-4c53-8729-ff77e1f45325" 00:15:09.801 } 00:15:09.801 ] 00:15:09.801 } 00:15:09.801 ] 00:15:09.801 18:10:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:09.801 18:10:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:15:09.801 18:10:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3386859 00:15:09.801 18:10:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:09.801 18:10:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@1265 -- # local i=0 00:15:09.801 18:10:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:09.801 18:10:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:09.801 18:10:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:15:09.801 18:10:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:09.801 18:10:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:15:09.801 EAL: No free 2048 kB hugepages reported on node 1 00:15:10.066 [2024-07-24 18:10:02.923961] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:10.066 Malloc3 00:15:10.066 18:10:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:15:10.066 [2024-07-24 18:10:03.133536] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:10.326 18:10:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:10.326 Asynchronous Event Request test 00:15:10.326 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:10.326 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:10.326 Registering asynchronous event callbacks... 00:15:10.326 Starting namespace attribute notice tests for all controllers... 00:15:10.326 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:10.326 aer_cb - Changed Namespace 00:15:10.326 Cleaning up... 
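The AER sequence above is driven entirely over RPC: a fresh malloc bdev is created and attached to cnode1 as a second namespace, which is what makes the aer binary waiting on /tmp/aer_touch_file log the Changed Namespace callback. A minimal sketch of those two calls, assuming the target from this run is still listening (the refreshed nvmf_get_subsystems listing follows below):
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
"$RPC" bdev_malloc_create 64 512 --name Malloc3                          # 64 MB bdev with 512-byte blocks
"$RPC" nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2     # attach as NSID 2; triggers the namespace-change AEN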
00:15:10.326 [ 00:15:10.326 { 00:15:10.326 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:10.326 "subtype": "Discovery", 00:15:10.326 "listen_addresses": [], 00:15:10.326 "allow_any_host": true, 00:15:10.326 "hosts": [] 00:15:10.326 }, 00:15:10.326 { 00:15:10.326 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:10.326 "subtype": "NVMe", 00:15:10.326 "listen_addresses": [ 00:15:10.326 { 00:15:10.326 "trtype": "VFIOUSER", 00:15:10.326 "adrfam": "IPv4", 00:15:10.326 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:10.326 "trsvcid": "0" 00:15:10.326 } 00:15:10.326 ], 00:15:10.327 "allow_any_host": true, 00:15:10.327 "hosts": [], 00:15:10.327 "serial_number": "SPDK1", 00:15:10.327 "model_number": "SPDK bdev Controller", 00:15:10.327 "max_namespaces": 32, 00:15:10.327 "min_cntlid": 1, 00:15:10.327 "max_cntlid": 65519, 00:15:10.327 "namespaces": [ 00:15:10.327 { 00:15:10.327 "nsid": 1, 00:15:10.327 "bdev_name": "Malloc1", 00:15:10.327 "name": "Malloc1", 00:15:10.327 "nguid": "0DD4689A07CE46F4B1875348BF43883B", 00:15:10.327 "uuid": "0dd4689a-07ce-46f4-b187-5348bf43883b" 00:15:10.327 }, 00:15:10.327 { 00:15:10.327 "nsid": 2, 00:15:10.327 "bdev_name": "Malloc3", 00:15:10.327 "name": "Malloc3", 00:15:10.327 "nguid": "87366851DDB045C39F7478B057399A7B", 00:15:10.327 "uuid": "87366851-ddb0-45c3-9f74-78b057399a7b" 00:15:10.327 } 00:15:10.327 ] 00:15:10.327 }, 00:15:10.327 { 00:15:10.327 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:10.327 "subtype": "NVMe", 00:15:10.327 "listen_addresses": [ 00:15:10.327 { 00:15:10.327 "trtype": "VFIOUSER", 00:15:10.327 "adrfam": "IPv4", 00:15:10.327 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:10.327 "trsvcid": "0" 00:15:10.327 } 00:15:10.327 ], 00:15:10.327 "allow_any_host": true, 00:15:10.327 "hosts": [], 00:15:10.327 "serial_number": "SPDK2", 00:15:10.327 "model_number": "SPDK bdev Controller", 00:15:10.327 "max_namespaces": 32, 00:15:10.327 "min_cntlid": 1, 00:15:10.327 "max_cntlid": 65519, 00:15:10.327 "namespaces": [ 00:15:10.327 { 00:15:10.327 "nsid": 1, 00:15:10.327 "bdev_name": "Malloc2", 00:15:10.327 "name": "Malloc2", 00:15:10.327 "nguid": "1EC5F1FAFF924C538729FF77E1F45325", 00:15:10.327 "uuid": "1ec5f1fa-ff92-4c53-8729-ff77e1f45325" 00:15:10.327 } 00:15:10.327 ] 00:15:10.327 } 00:15:10.327 ] 00:15:10.327 18:10:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3386859 00:15:10.327 18:10:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:10.327 18:10:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:10.327 18:10:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:15:10.327 18:10:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:10.327 [2024-07-24 18:10:03.357766] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 
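The JSON above is the raw nvmf_get_subsystems view after the attach: cnode1 now carries Malloc1 (nsid 1) and Malloc3 (nsid 2), each with an NGUID and UUID, while cnode2 is unchanged. A small sketch for flattening that listing into one line per namespace; jq availability is an assumption of this illustration:
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems \
  | jq -r '.[] | select(.subtype == "NVMe") | .nqn as $nqn | .namespaces[] | "\($nqn) nsid=\(.nsid) bdev=\(.bdev_name) uuid=\(.uuid)"'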
00:15:10.327 [2024-07-24 18:10:03.357816] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3386875 ] 00:15:10.327 EAL: No free 2048 kB hugepages reported on node 1 00:15:10.327 [2024-07-24 18:10:03.386668] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:15:10.327 [2024-07-24 18:10:03.396763] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:10.327 [2024-07-24 18:10:03.396786] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f51ab3ae000 00:15:10.327 [2024-07-24 18:10:03.397763] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:10.327 [2024-07-24 18:10:03.398768] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:10.327 [2024-07-24 18:10:03.399775] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:10.327 [2024-07-24 18:10:03.400777] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:10.327 [2024-07-24 18:10:03.401788] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:10.327 [2024-07-24 18:10:03.402796] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:10.327 [2024-07-24 18:10:03.403807] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:10.327 [2024-07-24 18:10:03.404810] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:10.327 [2024-07-24 18:10:03.405824] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:10.327 [2024-07-24 18:10:03.405833] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f51ab3a3000 00:15:10.327 [2024-07-24 18:10:03.406747] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:10.587 [2024-07-24 18:10:03.419109] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:15:10.587 [2024-07-24 18:10:03.419131] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:15:10.587 [2024-07-24 18:10:03.421198] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:10.587 [2024-07-24 18:10:03.421236] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:10.587 [2024-07-24 18:10:03.421305] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to 
wait for connect adminq (no timeout) 00:15:10.587 [2024-07-24 18:10:03.421319] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:15:10.587 [2024-07-24 18:10:03.421323] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:15:10.587 [2024-07-24 18:10:03.422203] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:15:10.587 [2024-07-24 18:10:03.422213] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:15:10.587 [2024-07-24 18:10:03.422220] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:15:10.587 [2024-07-24 18:10:03.423214] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:10.587 [2024-07-24 18:10:03.423223] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:15:10.587 [2024-07-24 18:10:03.423229] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:15:10.587 [2024-07-24 18:10:03.424220] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:15:10.587 [2024-07-24 18:10:03.424231] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:10.587 [2024-07-24 18:10:03.425229] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:15:10.587 [2024-07-24 18:10:03.425238] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:15:10.587 [2024-07-24 18:10:03.425242] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:15:10.587 [2024-07-24 18:10:03.425248] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:10.587 [2024-07-24 18:10:03.425353] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:15:10.587 [2024-07-24 18:10:03.425357] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:10.587 [2024-07-24 18:10:03.425361] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:15:10.587 [2024-07-24 18:10:03.426237] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:15:10.587 [2024-07-24 18:10:03.427245] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:15:10.587 [2024-07-24 18:10:03.428252] nvme_vfio_user.c: 
49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:10.587 [2024-07-24 18:10:03.429258] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:10.587 [2024-07-24 18:10:03.429295] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:10.587 [2024-07-24 18:10:03.430274] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:15:10.587 [2024-07-24 18:10:03.430283] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:10.587 [2024-07-24 18:10:03.430287] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:15:10.587 [2024-07-24 18:10:03.430303] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:15:10.587 [2024-07-24 18:10:03.430310] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:15:10.587 [2024-07-24 18:10:03.430321] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:10.587 [2024-07-24 18:10:03.430325] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:10.587 [2024-07-24 18:10:03.430328] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:10.587 [2024-07-24 18:10:03.430340] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:10.587 [2024-07-24 18:10:03.436498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:10.587 [2024-07-24 18:10:03.436509] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:15:10.587 [2024-07-24 18:10:03.436513] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:15:10.587 [2024-07-24 18:10:03.436519] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:15:10.587 [2024-07-24 18:10:03.436523] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:10.587 [2024-07-24 18:10:03.436527] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:15:10.587 [2024-07-24 18:10:03.436531] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:15:10.587 [2024-07-24 18:10:03.436535] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:15:10.587 [2024-07-24 18:10:03.436542] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:15:10.587 [2024-07-24 18:10:03.436553] 
nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:10.587 [2024-07-24 18:10:03.444496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:10.587 [2024-07-24 18:10:03.444509] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:10.587 [2024-07-24 18:10:03.444517] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:10.587 [2024-07-24 18:10:03.444524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:10.587 [2024-07-24 18:10:03.444531] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:10.587 [2024-07-24 18:10:03.444535] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:15:10.587 [2024-07-24 18:10:03.444542] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:10.587 [2024-07-24 18:10:03.444550] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:10.587 [2024-07-24 18:10:03.452495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:10.587 [2024-07-24 18:10:03.452503] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:15:10.587 [2024-07-24 18:10:03.452507] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:10.587 [2024-07-24 18:10:03.452514] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:15:10.587 [2024-07-24 18:10:03.452520] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:15:10.587 [2024-07-24 18:10:03.452527] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:10.587 [2024-07-24 18:10:03.464498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:10.587 [2024-07-24 18:10:03.464553] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:15:10.587 [2024-07-24 18:10:03.464561] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:15:10.587 [2024-07-24 18:10:03.464568] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:10.588 [2024-07-24 18:10:03.464574] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:10.588 [2024-07-24 
18:10:03.464577] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:10.588 [2024-07-24 18:10:03.464583] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:10.588 [2024-07-24 18:10:03.476498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:10.588 [2024-07-24 18:10:03.476509] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:15:10.588 [2024-07-24 18:10:03.476519] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:15:10.588 [2024-07-24 18:10:03.476525] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:15:10.588 [2024-07-24 18:10:03.476532] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:10.588 [2024-07-24 18:10:03.476536] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:10.588 [2024-07-24 18:10:03.476539] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:10.588 [2024-07-24 18:10:03.476544] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:10.588 [2024-07-24 18:10:03.484497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:10.588 [2024-07-24 18:10:03.484510] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:10.588 [2024-07-24 18:10:03.484517] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:10.588 [2024-07-24 18:10:03.484524] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:10.588 [2024-07-24 18:10:03.484527] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:10.588 [2024-07-24 18:10:03.484530] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:10.588 [2024-07-24 18:10:03.484536] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:10.588 [2024-07-24 18:10:03.492498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:10.588 [2024-07-24 18:10:03.492507] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:10.588 [2024-07-24 18:10:03.492512] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:15:10.588 [2024-07-24 18:10:03.492519] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:15:10.588 [2024-07-24 
18:10:03.492526] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:15:10.588 [2024-07-24 18:10:03.492531] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:10.588 [2024-07-24 18:10:03.492535] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:15:10.588 [2024-07-24 18:10:03.492540] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:15:10.588 [2024-07-24 18:10:03.492545] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:15:10.588 [2024-07-24 18:10:03.492550] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:15:10.588 [2024-07-24 18:10:03.492565] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:10.588 [2024-07-24 18:10:03.500497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:10.588 [2024-07-24 18:10:03.500511] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:10.588 [2024-07-24 18:10:03.512298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:10.588 [2024-07-24 18:10:03.512311] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:10.588 [2024-07-24 18:10:03.523499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:10.588 [2024-07-24 18:10:03.523512] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:10.588 [2024-07-24 18:10:03.531499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:10.588 [2024-07-24 18:10:03.531515] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:10.588 [2024-07-24 18:10:03.531519] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:10.588 [2024-07-24 18:10:03.531522] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:10.588 [2024-07-24 18:10:03.531525] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:10.588 [2024-07-24 18:10:03.531528] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:15:10.588 [2024-07-24 18:10:03.531534] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:10.588 [2024-07-24 18:10:03.531540] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:10.588 [2024-07-24 18:10:03.531544] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: 
*DEBUG*: prp1 = 0x2000002fc000 00:15:10.588 [2024-07-24 18:10:03.531547] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:10.588 [2024-07-24 18:10:03.531552] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:10.588 [2024-07-24 18:10:03.531558] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:10.588 [2024-07-24 18:10:03.531561] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:10.588 [2024-07-24 18:10:03.531564] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:10.588 [2024-07-24 18:10:03.531569] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:10.588 [2024-07-24 18:10:03.531575] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:10.588 [2024-07-24 18:10:03.531579] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:10.588 [2024-07-24 18:10:03.531582] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:10.588 [2024-07-24 18:10:03.531587] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:10.588 [2024-07-24 18:10:03.539497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:10.588 [2024-07-24 18:10:03.539513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:10.588 [2024-07-24 18:10:03.539522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:10.588 [2024-07-24 18:10:03.539528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:10.588 ===================================================== 00:15:10.588 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:10.588 ===================================================== 00:15:10.588 Controller Capabilities/Features 00:15:10.588 ================================ 00:15:10.588 Vendor ID: 4e58 00:15:10.588 Subsystem Vendor ID: 4e58 00:15:10.588 Serial Number: SPDK2 00:15:10.588 Model Number: SPDK bdev Controller 00:15:10.588 Firmware Version: 24.09 00:15:10.588 Recommended Arb Burst: 6 00:15:10.588 IEEE OUI Identifier: 8d 6b 50 00:15:10.588 Multi-path I/O 00:15:10.588 May have multiple subsystem ports: Yes 00:15:10.588 May have multiple controllers: Yes 00:15:10.588 Associated with SR-IOV VF: No 00:15:10.588 Max Data Transfer Size: 131072 00:15:10.588 Max Number of Namespaces: 32 00:15:10.588 Max Number of I/O Queues: 127 00:15:10.588 NVMe Specification Version (VS): 1.3 00:15:10.588 NVMe Specification Version (Identify): 1.3 00:15:10.588 Maximum Queue Entries: 256 00:15:10.588 Contiguous Queues Required: Yes 00:15:10.588 Arbitration Mechanisms Supported 00:15:10.588 Weighted Round Robin: Not Supported 00:15:10.588 Vendor Specific: Not Supported 00:15:10.588 Reset Timeout: 15000 ms 00:15:10.588 Doorbell Stride: 4 
bytes 00:15:10.588 NVM Subsystem Reset: Not Supported 00:15:10.588 Command Sets Supported 00:15:10.588 NVM Command Set: Supported 00:15:10.588 Boot Partition: Not Supported 00:15:10.588 Memory Page Size Minimum: 4096 bytes 00:15:10.588 Memory Page Size Maximum: 4096 bytes 00:15:10.588 Persistent Memory Region: Not Supported 00:15:10.588 Optional Asynchronous Events Supported 00:15:10.588 Namespace Attribute Notices: Supported 00:15:10.588 Firmware Activation Notices: Not Supported 00:15:10.588 ANA Change Notices: Not Supported 00:15:10.588 PLE Aggregate Log Change Notices: Not Supported 00:15:10.588 LBA Status Info Alert Notices: Not Supported 00:15:10.588 EGE Aggregate Log Change Notices: Not Supported 00:15:10.588 Normal NVM Subsystem Shutdown event: Not Supported 00:15:10.588 Zone Descriptor Change Notices: Not Supported 00:15:10.588 Discovery Log Change Notices: Not Supported 00:15:10.588 Controller Attributes 00:15:10.588 128-bit Host Identifier: Supported 00:15:10.588 Non-Operational Permissive Mode: Not Supported 00:15:10.588 NVM Sets: Not Supported 00:15:10.588 Read Recovery Levels: Not Supported 00:15:10.588 Endurance Groups: Not Supported 00:15:10.588 Predictable Latency Mode: Not Supported 00:15:10.588 Traffic Based Keep ALive: Not Supported 00:15:10.588 Namespace Granularity: Not Supported 00:15:10.588 SQ Associations: Not Supported 00:15:10.588 UUID List: Not Supported 00:15:10.588 Multi-Domain Subsystem: Not Supported 00:15:10.588 Fixed Capacity Management: Not Supported 00:15:10.588 Variable Capacity Management: Not Supported 00:15:10.588 Delete Endurance Group: Not Supported 00:15:10.588 Delete NVM Set: Not Supported 00:15:10.588 Extended LBA Formats Supported: Not Supported 00:15:10.588 Flexible Data Placement Supported: Not Supported 00:15:10.588 00:15:10.588 Controller Memory Buffer Support 00:15:10.588 ================================ 00:15:10.588 Supported: No 00:15:10.588 00:15:10.588 Persistent Memory Region Support 00:15:10.588 ================================ 00:15:10.588 Supported: No 00:15:10.588 00:15:10.588 Admin Command Set Attributes 00:15:10.588 ============================ 00:15:10.588 Security Send/Receive: Not Supported 00:15:10.588 Format NVM: Not Supported 00:15:10.588 Firmware Activate/Download: Not Supported 00:15:10.588 Namespace Management: Not Supported 00:15:10.588 Device Self-Test: Not Supported 00:15:10.588 Directives: Not Supported 00:15:10.588 NVMe-MI: Not Supported 00:15:10.588 Virtualization Management: Not Supported 00:15:10.588 Doorbell Buffer Config: Not Supported 00:15:10.588 Get LBA Status Capability: Not Supported 00:15:10.588 Command & Feature Lockdown Capability: Not Supported 00:15:10.588 Abort Command Limit: 4 00:15:10.588 Async Event Request Limit: 4 00:15:10.588 Number of Firmware Slots: N/A 00:15:10.588 Firmware Slot 1 Read-Only: N/A 00:15:10.588 Firmware Activation Without Reset: N/A 00:15:10.588 Multiple Update Detection Support: N/A 00:15:10.588 Firmware Update Granularity: No Information Provided 00:15:10.588 Per-Namespace SMART Log: No 00:15:10.588 Asymmetric Namespace Access Log Page: Not Supported 00:15:10.588 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:15:10.588 Command Effects Log Page: Supported 00:15:10.588 Get Log Page Extended Data: Supported 00:15:10.588 Telemetry Log Pages: Not Supported 00:15:10.588 Persistent Event Log Pages: Not Supported 00:15:10.588 Supported Log Pages Log Page: May Support 00:15:10.588 Commands Supported & Effects Log Page: Not Supported 00:15:10.588 Feature Identifiers & Effects Log 
Page:May Support 00:15:10.588 NVMe-MI Commands & Effects Log Page: May Support 00:15:10.588 Data Area 4 for Telemetry Log: Not Supported 00:15:10.588 Error Log Page Entries Supported: 128 00:15:10.588 Keep Alive: Supported 00:15:10.588 Keep Alive Granularity: 10000 ms 00:15:10.588 00:15:10.588 NVM Command Set Attributes 00:15:10.589 ========================== 00:15:10.589 Submission Queue Entry Size 00:15:10.589 Max: 64 00:15:10.589 Min: 64 00:15:10.589 Completion Queue Entry Size 00:15:10.589 Max: 16 00:15:10.589 Min: 16 00:15:10.589 Number of Namespaces: 32 00:15:10.589 Compare Command: Supported 00:15:10.589 Write Uncorrectable Command: Not Supported 00:15:10.589 Dataset Management Command: Supported 00:15:10.589 Write Zeroes Command: Supported 00:15:10.589 Set Features Save Field: Not Supported 00:15:10.589 Reservations: Not Supported 00:15:10.589 Timestamp: Not Supported 00:15:10.589 Copy: Supported 00:15:10.589 Volatile Write Cache: Present 00:15:10.589 Atomic Write Unit (Normal): 1 00:15:10.589 Atomic Write Unit (PFail): 1 00:15:10.589 Atomic Compare & Write Unit: 1 00:15:10.589 Fused Compare & Write: Supported 00:15:10.589 Scatter-Gather List 00:15:10.589 SGL Command Set: Supported (Dword aligned) 00:15:10.589 SGL Keyed: Not Supported 00:15:10.589 SGL Bit Bucket Descriptor: Not Supported 00:15:10.589 SGL Metadata Pointer: Not Supported 00:15:10.589 Oversized SGL: Not Supported 00:15:10.589 SGL Metadata Address: Not Supported 00:15:10.589 SGL Offset: Not Supported 00:15:10.589 Transport SGL Data Block: Not Supported 00:15:10.589 Replay Protected Memory Block: Not Supported 00:15:10.589 00:15:10.589 Firmware Slot Information 00:15:10.589 ========================= 00:15:10.589 Active slot: 1 00:15:10.589 Slot 1 Firmware Revision: 24.09 00:15:10.589 00:15:10.589 00:15:10.589 Commands Supported and Effects 00:15:10.589 ============================== 00:15:10.589 Admin Commands 00:15:10.589 -------------- 00:15:10.589 Get Log Page (02h): Supported 00:15:10.589 Identify (06h): Supported 00:15:10.589 Abort (08h): Supported 00:15:10.589 Set Features (09h): Supported 00:15:10.589 Get Features (0Ah): Supported 00:15:10.589 Asynchronous Event Request (0Ch): Supported 00:15:10.589 Keep Alive (18h): Supported 00:15:10.589 I/O Commands 00:15:10.589 ------------ 00:15:10.589 Flush (00h): Supported LBA-Change 00:15:10.589 Write (01h): Supported LBA-Change 00:15:10.589 Read (02h): Supported 00:15:10.589 Compare (05h): Supported 00:15:10.589 Write Zeroes (08h): Supported LBA-Change 00:15:10.589 Dataset Management (09h): Supported LBA-Change 00:15:10.589 Copy (19h): Supported LBA-Change 00:15:10.589 00:15:10.589 Error Log 00:15:10.589 ========= 00:15:10.589 00:15:10.589 Arbitration 00:15:10.589 =========== 00:15:10.589 Arbitration Burst: 1 00:15:10.589 00:15:10.589 Power Management 00:15:10.589 ================ 00:15:10.589 Number of Power States: 1 00:15:10.589 Current Power State: Power State #0 00:15:10.589 Power State #0: 00:15:10.589 Max Power: 0.00 W 00:15:10.589 Non-Operational State: Operational 00:15:10.589 Entry Latency: Not Reported 00:15:10.589 Exit Latency: Not Reported 00:15:10.589 Relative Read Throughput: 0 00:15:10.589 Relative Read Latency: 0 00:15:10.589 Relative Write Throughput: 0 00:15:10.589 Relative Write Latency: 0 00:15:10.589 Idle Power: Not Reported 00:15:10.589 Active Power: Not Reported 00:15:10.589 Non-Operational Permissive Mode: Not Supported 00:15:10.589 00:15:10.589 Health Information 00:15:10.589 ================== 00:15:10.589 Critical Warnings: 00:15:10.589 
Available Spare Space: OK 00:15:10.589 Temperature: OK 00:15:10.589 Device Reliability: OK 00:15:10.589 Read Only: No 00:15:10.589 Volatile Memory Backup: OK 00:15:10.589 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:10.589 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:10.589 Available Spare: 0% 00:15:10.589 Available Spare Threshold: 0% 00:15:10.589 [2024-07-24 18:10:03.539613] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:10.589 [2024-07-24 18:10:03.547498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:10.589 [2024-07-24 18:10:03.547527] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:15:10.589 [2024-07-24 18:10:03.547535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.589 [2024-07-24 18:10:03.547540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.589 [2024-07-24 18:10:03.547546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.589 [2024-07-24 18:10:03.547551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:10.589 [2024-07-24 18:10:03.547604] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:10.589 [2024-07-24 18:10:03.547614] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:15:10.589 [2024-07-24 18:10:03.548608] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:10.589 [2024-07-24 18:10:03.548651] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:15:10.589 [2024-07-24 18:10:03.548657] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:15:10.589 [2024-07-24 18:10:03.549610] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:15:10.589 [2024-07-24 18:10:03.549621] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:15:10.589 [2024-07-24 18:10:03.549666] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:15:10.589 [2024-07-24 18:10:03.552500] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:10.589 Life Percentage Used: 0% 00:15:10.589 Data Units Read: 0 00:15:10.589 Data Units Written: 0 00:15:10.589 Host Read Commands: 0 00:15:10.589 Host Write Commands: 0 00:15:10.589 Controller Busy Time: 0 minutes 00:15:10.589 Power Cycles: 0 00:15:10.589 Power On Hours: 0 hours 00:15:10.589 Unsafe Shutdowns: 0 00:15:10.589 Unrecoverable Media Errors: 0 00:15:10.589 Lifetime Error Log Entries: 0 00:15:10.589 Warning Temperature Time: 0 minutes 00:15:10.589 Critical Temperature Time: 0 minutes 00:15:10.589 
00:15:10.589 Number of Queues 00:15:10.589 ================ 00:15:10.589 Number of I/O Submission Queues: 127 00:15:10.589 Number of I/O Completion Queues: 127 00:15:10.589 00:15:10.589 Active Namespaces 00:15:10.589 ================= 00:15:10.589 Namespace ID:1 00:15:10.589 Error Recovery Timeout: Unlimited 00:15:10.589 Command Set Identifier: NVM (00h) 00:15:10.589 Deallocate: Supported 00:15:10.589 Deallocated/Unwritten Error: Not Supported 00:15:10.589 Deallocated Read Value: Unknown 00:15:10.589 Deallocate in Write Zeroes: Not Supported 00:15:10.589 Deallocated Guard Field: 0xFFFF 00:15:10.589 Flush: Supported 00:15:10.589 Reservation: Supported 00:15:10.589 Namespace Sharing Capabilities: Multiple Controllers 00:15:10.589 Size (in LBAs): 131072 (0GiB) 00:15:10.589 Capacity (in LBAs): 131072 (0GiB) 00:15:10.589 Utilization (in LBAs): 131072 (0GiB) 00:15:10.589 NGUID: 1EC5F1FAFF924C538729FF77E1F45325 00:15:10.589 UUID: 1ec5f1fa-ff92-4c53-8729-ff77e1f45325 00:15:10.589 Thin Provisioning: Not Supported 00:15:10.589 Per-NS Atomic Units: Yes 00:15:10.589 Atomic Boundary Size (Normal): 0 00:15:10.589 Atomic Boundary Size (PFail): 0 00:15:10.589 Atomic Boundary Offset: 0 00:15:10.589 Maximum Single Source Range Length: 65535 00:15:10.589 Maximum Copy Length: 65535 00:15:10.589 Maximum Source Range Count: 1 00:15:10.589 NGUID/EUI64 Never Reused: No 00:15:10.589 Namespace Write Protected: No 00:15:10.589 Number of LBA Formats: 1 00:15:10.589 Current LBA Format: LBA Format #00 00:15:10.589 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:10.589 00:15:10.589 18:10:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:10.589 EAL: No free 2048 kB hugepages reported on node 1 00:15:10.847 [2024-07-24 18:10:03.772707] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:16.114 Initializing NVMe Controllers 00:15:16.114 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:16.114 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:16.114 Initialization complete. Launching workers. 
00:15:16.114 ======================================================== 00:15:16.114 Latency(us) 00:15:16.114 Device Information : IOPS MiB/s Average min max 00:15:16.114 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39955.47 156.08 3203.39 954.50 7612.65 00:15:16.114 ======================================================== 00:15:16.114 Total : 39955.47 156.08 3203.39 954.50 7612.65 00:15:16.114 00:15:16.114 [2024-07-24 18:10:08.876736] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:16.114 18:10:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:16.114 EAL: No free 2048 kB hugepages reported on node 1 00:15:16.114 [2024-07-24 18:10:09.091433] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:21.418 Initializing NVMe Controllers 00:15:21.418 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:21.418 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:21.418 Initialization complete. Launching workers. 00:15:21.418 ======================================================== 00:15:21.418 Latency(us) 00:15:21.418 Device Information : IOPS MiB/s Average min max 00:15:21.418 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39910.27 155.90 3207.02 951.32 9617.79 00:15:21.418 ======================================================== 00:15:21.418 Total : 39910.27 155.90 3207.02 951.32 9617.79 00:15:21.418 00:15:21.418 [2024-07-24 18:10:14.110334] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:21.418 18:10:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:21.418 EAL: No free 2048 kB hugepages reported on node 1 00:15:21.418 [2024-07-24 18:10:14.301521] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:26.691 [2024-07-24 18:10:19.432591] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:26.691 Initializing NVMe Controllers 00:15:26.691 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:26.691 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:26.691 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:15:26.691 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:15:26.691 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:15:26.691 Initialization complete. Launching workers. 
00:15:26.691 Starting thread on core 2 00:15:26.691 Starting thread on core 3 00:15:26.691 Starting thread on core 1 00:15:26.691 18:10:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:15:26.691 EAL: No free 2048 kB hugepages reported on node 1 00:15:26.691 [2024-07-24 18:10:19.712903] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:29.978 [2024-07-24 18:10:22.777193] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:29.978 Initializing NVMe Controllers 00:15:29.978 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:29.978 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:29.978 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:15:29.978 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:15:29.978 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:15:29.978 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:15:29.978 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:29.978 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:29.978 Initialization complete. Launching workers. 00:15:29.978 Starting thread on core 1 with urgent priority queue 00:15:29.978 Starting thread on core 2 with urgent priority queue 00:15:29.978 Starting thread on core 3 with urgent priority queue 00:15:29.978 Starting thread on core 0 with urgent priority queue 00:15:29.978 SPDK bdev Controller (SPDK2 ) core 0: 9415.00 IO/s 10.62 secs/100000 ios 00:15:29.978 SPDK bdev Controller (SPDK2 ) core 1: 8159.67 IO/s 12.26 secs/100000 ios 00:15:29.978 SPDK bdev Controller (SPDK2 ) core 2: 9312.00 IO/s 10.74 secs/100000 ios 00:15:29.978 SPDK bdev Controller (SPDK2 ) core 3: 7567.33 IO/s 13.21 secs/100000 ios 00:15:29.978 ======================================================== 00:15:29.978 00:15:29.978 18:10:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:29.978 EAL: No free 2048 kB hugepages reported on node 1 00:15:29.978 [2024-07-24 18:10:23.053881] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:30.237 Initializing NVMe Controllers 00:15:30.237 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:30.237 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:30.237 Namespace ID: 1 size: 0GB 00:15:30.237 Initialization complete. 00:15:30.237 INFO: using host memory buffer for IO 00:15:30.237 Hello world! 
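Each of the example runs above (spdk_nvme_perf, reconnect, arbitration, hello_world) reaches the same target through SPDK's transport ID string rather than a PCI address. A minimal sketch of that invocation pattern, assuming the checkout path and vfio-user socket layout used by this job; SPDK_DIR and TRID are illustrative shell variables, not part of the test scripts:

# Sketch only: values copied from the runs above; adjust paths/NQN for other setups.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
# 4 KiB reads at queue depth 128 for 5 seconds, worker pinned to core 1 (-c 0x2);
# -s 256 and -g are the DPDK memory options carried over from the run above.
"$SPDK_DIR/build/bin/spdk_nvme_perf" -r "$TRID" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2

The reconnect and arbitration binaries under build/examples accept the same -r string, which is what lets one vfio-user socket path serve every test in this block.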
00:15:30.237 [2024-07-24 18:10:23.063940] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:30.237 18:10:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:30.237 EAL: No free 2048 kB hugepages reported on node 1 00:15:30.497 [2024-07-24 18:10:23.331220] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:31.434 Initializing NVMe Controllers 00:15:31.434 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:31.434 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:31.434 Initialization complete. Launching workers. 00:15:31.434 submit (in ns) avg, min, max = 6213.0, 3154.3, 4000097.1 00:15:31.434 complete (in ns) avg, min, max = 21640.8, 1705.7, 5994103.8 00:15:31.434 00:15:31.434 Submit histogram 00:15:31.434 ================ 00:15:31.434 Range in us Cumulative Count 00:15:31.434 3.154 - 3.170: 0.0584% ( 10) 00:15:31.434 3.170 - 3.185: 0.1811% ( 21) 00:15:31.434 3.185 - 3.200: 0.4381% ( 44) 00:15:31.434 3.200 - 3.215: 0.8645% ( 73) 00:15:31.434 3.215 - 3.230: 1.3902% ( 90) 00:15:31.434 3.230 - 3.246: 2.5292% ( 195) 00:15:31.434 3.246 - 3.261: 5.6016% ( 526) 00:15:31.434 3.261 - 3.276: 10.4848% ( 836) 00:15:31.434 3.276 - 3.291: 16.2734% ( 991) 00:15:31.434 3.291 - 3.307: 23.0140% ( 1154) 00:15:31.434 3.307 - 3.322: 29.5210% ( 1114) 00:15:31.434 3.322 - 3.337: 35.0350% ( 944) 00:15:31.434 3.337 - 3.352: 40.5607% ( 946) 00:15:31.434 3.352 - 3.368: 46.2967% ( 982) 00:15:31.434 3.368 - 3.383: 51.4778% ( 887) 00:15:31.434 3.383 - 3.398: 56.5187% ( 863) 00:15:31.434 3.398 - 3.413: 63.2593% ( 1154) 00:15:31.434 3.413 - 3.429: 69.6554% ( 1095) 00:15:31.434 3.429 - 3.444: 74.6320% ( 852) 00:15:31.434 3.444 - 3.459: 79.7605% ( 878) 00:15:31.434 3.459 - 3.474: 83.1192% ( 575) 00:15:31.434 3.474 - 3.490: 85.3797% ( 387) 00:15:31.434 3.490 - 3.505: 86.7231% ( 230) 00:15:31.434 3.505 - 3.520: 87.5175% ( 136) 00:15:31.434 3.520 - 3.535: 87.9731% ( 78) 00:15:31.434 3.535 - 3.550: 88.4696% ( 85) 00:15:31.434 3.550 - 3.566: 89.1063% ( 109) 00:15:31.434 3.566 - 3.581: 90.0584% ( 163) 00:15:31.434 3.581 - 3.596: 90.9463% ( 152) 00:15:31.434 3.596 - 3.611: 91.8575% ( 156) 00:15:31.434 3.611 - 3.627: 92.7687% ( 156) 00:15:31.434 3.627 - 3.642: 93.6624% ( 153) 00:15:31.434 3.642 - 3.657: 94.5152% ( 146) 00:15:31.434 3.657 - 3.672: 95.5607% ( 179) 00:15:31.434 3.672 - 3.688: 96.5012% ( 161) 00:15:31.434 3.688 - 3.703: 97.1379% ( 109) 00:15:31.434 3.703 - 3.718: 97.6986% ( 96) 00:15:31.434 3.718 - 3.733: 98.1776% ( 82) 00:15:31.434 3.733 - 3.749: 98.5397% ( 62) 00:15:31.434 3.749 - 3.764: 98.8201% ( 48) 00:15:31.434 3.764 - 3.779: 99.0713% ( 43) 00:15:31.434 3.779 - 3.794: 99.2523% ( 31) 00:15:31.434 3.794 - 3.810: 99.3808% ( 22) 00:15:31.434 3.810 - 3.825: 99.4509% ( 12) 00:15:31.434 3.825 - 3.840: 99.5386% ( 15) 00:15:31.434 3.840 - 3.855: 99.5794% ( 7) 00:15:31.434 3.931 - 3.962: 99.5853% ( 1) 00:15:31.434 3.962 - 3.992: 99.5911% ( 1) 00:15:31.434 5.150 - 5.181: 99.5970% ( 1) 00:15:31.434 5.516 - 5.547: 99.6028% ( 1) 00:15:31.434 5.577 - 5.608: 99.6086% ( 1) 00:15:31.434 5.608 - 5.638: 99.6145% ( 1) 00:15:31.435 5.669 - 5.699: 99.6203% ( 1) 00:15:31.435 5.821 - 5.851: 99.6262% ( 1) 00:15:31.435 5.912 - 5.943: 99.6379% ( 2) 00:15:31.435 5.973 - 6.004: 
99.6437% ( 1) 00:15:31.435 6.004 - 6.034: 99.6554% ( 2) 00:15:31.435 6.126 - 6.156: 99.6612% ( 1) 00:15:31.435 6.156 - 6.187: 99.6671% ( 1) 00:15:31.435 6.187 - 6.217: 99.6729% ( 1) 00:15:31.435 6.278 - 6.309: 99.6846% ( 2) 00:15:31.435 6.339 - 6.370: 99.6963% ( 2) 00:15:31.435 6.370 - 6.400: 99.7021% ( 1) 00:15:31.435 6.583 - 6.613: 99.7079% ( 1) 00:15:31.435 6.644 - 6.674: 99.7138% ( 1) 00:15:31.435 6.705 - 6.735: 99.7196% ( 1) 00:15:31.435 6.827 - 6.857: 99.7255% ( 1) 00:15:31.435 6.857 - 6.888: 99.7430% ( 3) 00:15:31.435 6.949 - 6.979: 99.7488% ( 1) 00:15:31.435 7.010 - 7.040: 99.7547% ( 1) 00:15:31.435 7.040 - 7.070: 99.7605% ( 1) 00:15:31.435 7.070 - 7.101: 99.7722% ( 2) 00:15:31.435 7.101 - 7.131: 99.7780% ( 1) 00:15:31.435 7.162 - 7.192: 99.7956% ( 3) 00:15:31.435 7.192 - 7.223: 99.8014% ( 1) 00:15:31.435 7.223 - 7.253: 99.8072% ( 1) 00:15:31.435 7.284 - 7.314: 99.8131% ( 1) 00:15:31.435 7.406 - 7.436: 99.8248% ( 2) 00:15:31.435 7.436 - 7.467: 99.8306% ( 1) 00:15:31.435 7.589 - 7.619: 99.8364% ( 1) 00:15:31.435 7.710 - 7.741: 99.8423% ( 1) 00:15:31.435 7.863 - 7.924: 99.8481% ( 1) 00:15:31.435 7.924 - 7.985: 99.8540% ( 1) 00:15:31.435 8.168 - 8.229: 99.8598% ( 1) 00:15:31.435 9.143 - 9.204: 99.8657% ( 1) 00:15:31.435 9.387 - 9.448: 99.8715% ( 1) 00:15:31.435 11.642 - 11.703: 99.8773% ( 1) 00:15:31.435 13.531 - 13.592: 99.8832% ( 1) 00:15:31.435 13.714 - 13.775: 99.8890% ( 1) 00:15:31.435 13.836 - 13.897: 99.8949% ( 1) 00:15:31.435 14.994 - 15.055: 99.9007% ( 1) 00:15:31.435 18.530 - 18.651: 99.9065% ( 1) 00:15:31.435 18.773 - 18.895: 99.9124% ( 1) 00:15:31.435 19.139 - 19.261: 99.9182% ( 1) 00:15:31.435 19.261 - 19.383: 99.9299% ( 2) 00:15:31.435 3994.575 - 4025.783: 100.0000% ( 12) 00:15:31.435 00:15:31.435 Complete histogram 00:15:31.435 ================== 00:15:31.435 Range in us Cumulative Count 00:15:31.435 1.699 - 1.707: 0.0175% ( 3) 00:15:31.435 1.707 - 1.714: 0.2687% ( 43) 00:15:31.435 1.714 - 1.722: 1.8283% ( 267) 00:15:31.435 1.722 - 1.730: 3.9311% ( 360) 00:15:31.435 1.730 - 1.737: 4.9124% ( 168) 00:15:31.435 1.737 - 1.745: 5.2044% ( 50) 00:15:31.435 1.745 - 1.752: 5.3738% ( 29) 00:15:31.435 1.752 - 1.760: 6.4486% ( 184) 00:15:31.435 1.760 - 1.768: 15.3914% ( 1531) 00:15:31.435 1.768 - 1.775: 41.8984% ( 4538) 00:15:31.435 1.775 - 1.783: 68.4171% ( 4540) 00:15:31.435 1.783 - 1.790: 79.0187% ( 1815) 00:15:31.435 1.790 - 1.798: 82.0327% ( 516) 00:15:31.435 1.798 - 1.806: 84.4100% ( 407) 00:15:31.435 1.806 - 1.813: 88.0549% ( 624) 00:15:31.435 1.813 - 1.821: 92.0269% ( 680) 00:15:31.435 1.821 - 1.829: 94.1297% ( 360) 00:15:31.435 1.829 - 1.836: 95.0000% ( 149) 00:15:31.435 1.836 - 1.844: 95.8294% ( 142) 00:15:31.435 1.844 - 1.851: 97.0269% ( 205) 00:15:31.435 1.851 - 1.859: 97.9322% ( 155) 00:15:31.435 1.859 - 1.867: 98.3586% ( 73) 00:15:31.435 1.867 - 1.874: 98.5748% ( 37) 00:15:31.435 1.874 - 1.882: 98.7033% ( 22) 00:15:31.435 1.882 - 1.890: 98.8376% ( 23) 00:15:31.435 1.890 - 1.897: 98.9428% ( 18) 00:15:31.435 1.897 - 1.905: 99.0187% ( 13) 00:15:31.435 1.905 - 1.912: 99.0596% ( 7) 00:15:31.435 1.912 - 1.920: 99.0713% ( 2) 00:15:31.435 1.920 - 1.928: 99.1005% ( 5) 00:15:31.435 1.928 - 1.935: 99.1063% ( 1) 00:15:31.435 1.935 - 1.943: 99.1121% ( 1) 00:15:31.435 1.943 - 1.950: 99.1238% ( 2) 00:15:31.435 1.950 - 1.966: 99.1589% ( 6) 00:15:31.435 1.966 - 1.981: 99.1822% ( 4) 00:15:31.435 1.981 - 1.996: 99.1881% ( 1) 00:15:31.435 1.996 - 2.011: 99.1998% ( 2) 00:15:31.435 2.133 - 2.149: 99.2056% ( 1) 00:15:31.435 2.225 - 2.240: 99.2231% ( 3) 00:15:31.435 2.286 - 2.301: 99.2290% ( 
1) 00:15:31.435 2.301 - 2.316: 99.2348% ( 1) 00:15:31.435 3.840 - 3.855: 99.2407% ( 1) 00:15:31.435 3.931 - 3.962: 99.2465% ( 1) 00:15:31.435 4.236 - 4.267: 99.2582% ( 2) 00:15:31.435 4.328 - 4.358: 99.2640% ( 1) 00:15:31.435 4.389 - 4.419: 99.2699% ( 1) 00:15:31.435 4.419 - 4.450: 99.2757% ( 1) 00:15:31.435 4.450 - 4.480: 99.2815% ( 1) 00:15:31.435 4.480 - 4.510: 99.2874% ( 1) 00:15:31.435 4.541 - 4.571: 99.2932% ( 1) 00:15:31.435 4.571 - 4.602: 99.3107% ( 3) 00:15:31.435 4.632 - 4.663: 99.3166% ( 1) 00:15:31.435 4.724 - 4.754: 99.3224% ( 1) 00:15:31.435 4.815 - 4.846: 99.3283% ( 1) 00:15:31.435 4.846 - 4.876: 99.3341% ( 1) 00:15:31.435 4.907 - 4.937: 99.3400% ( 1) 00:15:31.435 5.059 - 5.090: 99.3458% ( 1) 00:15:31.435 5.486 - 5.516: 99.3516% ( 1) 00:15:31.435 5.516 - 5.547: 99.3575% ( 1) 00:15:31.435 5.669 - 5.699: 99.3633% ( 1) 00:15:31.435 5.912 - 5.943: 99.3692% ( 1) 00:15:31.435 5.943 - 5.973: 99.3808% ( 2) 00:15:31.435 6.004 - 6.034: 99.3867% ( 1) 00:15:31.435 6.156 - 6.187: 99.3925% ( 1) 00:15:31.435 6.187 - 6.217: 99.3984% ( 1) 00:15:31.435 6.309 - 6.339: 99.4042% ( 1) 00:15:31.435 6.430 - 6.461: 99.4159% ( 2) 00:15:31.435 6.583 - 6.613: 99.4217% ( 1) 00:15:31.435 6.613 - 6.644: 99.4276% ( 1) 00:15:31.435 [2024-07-24 18:10:24.418480] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:31.435 6.949 - 6.979: 99.4334% ( 1) 00:15:31.435 7.284 - 7.314: 99.4393% ( 1) 00:15:31.435 7.558 - 7.589: 99.4451% ( 1) 00:15:31.435 8.716 - 8.777: 99.4509% ( 1) 00:15:31.435 9.204 - 9.265: 99.4568% ( 1) 00:15:31.435 10.118 - 10.179: 99.4626% ( 1) 00:15:31.435 11.337 - 11.398: 99.4685% ( 1) 00:15:31.435 11.825 - 11.886: 99.4743% ( 1) 00:15:31.435 13.349 - 13.410: 99.4801% ( 1) 00:15:31.435 17.432 - 17.554: 99.4860% ( 1) 00:15:31.435 17.676 - 17.798: 99.4918% ( 1) 00:15:31.435 38.766 - 39.010: 99.4977% ( 1) 00:15:31.435 146.286 - 147.261: 99.5035% ( 1) 00:15:31.435 2044.099 - 2059.703: 99.5093% ( 1) 00:15:31.435 3978.971 - 3994.575: 99.5152% ( 1) 00:15:31.435 3994.575 - 4025.783: 99.9942% ( 82) 00:15:31.435 5991.863 - 6023.070: 100.0000% ( 1) 00:15:31.435 00:15:31.435 18:10:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:15:31.435 18:10:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:31.435 18:10:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:15:31.435 18:10:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:15:31.435 18:10:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:31.694 [ 00:15:31.694 { 00:15:31.694 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:31.694 "subtype": "Discovery", 00:15:31.694 "listen_addresses": [], 00:15:31.694 "allow_any_host": true, 00:15:31.694 "hosts": [] 00:15:31.694 }, 00:15:31.694 { 00:15:31.694 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:31.694 "subtype": "NVMe", 00:15:31.694 "listen_addresses": [ 00:15:31.694 { 00:15:31.694 "trtype": "VFIOUSER", 00:15:31.694 "adrfam": "IPv4", 00:15:31.694 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:31.694 "trsvcid": "0" 00:15:31.694 } 00:15:31.694 ], 00:15:31.694 "allow_any_host": true, 
"hosts": [], 00:15:31.694 "serial_number": "SPDK1", 00:15:31.694 "model_number": "SPDK bdev Controller", 00:15:31.694 "max_namespaces": 32, 00:15:31.694 "min_cntlid": 1, 00:15:31.694 "max_cntlid": 65519, 00:15:31.694 "namespaces": [ 00:15:31.694 { 00:15:31.694 "nsid": 1, 00:15:31.694 "bdev_name": "Malloc1", 00:15:31.694 "name": "Malloc1", 00:15:31.694 "nguid": "0DD4689A07CE46F4B1875348BF43883B", 00:15:31.694 "uuid": "0dd4689a-07ce-46f4-b187-5348bf43883b" 00:15:31.694 }, 00:15:31.694 { 00:15:31.694 "nsid": 2, 00:15:31.694 "bdev_name": "Malloc3", 00:15:31.694 "name": "Malloc3", 00:15:31.694 "nguid": "87366851DDB045C39F7478B057399A7B", 00:15:31.694 "uuid": "87366851-ddb0-45c3-9f74-78b057399a7b" 00:15:31.694 } 00:15:31.694 ] 00:15:31.694 }, 00:15:31.694 { 00:15:31.694 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:31.694 "subtype": "NVMe", 00:15:31.694 "listen_addresses": [ 00:15:31.694 { 00:15:31.694 "trtype": "VFIOUSER", 00:15:31.694 "adrfam": "IPv4", 00:15:31.694 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:31.694 "trsvcid": "0" 00:15:31.694 } 00:15:31.694 ], 00:15:31.694 "allow_any_host": true, 00:15:31.694 "hosts": [], 00:15:31.694 "serial_number": "SPDK2", 00:15:31.694 "model_number": "SPDK bdev Controller", 00:15:31.694 "max_namespaces": 32, 00:15:31.694 "min_cntlid": 1, 00:15:31.694 "max_cntlid": 65519, 00:15:31.694 "namespaces": [ 00:15:31.694 { 00:15:31.694 "nsid": 1, 00:15:31.694 "bdev_name": "Malloc2", 00:15:31.694 "name": "Malloc2", 00:15:31.694 "nguid": "1EC5F1FAFF924C538729FF77E1F45325", 00:15:31.694 "uuid": "1ec5f1fa-ff92-4c53-8729-ff77e1f45325" 00:15:31.694 } 00:15:31.694 ] 00:15:31.694 } 00:15:31.694 ] 00:15:31.694 18:10:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:31.694 18:10:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3390408 00:15:31.694 18:10:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:31.694 18:10:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:15:31.694 18:10:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:15:31.694 18:10:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:31.694 18:10:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:31.694 18:10:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:15:31.694 18:10:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:31.694 18:10:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:15:31.694 EAL: No free 2048 kB hugepages reported on node 1 00:15:31.953 [2024-07-24 18:10:24.794925] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:31.953 Malloc4 00:15:31.953 18:10:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:15:31.953 [2024-07-24 18:10:25.023630] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:32.212 18:10:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:32.212 Asynchronous Event Request test 00:15:32.212 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:32.212 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:32.212 Registering asynchronous event callbacks... 00:15:32.212 Starting namespace attribute notice tests for all controllers... 00:15:32.212 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:32.212 aer_cb - Changed Namespace 00:15:32.212 Cleaning up... 00:15:32.212 [ 00:15:32.212 { 00:15:32.212 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:32.212 "subtype": "Discovery", 00:15:32.212 "listen_addresses": [], 00:15:32.212 "allow_any_host": true, 00:15:32.212 "hosts": [] 00:15:32.212 }, 00:15:32.212 { 00:15:32.212 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:32.212 "subtype": "NVMe", 00:15:32.212 "listen_addresses": [ 00:15:32.212 { 00:15:32.212 "trtype": "VFIOUSER", 00:15:32.212 "adrfam": "IPv4", 00:15:32.212 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:32.212 "trsvcid": "0" 00:15:32.212 } 00:15:32.212 ], 00:15:32.212 "allow_any_host": true, 00:15:32.212 "hosts": [], 00:15:32.212 "serial_number": "SPDK1", 00:15:32.212 "model_number": "SPDK bdev Controller", 00:15:32.212 "max_namespaces": 32, 00:15:32.212 "min_cntlid": 1, 00:15:32.212 "max_cntlid": 65519, 00:15:32.212 "namespaces": [ 00:15:32.212 { 00:15:32.212 "nsid": 1, 00:15:32.212 "bdev_name": "Malloc1", 00:15:32.212 "name": "Malloc1", 00:15:32.212 "nguid": "0DD4689A07CE46F4B1875348BF43883B", 00:15:32.212 "uuid": "0dd4689a-07ce-46f4-b187-5348bf43883b" 00:15:32.212 }, 00:15:32.212 { 00:15:32.212 "nsid": 2, 00:15:32.212 "bdev_name": "Malloc3", 00:15:32.212 "name": "Malloc3", 00:15:32.212 "nguid": "87366851DDB045C39F7478B057399A7B", 00:15:32.212 "uuid": "87366851-ddb0-45c3-9f74-78b057399a7b" 00:15:32.212 } 00:15:32.212 ] 00:15:32.212 }, 00:15:32.212 { 00:15:32.212 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:32.212 "subtype": "NVMe", 00:15:32.212 "listen_addresses": [ 00:15:32.212 { 00:15:32.212 "trtype": "VFIOUSER", 00:15:32.212 "adrfam": "IPv4", 00:15:32.212 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:32.212 "trsvcid": "0" 00:15:32.212 } 00:15:32.212 ], 00:15:32.212 "allow_any_host": true, 00:15:32.212 "hosts": [], 00:15:32.212 
"serial_number": "SPDK2", 00:15:32.212 "model_number": "SPDK bdev Controller", 00:15:32.212 "max_namespaces": 32, 00:15:32.212 "min_cntlid": 1, 00:15:32.212 "max_cntlid": 65519, 00:15:32.212 "namespaces": [ 00:15:32.212 { 00:15:32.212 "nsid": 1, 00:15:32.212 "bdev_name": "Malloc2", 00:15:32.212 "name": "Malloc2", 00:15:32.212 "nguid": "1EC5F1FAFF924C538729FF77E1F45325", 00:15:32.212 "uuid": "1ec5f1fa-ff92-4c53-8729-ff77e1f45325" 00:15:32.212 }, 00:15:32.212 { 00:15:32.212 "nsid": 2, 00:15:32.212 "bdev_name": "Malloc4", 00:15:32.212 "name": "Malloc4", 00:15:32.212 "nguid": "33FEC7D90A33469CB11C919E6EDBFD49", 00:15:32.212 "uuid": "33fec7d9-0a33-469c-b11c-919e6edbfd49" 00:15:32.212 } 00:15:32.212 ] 00:15:32.212 } 00:15:32.212 ] 00:15:32.212 18:10:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3390408 00:15:32.212 18:10:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:15:32.212 18:10:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3382704 00:15:32.212 18:10:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 3382704 ']' 00:15:32.212 18:10:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 3382704 00:15:32.212 18:10:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:15:32.212 18:10:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:32.212 18:10:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3382704 00:15:32.212 18:10:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:32.212 18:10:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:32.212 18:10:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3382704' 00:15:32.212 killing process with pid 3382704 00:15:32.212 18:10:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 3382704 00:15:32.212 18:10:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 3382704 00:15:32.472 18:10:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:32.472 18:10:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:32.472 18:10:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:15:32.472 18:10:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:15:32.472 18:10:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:15:32.472 18:10:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3390570 00:15:32.472 18:10:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3390570' 00:15:32.472 Process pid: 3390570 00:15:32.472 18:10:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:15:32.472 18:10:25 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:32.472 18:10:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3390570 00:15:32.472 18:10:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 3390570 ']' 00:15:32.472 18:10:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:32.472 18:10:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:32.472 18:10:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:32.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:32.472 18:10:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:32.472 18:10:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:32.747 [2024-07-24 18:10:25.589831] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:15:32.747 [2024-07-24 18:10:25.590699] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 00:15:32.747 [2024-07-24 18:10:25.590736] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:32.747 EAL: No free 2048 kB hugepages reported on node 1 00:15:32.747 [2024-07-24 18:10:25.646660] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:32.747 [2024-07-24 18:10:25.718551] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:32.747 [2024-07-24 18:10:25.718588] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:32.747 [2024-07-24 18:10:25.718595] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:32.747 [2024-07-24 18:10:25.718600] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:32.747 [2024-07-24 18:10:25.718605] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:32.747 [2024-07-24 18:10:25.718654] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:32.747 [2024-07-24 18:10:25.718773] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:32.747 [2024-07-24 18:10:25.718836] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:32.747 [2024-07-24 18:10:25.718837] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:32.747 [2024-07-24 18:10:25.800303] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:15:32.747 [2024-07-24 18:10:25.800425] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:15:32.747 [2024-07-24 18:10:25.800702] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:15:32.747 [2024-07-24 18:10:25.801052] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:15:32.747 [2024-07-24 18:10:25.801326] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:15:33.314 18:10:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:33.314 18:10:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:15:33.314 18:10:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:34.691 18:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:15:34.691 18:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:34.691 18:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:34.691 18:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:34.691 18:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:34.691 18:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:34.691 Malloc1 00:15:34.949 18:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:34.949 18:10:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:35.208 18:10:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:35.467 18:10:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:35.467 18:10:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:35.467 18:10:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:35.467 Malloc2 00:15:35.467 18:10:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:35.726 18:10:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:35.984 18:10:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 
-s 0 00:15:35.984 18:10:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:15:35.984 18:10:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3390570 00:15:35.984 18:10:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 3390570 ']' 00:15:35.984 18:10:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 3390570 00:15:35.984 18:10:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:15:35.984 18:10:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:35.984 18:10:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3390570 00:15:36.244 18:10:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:36.244 18:10:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:36.244 18:10:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3390570' 00:15:36.244 killing process with pid 3390570 00:15:36.244 18:10:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 3390570 00:15:36.244 18:10:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 3390570 00:15:36.244 18:10:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:36.244 18:10:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:36.244 00:15:36.244 real 0m51.223s 00:15:36.244 user 3m22.883s 00:15:36.244 sys 0m3.493s 00:15:36.244 18:10:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:36.244 18:10:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:36.244 ************************************ 00:15:36.244 END TEST nvmf_vfio_user 00:15:36.244 ************************************ 00:15:36.503 18:10:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:36.503 18:10:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:36.503 18:10:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:36.503 18:10:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:36.503 ************************************ 00:15:36.503 START TEST nvmf_vfio_user_nvme_compliance 00:15:36.503 ************************************ 00:15:36.503 18:10:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:36.503 * Looking for test storage... 
00:15:36.503 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:15:36.503 18:10:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:36.503 18:10:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:15:36.503 18:10:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:36.503 18:10:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:36.503 18:10:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:36.503 18:10:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:36.503 18:10:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:36.503 18:10:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:36.503 18:10:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:36.503 18:10:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:36.503 18:10:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:36.503 18:10:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:36.503 18:10:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:36.503 18:10:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:15:36.503 18:10:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:36.503 18:10:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:36.503 18:10:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:36.503 18:10:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:36.503 18:10:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:36.503 18:10:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:36.503 18:10:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:36.503 18:10:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:36.503 18:10:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:36.503 18:10:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:36.503 18:10:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:36.503 18:10:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:15:36.503 18:10:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:36.503 18:10:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:15:36.503 18:10:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:36.503 18:10:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:36.503 18:10:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:36.503 18:10:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:36.503 18:10:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:15:36.503 18:10:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:36.504 18:10:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:36.504 18:10:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:36.504 18:10:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:36.504 18:10:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:36.504 18:10:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:15:36.504 18:10:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:15:36.504 18:10:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:15:36.504 18:10:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=3391334 00:15:36.504 18:10:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 3391334' 00:15:36.504 Process pid: 3391334 00:15:36.504 18:10:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:36.504 18:10:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 3391334 00:15:36.504 18:10:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:36.504 18:10:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # '[' -z 3391334 ']' 00:15:36.504 18:10:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:36.504 18:10:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:36.504 18:10:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:36.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:36.504 18:10:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:36.504 18:10:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:36.504 [2024-07-24 18:10:29.532878] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 
00:15:36.504 [2024-07-24 18:10:29.532926] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:36.504 EAL: No free 2048 kB hugepages reported on node 1 00:15:36.763 [2024-07-24 18:10:29.588170] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:36.763 [2024-07-24 18:10:29.660375] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:36.763 [2024-07-24 18:10:29.660418] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:36.763 [2024-07-24 18:10:29.660425] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:36.763 [2024-07-24 18:10:29.660431] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:36.763 [2024-07-24 18:10:29.660436] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:36.763 [2024-07-24 18:10:29.660476] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:36.763 [2024-07-24 18:10:29.660577] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:36.763 [2024-07-24 18:10:29.660578] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:37.330 18:10:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:37.331 18:10:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # return 0 00:15:37.331 18:10:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:15:38.267 18:10:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:38.267 18:10:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:15:38.267 18:10:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:38.267 18:10:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.267 18:10:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:38.525 18:10:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.525 18:10:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:15:38.525 18:10:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:38.525 18:10:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.525 18:10:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:38.525 malloc0 00:15:38.525 18:10:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.525 18:10:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 
32 00:15:38.525 18:10:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.525 18:10:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:38.525 18:10:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.525 18:10:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:38.525 18:10:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.525 18:10:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:38.525 18:10:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.525 18:10:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:38.525 18:10:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.525 18:10:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:38.526 18:10:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.526 18:10:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:15:38.526 EAL: No free 2048 kB hugepages reported on node 1 00:15:38.526 00:15:38.526 00:15:38.526 CUnit - A unit testing framework for C - Version 2.1-3 00:15:38.526 http://cunit.sourceforge.net/ 00:15:38.526 00:15:38.526 00:15:38.526 Suite: nvme_compliance 00:15:38.526 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-24 18:10:31.571501] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:38.526 [2024-07-24 18:10:31.572842] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:15:38.526 [2024-07-24 18:10:31.572857] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:15:38.526 [2024-07-24 18:10:31.572863] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:15:38.526 [2024-07-24 18:10:31.574519] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:38.526 passed 00:15:38.785 Test: admin_identify_ctrlr_verify_fused ...[2024-07-24 18:10:31.656118] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:38.785 [2024-07-24 18:10:31.659136] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:38.785 passed 00:15:38.785 Test: admin_identify_ns ...[2024-07-24 18:10:31.738783] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:38.785 [2024-07-24 18:10:31.799503] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:15:38.785 [2024-07-24 18:10:31.807501] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:15:38.785 [2024-07-24 
18:10:31.828613] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:38.785 passed 00:15:39.044 Test: admin_get_features_mandatory_features ...[2024-07-24 18:10:31.905568] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:39.044 [2024-07-24 18:10:31.908589] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:39.044 passed 00:15:39.044 Test: admin_get_features_optional_features ...[2024-07-24 18:10:31.984113] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:39.044 [2024-07-24 18:10:31.987140] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:39.044 passed 00:15:39.044 Test: admin_set_features_number_of_queues ...[2024-07-24 18:10:32.065232] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:39.303 [2024-07-24 18:10:32.173595] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:39.303 passed 00:15:39.303 Test: admin_get_log_page_mandatory_logs ...[2024-07-24 18:10:32.247408] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:39.303 [2024-07-24 18:10:32.250427] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:39.303 passed 00:15:39.303 Test: admin_get_log_page_with_lpo ...[2024-07-24 18:10:32.328139] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:39.561 [2024-07-24 18:10:32.396500] ctrlr.c:2688:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:15:39.561 [2024-07-24 18:10:32.409574] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:39.561 passed 00:15:39.561 Test: fabric_property_get ...[2024-07-24 18:10:32.487401] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:39.561 [2024-07-24 18:10:32.488636] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:15:39.561 [2024-07-24 18:10:32.490421] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:39.561 passed 00:15:39.561 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-24 18:10:32.567970] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:39.561 [2024-07-24 18:10:32.569189] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:15:39.561 [2024-07-24 18:10:32.570990] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:39.561 passed 00:15:39.819 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-24 18:10:32.645857] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:39.819 [2024-07-24 18:10:32.736499] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:39.819 [2024-07-24 18:10:32.752498] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:39.819 [2024-07-24 18:10:32.757584] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:39.819 passed 00:15:39.819 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-24 18:10:32.835421] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:39.819 [2024-07-24 18:10:32.836668] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 
00:15:39.819 [2024-07-24 18:10:32.838445] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:39.819 passed 00:15:40.078 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-24 18:10:32.915245] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:40.078 [2024-07-24 18:10:32.991509] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:40.078 [2024-07-24 18:10:33.015508] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:40.078 [2024-07-24 18:10:33.020577] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:40.078 passed 00:15:40.078 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-24 18:10:33.099449] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:40.078 [2024-07-24 18:10:33.100682] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:15:40.078 [2024-07-24 18:10:33.100703] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:15:40.078 [2024-07-24 18:10:33.102475] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:40.078 passed 00:15:40.337 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-24 18:10:33.179265] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:40.337 [2024-07-24 18:10:33.271500] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:15:40.337 [2024-07-24 18:10:33.279499] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:15:40.337 [2024-07-24 18:10:33.287499] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:15:40.337 [2024-07-24 18:10:33.295499] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:15:40.337 [2024-07-24 18:10:33.322598] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:40.337 passed 00:15:40.337 Test: admin_create_io_sq_verify_pc ...[2024-07-24 18:10:33.395332] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:40.337 [2024-07-24 18:10:33.414503] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:15:40.594 [2024-07-24 18:10:33.432533] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:40.594 passed 00:15:40.594 Test: admin_create_io_qp_max_qps ...[2024-07-24 18:10:33.507055] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:41.530 [2024-07-24 18:10:34.607500] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:15:42.120 [2024-07-24 18:10:34.999033] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:42.120 passed 00:15:42.120 Test: admin_create_io_sq_shared_cq ...[2024-07-24 18:10:35.080001] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:42.384 [2024-07-24 18:10:35.211498] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:42.384 [2024-07-24 18:10:35.248579] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:42.384 passed 00:15:42.384 00:15:42.384 Run Summary: Type Total Ran Passed Failed Inactive 00:15:42.384 
suites 1 1 n/a 0 0 00:15:42.384 tests 18 18 18 0 0 00:15:42.384 asserts 360 360 360 0 n/a 00:15:42.384 00:15:42.384 Elapsed time = 1.511 seconds 00:15:42.384 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 3391334 00:15:42.384 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # '[' -z 3391334 ']' 00:15:42.384 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # kill -0 3391334 00:15:42.384 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # uname 00:15:42.384 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:42.384 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3391334 00:15:42.384 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:42.384 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:42.384 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3391334' 00:15:42.384 killing process with pid 3391334 00:15:42.384 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@969 -- # kill 3391334 00:15:42.384 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@974 -- # wait 3391334 00:15:42.642 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:15:42.642 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:15:42.642 00:15:42.642 real 0m6.172s 00:15:42.642 user 0m17.616s 00:15:42.642 sys 0m0.466s 00:15:42.642 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:42.642 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:42.642 ************************************ 00:15:42.642 END TEST nvmf_vfio_user_nvme_compliance 00:15:42.642 ************************************ 00:15:42.642 18:10:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:42.642 18:10:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:42.642 18:10:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:42.642 18:10:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:42.642 ************************************ 00:15:42.642 START TEST nvmf_vfio_user_fuzz 00:15:42.642 ************************************ 00:15:42.643 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:42.643 * Looking for test storage... 
00:15:42.643 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:42.643 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:42.643 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:15:42.643 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:42.643 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:42.643 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:42.643 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:42.643 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:42.643 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:42.643 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:42.643 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:42.643 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:42.643 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:42.643 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:15:42.643 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:15:42.643 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:42.643 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:42.643 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:42.643 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:42.643 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:42.643 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:42.643 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:42.643 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:42.643 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.643 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.643 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.643 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:15:42.643 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:42.643 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:15:42.643 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:42.643 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:42.643 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:42.643 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:42.643 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:42.643 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:15:42.643 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:42.643 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:42.902 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:42.902 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:42.902 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:42.902 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:15:42.902 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:42.902 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:42.902 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:15:42.902 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=3392335 00:15:42.902 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 3392335' 00:15:42.902 Process pid: 3392335 00:15:42.902 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:42.902 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 3392335 00:15:42.902 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # '[' -z 3392335 ']' 00:15:42.902 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:42.902 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:42.902 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:42.902 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:42.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
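The waitforlisten step above is what gates the test: it polls until the freshly launched nvmf_tgt answers on the RPC Unix socket named in the trace (/var/tmp/spdk.sock, with max_retries=100) before any rpc_cmd is issued. A minimal sketch of that pattern follows; the real helper lives in common/autotest_common.sh, and the use of rpc_get_methods as the liveness probe is an assumption of this sketch, not taken from the trace.

    # poll the SPDK RPC socket until the target answers (illustrative sketch)
    rpc_sock=/var/tmp/spdk.sock
    for _ in $(seq 1 100); do                     # mirrors max_retries=100 above
        if scripts/rpc.py -s "$rpc_sock" rpc_get_methods >/dev/null 2>&1; then
            break                                 # target is up and serving RPCs
        fi
        sleep 0.5                                 # retry interval is also assumed
    done
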
00:15:42.902 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:42.902 18:10:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:43.838 18:10:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:43.838 18:10:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # return 0 00:15:43.838 18:10:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:15:44.775 18:10:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:44.775 18:10:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.775 18:10:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:44.775 18:10:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.775 18:10:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:15:44.775 18:10:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:44.775 18:10:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.775 18:10:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:44.775 malloc0 00:15:44.775 18:10:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.775 18:10:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:15:44.775 18:10:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.775 18:10:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:44.775 18:10:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.775 18:10:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:44.775 18:10:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.775 18:10:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:44.775 18:10:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.775 18:10:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:44.775 18:10:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.775 18:10:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:44.775 18:10:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.775 18:10:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
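For readers reconstructing the setup outside the harness: the rpc_cmd sequence just traced (transport, backing bdev, subsystem, namespace, listener) maps one-to-one onto plain rpc.py calls. A standalone sketch, assuming the target's default RPC socket; every RPC name and argument below is taken verbatim from the trace:

    # rebuild the vfio-user target configuration traced above (sketch)
    rpc=scripts/rpc.py
    mkdir -p /var/run/vfio-user
    $rpc nvmf_create_transport -t VFIOUSER
    $rpc bdev_malloc_create 64 512 -b malloc0      # 64 MiB malloc bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
    $rpc nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    $rpc nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0

With that in place, a vfio-user initiator such as the nvme_fuzz app started next can attach via traddr /var/run/vfio-user.
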
00:15:44.775 18:10:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:16:16.846 Fuzzing completed. Shutting down the fuzz application 00:16:16.846 00:16:16.846 Dumping successful admin opcodes: 00:16:16.846 8, 9, 10, 24, 00:16:16.846 Dumping successful io opcodes: 00:16:16.846 0, 00:16:16.846 NS: 0x200003a1ef00 I/O qp, Total commands completed: 1074493, total successful commands: 4236, random_seed: 1075126144 00:16:16.846 NS: 0x200003a1ef00 admin qp, Total commands completed: 266085, total successful commands: 2142, random_seed: 3894001856 00:16:16.846 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:16:16.847 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.847 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:16.847 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.847 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 3392335 00:16:16.847 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # '[' -z 3392335 ']' 00:16:16.847 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # kill -0 3392335 00:16:16.847 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # uname 00:16:16.847 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:16.847 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3392335 00:16:16.847 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:16.847 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:16.847 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3392335' 00:16:16.847 killing process with pid 3392335 00:16:16.847 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@969 -- # kill 3392335 00:16:16.847 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@974 -- # wait 3392335 00:16:16.847 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:16:16.847 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:16:16.847 00:16:16.847 real 0m32.756s 00:16:16.847 user 0m32.096s 00:16:16.847 sys 0m29.694s 00:16:16.847 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:16.847 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:16.847 
************************************ 00:16:16.847 END TEST nvmf_vfio_user_fuzz 00:16:16.847 ************************************ 00:16:16.847 18:11:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:16.847 18:11:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:16.847 18:11:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:16.847 18:11:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:16.847 ************************************ 00:16:16.847 START TEST nvmf_auth_target 00:16:16.847 ************************************ 00:16:16.847 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:16.847 * Looking for test storage... 00:16:16.847 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:16.847 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:16.847 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:16:16.847 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:16.847 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:16.847 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:16.847 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:16.847 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:16.847 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:16.847 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:16.847 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:16.847 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:16.847 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:16.847 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:16.847 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:16:16.847 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:16.847 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:16.847 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:16.847 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:16.847 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:16.847 18:11:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:16.847 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:16.847 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:16.847 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.847 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.847 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.847 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:16:16.847 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.847 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:16:16.847 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:16.847 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 
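One detail worth noticing in the two export.sh traces above: every source prepends the protoc, Go and golangci directories again, which is why PATH carries the same triple once per nested test suite by this point in the run. Lookup stops at the first match, so this is harmless, but an idempotent guard would keep the variable readable; the snippet below is purely illustrative and is not the repository's code:

    # prepend the toolchain dirs only if they are not already on PATH (illustrative)
    case ":$PATH:" in
        *:/opt/go/1.21.1/bin:*) ;;    # already present, nothing to do
        *) PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:$PATH ;;
    esac
    export PATH
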
00:16:16.847 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:16.847 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:16.847 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:16.847 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:16.847 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:16.847 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:16.847 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:16:16.847 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:16.847 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:16:16.847 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:16.847 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:16:16.847 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:16:16.847 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:16:16.847 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:16:16.847 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:16.847 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:16.847 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:16.847 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:16.847 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:16.847 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:16.847 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:16.847 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:16.847 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:16.847 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:16.847 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:16:16.847 18:11:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.050 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:21.050 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:16:21.050 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:21.050 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@292 -- # pci_net_devs=() 00:16:21.050 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:21.050 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:21.050 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:21.050 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:16:21.050 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:21.050 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:16:21.050 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:16:21.050 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:16:21.050 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:16:21.050 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:16:21.050 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:16:21.050 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:21.050 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:21.050 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:21.050 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:21.050 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:21.050 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:21.050 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:21.050 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:21.050 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:21.050 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:21.050 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:21.050 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:21.050 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:21.050 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:21.050 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:21.050 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:21.050 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:21.050 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:21.050 18:11:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:21.050 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:21.050 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:21.050 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:21.050 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:21.050 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:21.051 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:21.051 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:21.051 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:21.051 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:21.051 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:21.051 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:21.051 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:21.051 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:21.051 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:21.051 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:21.051 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:21.051 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:21.051 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:21.051 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:21.051 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:21.051 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:21.051 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:21.051 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:21.051 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:21.051 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:16:21.051 Found net devices under 0000:86:00.0: cvl_0_0 00:16:21.051 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:21.051 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:21.051 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:21.051 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:21.051 18:11:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:21.051 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:21.051 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:21.051 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:21.051 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:21.051 Found net devices under 0000:86:00.1: cvl_0_1 00:16:21.051 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:21.051 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:21.051 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:16:21.051 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:21.051 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:21.051 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:21.051 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:21.051 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:21.051 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:21.051 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:21.051 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:21.051 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:21.051 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:21.051 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:21.051 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:21.051 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:21.051 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:21.051 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:21.051 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:21.051 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:21.051 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:21.051 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:21.051 18:11:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:21.051 18:11:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:21.051 18:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:21.051 18:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:21.051 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:21.051 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:16:21.051 00:16:21.051 --- 10.0.0.2 ping statistics --- 00:16:21.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:21.051 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:16:21.051 18:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:21.051 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:21.051 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:16:21.051 00:16:21.051 --- 10.0.0.1 ping statistics --- 00:16:21.051 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:21.051 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:16:21.051 18:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:21.051 18:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:16:21.051 18:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:21.051 18:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:21.051 18:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:21.051 18:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:21.051 18:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:21.051 18:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:21.051 18:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:21.051 18:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:16:21.051 18:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:21.051 18:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:21.051 18:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.051 18:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=3400825 00:16:21.051 18:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 3400825 00:16:21.051 18:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:16:21.051 18:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 3400825 ']' 00:16:21.051 18:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:21.051 18:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:21.051 18:11:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:21.051 18:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:21.051 18:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.987 18:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:21.987 18:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:16:21.987 18:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:21.987 18:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:21.987 18:11:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.987 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:21.987 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=3400974 00:16:21.987 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:16:21.987 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:21.987 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:16:21.987 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:21.987 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:21.987 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:21.987 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:16:21.987 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:16:21.987 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:21.987 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=a4f0d7cc473c9e6d4dca66bda8f77060f567c7346f1f1422 00:16:21.987 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:16:21.987 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.5fz 00:16:21.987 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key a4f0d7cc473c9e6d4dca66bda8f77060f567c7346f1f1422 0 00:16:21.987 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 a4f0d7cc473c9e6d4dca66bda8f77060f567c7346f1f1422 0 00:16:21.987 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:21.987 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:21.987 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=a4f0d7cc473c9e6d4dca66bda8f77060f567c7346f1f1422 00:16:21.987 18:11:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:16:21.987 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:21.987 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.5fz 00:16:21.987 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.5fz 00:16:22.246 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.5fz 00:16:22.246 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:16:22.246 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:22.246 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:22.246 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:22.246 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:16:22.246 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:16:22.246 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:22.246 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=2ae79daacef3259dce001350e543ed5794cffd57765051fffa143f3e0dcee8a2 00:16:22.246 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:16:22.246 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.ns5 00:16:22.246 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 2ae79daacef3259dce001350e543ed5794cffd57765051fffa143f3e0dcee8a2 3 00:16:22.246 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 2ae79daacef3259dce001350e543ed5794cffd57765051fffa143f3e0dcee8a2 3 00:16:22.246 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:22.246 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:22.246 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=2ae79daacef3259dce001350e543ed5794cffd57765051fffa143f3e0dcee8a2 00:16:22.246 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:16:22.246 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:22.246 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.ns5 00:16:22.246 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.ns5 00:16:22.246 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.ns5 00:16:22.246 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:16:22.246 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:22.246 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:22.246 18:11:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:22.246 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:16:22.246 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:16:22.246 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:22.246 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=96319b97d67b3cdae6ba87a2664f83ba 00:16:22.246 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:16:22.246 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.CSR 00:16:22.246 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 96319b97d67b3cdae6ba87a2664f83ba 1 00:16:22.246 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 96319b97d67b3cdae6ba87a2664f83ba 1 00:16:22.246 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:22.246 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:22.246 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=96319b97d67b3cdae6ba87a2664f83ba 00:16:22.246 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:16:22.247 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:22.247 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.CSR 00:16:22.247 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.CSR 00:16:22.247 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.CSR 00:16:22.247 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:16:22.247 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:22.247 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:22.247 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:22.247 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:16:22.247 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:16:22.247 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:22.247 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=efe3cacb27fac70d3432e32c596dcfcfc26c1821bef6e6b2 00:16:22.247 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:16:22.247 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.KR5 00:16:22.247 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key efe3cacb27fac70d3432e32c596dcfcfc26c1821bef6e6b2 2 00:16:22.247 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 
efe3cacb27fac70d3432e32c596dcfcfc26c1821bef6e6b2 2 00:16:22.247 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:22.247 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:22.247 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=efe3cacb27fac70d3432e32c596dcfcfc26c1821bef6e6b2 00:16:22.247 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:16:22.247 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:22.247 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.KR5 00:16:22.247 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.KR5 00:16:22.247 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.KR5 00:16:22.247 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:16:22.247 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:22.247 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:22.247 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:22.247 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:16:22.247 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:16:22.247 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:22.247 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=b06c78f7ce2b348f2a0d7fda2fdd22b02bc1cbd8da7c0e8c 00:16:22.247 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:16:22.247 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.uJ0 00:16:22.247 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key b06c78f7ce2b348f2a0d7fda2fdd22b02bc1cbd8da7c0e8c 2 00:16:22.247 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 b06c78f7ce2b348f2a0d7fda2fdd22b02bc1cbd8da7c0e8c 2 00:16:22.247 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:22.247 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:22.247 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=b06c78f7ce2b348f2a0d7fda2fdd22b02bc1cbd8da7c0e8c 00:16:22.247 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:16:22.247 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:22.247 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.uJ0 00:16:22.247 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.uJ0 00:16:22.247 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.uJ0 00:16:22.247 18:11:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:16:22.247 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:22.247 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:22.247 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:22.247 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:16:22.247 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:16:22.247 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:22.247 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=6f2329dae84a2fb1da349e7557eb03bf 00:16:22.247 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:16:22.247 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.eH8 00:16:22.247 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 6f2329dae84a2fb1da349e7557eb03bf 1 00:16:22.506 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 6f2329dae84a2fb1da349e7557eb03bf 1 00:16:22.506 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:22.506 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:22.506 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=6f2329dae84a2fb1da349e7557eb03bf 00:16:22.506 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:16:22.506 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:22.506 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.eH8 00:16:22.506 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.eH8 00:16:22.506 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.eH8 00:16:22.506 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:16:22.506 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:22.506 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:22.506 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:22.506 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:16:22.506 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:16:22.506 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:22.506 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=47da6a3918b1a7227567977fc0fb3c1dbf2b7db7840bb94a3b7ab4c0ace95f4b 00:16:22.506 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:16:22.506 
18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.kqP 00:16:22.506 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 47da6a3918b1a7227567977fc0fb3c1dbf2b7db7840bb94a3b7ab4c0ace95f4b 3 00:16:22.506 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 47da6a3918b1a7227567977fc0fb3c1dbf2b7db7840bb94a3b7ab4c0ace95f4b 3 00:16:22.506 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:22.506 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:22.506 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=47da6a3918b1a7227567977fc0fb3c1dbf2b7db7840bb94a3b7ab4c0ace95f4b 00:16:22.506 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:16:22.506 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:22.506 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.kqP 00:16:22.506 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.kqP 00:16:22.506 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.kqP 00:16:22.506 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:16:22.506 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 3400825 00:16:22.506 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 3400825 ']' 00:16:22.506 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:22.506 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:22.506 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:22.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:22.506 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:22.506 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.765 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:22.765 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:16:22.765 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 3400974 /var/tmp/host.sock 00:16:22.765 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 3400974 ']' 00:16:22.765 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:16:22.765 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:22.765 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
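Every gen_dhchap_key call traced above follows the same recipe: read len/2 random bytes with xxd and keep the ASCII hex string as the key material, then format_dhchap_key wraps it into the DHHC-1:<digest>:<base64>: secret form that appears in the nvme connect commands later in this log (digest 0=null, 1=sha256, 2=sha384, 3=sha512). A sketch of that formatting in Python, assuming the inline "python -" step appends a CRC32 of the key in little-endian byte order before base64-encoding, which is consistent with the 4-byte trailer visible in the printed secrets:

import base64
import os
import zlib

def gen_dhchap_key(digest_id, length):
    # xxd -p -c0 -l <length/2> /dev/urandom: the key material is the
    # ASCII hex string itself (length chars), not the raw random bytes.
    key = os.urandom(length // 2).hex().encode()
    # Assumption: CRC32 of the key, appended little-endian, then base64.
    crc = zlib.crc32(key).to_bytes(4, "little")
    b64 = base64.b64encode(key + crc).decode()
    # digest_id: 0=null, 1=sha256, 2=sha384, 3=sha512
    return f"DHHC-1:{digest_id:02d}:{b64}:"

print(gen_dhchap_key(0, 48))  # same shape as the key0 secret used below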
00:16:22.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:16:22.765 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:22.765 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.765 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:22.765 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:16:22.765 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:16:22.765 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.765 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.765 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.765 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:22.765 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.5fz 00:16:22.765 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.765 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.765 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.765 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.5fz 00:16:22.765 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.5fz 00:16:23.023 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.ns5 ]] 00:16:23.023 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ns5 00:16:23.023 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.023 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.023 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.023 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ns5 00:16:23.023 18:11:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ns5 00:16:23.281 18:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:23.281 18:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.CSR 00:16:23.281 18:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.281 18:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.281 18:11:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.281 18:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.CSR 00:16:23.281 18:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.CSR 00:16:23.281 18:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.KR5 ]] 00:16:23.281 18:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.KR5 00:16:23.281 18:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.281 18:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.281 18:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.281 18:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.KR5 00:16:23.281 18:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.KR5 00:16:23.539 18:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:23.539 18:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.uJ0 00:16:23.539 18:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.539 18:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.539 18:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.539 18:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.uJ0 00:16:23.539 18:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.uJ0 00:16:23.798 18:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.eH8 ]] 00:16:23.798 18:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.eH8 00:16:23.798 18:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.798 18:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.798 18:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.798 18:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.eH8 00:16:23.798 18:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.eH8 00:16:23.798 18:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
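Each generated key file is registered twice under the same name: once on the target's RPC socket (/var/tmp/spdk.sock, via rpc_cmd) and once on the host's (/var/tmp/host.sock, via hostrpc), so both sides can refer to key<i>/ckey<i> by keyring name during authentication. A sketch of that registration loop; the rpc.py path is an assumption, and the file names are the ones generated above:

import subprocess

RPC = "scripts/rpc.py"  # SPDK's rpc.py; adjust to your checkout
SOCKETS = ["/var/tmp/spdk.sock", "/var/tmp/host.sock"]  # target, host

def add_key(name, path):
    # Mirror the rpc_cmd/hostrpc pair in auth.sh: register the key file
    # under the same keyring name on both RPC servers.
    for sock in SOCKETS:
        subprocess.run([RPC, "-s", sock, "keyring_file_add_key", name, path],
                       check=True)

add_key("key0", "/tmp/spdk.key-null.5fz")
add_key("ckey0", "/tmp/spdk.key-sha512.ns5")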
target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:23.798 18:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.kqP 00:16:23.798 18:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.798 18:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.798 18:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.798 18:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.kqP 00:16:23.798 18:11:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.kqP 00:16:24.056 18:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:16:24.056 18:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:16:24.056 18:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:24.056 18:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:24.056 18:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:24.056 18:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:24.315 18:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:16:24.315 18:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:24.315 18:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:24.315 18:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:24.315 18:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:24.315 18:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:24.315 18:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:24.315 18:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.315 18:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.315 18:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.315 18:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:24.315 18:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:24.574 00:16:24.574 18:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:24.574 18:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:24.574 18:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:24.574 18:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:24.574 18:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:24.574 18:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.574 18:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.574 18:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.574 18:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:24.574 { 00:16:24.574 "cntlid": 1, 00:16:24.574 "qid": 0, 00:16:24.574 "state": "enabled", 00:16:24.574 "thread": "nvmf_tgt_poll_group_000", 00:16:24.574 "listen_address": { 00:16:24.574 "trtype": "TCP", 00:16:24.574 "adrfam": "IPv4", 00:16:24.574 "traddr": "10.0.0.2", 00:16:24.574 "trsvcid": "4420" 00:16:24.574 }, 00:16:24.574 "peer_address": { 00:16:24.574 "trtype": "TCP", 00:16:24.574 "adrfam": "IPv4", 00:16:24.574 "traddr": "10.0.0.1", 00:16:24.574 "trsvcid": "45812" 00:16:24.574 }, 00:16:24.574 "auth": { 00:16:24.574 "state": "completed", 00:16:24.574 "digest": "sha256", 00:16:24.574 "dhgroup": "null" 00:16:24.574 } 00:16:24.574 } 00:16:24.574 ]' 00:16:24.574 18:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:24.833 18:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:24.833 18:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:24.833 18:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:24.833 18:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:24.834 18:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:24.834 18:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:24.834 18:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:25.092 18:11:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret 
DHHC-1:00:YTRmMGQ3Y2M0NzNjOWU2ZDRkY2E2NmJkYThmNzcwNjBmNTY3YzczNDZmMWYxNDIyepjEhg==: --dhchap-ctrl-secret DHHC-1:03:MmFlNzlkYWFjZWYzMjU5ZGNlMDAxMzUwZTU0M2VkNTc5NGNmZmQ1Nzc2NTA1MWZmZmExNDNmM2UwZGNlZThhMuRHrdo=: 00:16:25.660 18:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:25.660 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:25.660 18:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:25.660 18:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.660 18:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.660 18:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.660 18:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:25.660 18:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:25.660 18:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:25.660 18:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:16:25.660 18:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:25.660 18:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:25.660 18:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:25.660 18:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:25.660 18:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:25.660 18:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:25.660 18:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.660 18:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.660 18:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.660 18:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:25.660 18:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key 
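The --dhchap-secret / --dhchap-ctrl-secret strings passed to nvme connect above are the formatted keys from earlier in this log; the base64 payload of DHHC-1:00:YTRm...: decodes back to the a4f0d7cc... hex string generated for key0. A sketch that unpacks such a secret and checks its trailer, under the same little-endian CRC32 assumption as the generator sketch above:

import base64
import zlib

def unpack_dhchap_secret(secret):
    prefix, _digest, b64, _ = secret.split(":")
    if prefix != "DHHC-1":
        raise ValueError("not a DHHC-1 secret")
    blob = base64.b64decode(b64)
    key, crc = blob[:-4], blob[-4:]
    # Assumption: little-endian CRC32 trailer over the key material.
    if zlib.crc32(key).to_bytes(4, "little") != crc:
        raise ValueError("DHHC-1 secret failed CRC check")
    return key  # for the keys in this log, an ASCII hex string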
ckey1 00:16:25.919 00:16:25.919 18:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:25.919 18:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:25.919 18:11:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:26.177 18:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.177 18:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:26.177 18:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.177 18:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.177 18:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.177 18:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:26.177 { 00:16:26.177 "cntlid": 3, 00:16:26.177 "qid": 0, 00:16:26.177 "state": "enabled", 00:16:26.177 "thread": "nvmf_tgt_poll_group_000", 00:16:26.177 "listen_address": { 00:16:26.177 "trtype": "TCP", 00:16:26.177 "adrfam": "IPv4", 00:16:26.177 "traddr": "10.0.0.2", 00:16:26.177 "trsvcid": "4420" 00:16:26.177 }, 00:16:26.177 "peer_address": { 00:16:26.177 "trtype": "TCP", 00:16:26.177 "adrfam": "IPv4", 00:16:26.177 "traddr": "10.0.0.1", 00:16:26.177 "trsvcid": "45842" 00:16:26.177 }, 00:16:26.177 "auth": { 00:16:26.177 "state": "completed", 00:16:26.177 "digest": "sha256", 00:16:26.177 "dhgroup": "null" 00:16:26.177 } 00:16:26.177 } 00:16:26.177 ]' 00:16:26.177 18:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:26.177 18:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:26.177 18:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:26.177 18:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:26.177 18:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:26.177 18:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:26.177 18:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:26.177 18:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:26.436 18:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:OTYzMTliOTdkNjdiM2NkYWU2YmE4N2EyNjY0ZjgzYmEzqmAO: --dhchap-ctrl-secret DHHC-1:02:ZWZlM2NhY2IyN2ZhYzcwZDM0MzJlMzJjNTk2ZGNmY2ZjMjZjMTgyMWJlZjZlNmIyxRA4HA==: 00:16:27.003 18:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:27.003 NQN:nqn.2024-03.io.spdk:cnode0 
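After each attach, the test asserts the negotiated parameters rather than just that the controller came up: bdev_nvme_get_controllers on the host must report nvme0, and nvmf_subsystem_get_qpairs on the target must show auth.state "completed" with the expected digest and dhgroup, which is exactly what the jq filters above extract from the qpairs JSON. A Python equivalent of those checks, with the rpc.py path again an assumption:

import json
import subprocess

RPC = "scripts/rpc.py"  # SPDK's rpc.py; adjust to your checkout

def check_qpair_auth(subnqn, digest, dhgroup):
    out = subprocess.run([RPC, "nvmf_subsystem_get_qpairs", subnqn],
                         check=True, capture_output=True, text=True).stdout
    auth = json.loads(out)[0]["auth"]
    # Same assertions as the jq '.[0].auth.*' checks in this log.
    assert auth["state"] == "completed"
    assert auth["digest"] == digest
    assert auth["dhgroup"] == dhgroup

check_qpair_auth("nqn.2024-03.io.spdk:cnode0", "sha256", "null")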
disconnected 1 controller(s) 00:16:27.003 18:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:27.003 18:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.003 18:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.003 18:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.003 18:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:27.003 18:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:27.004 18:11:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:27.263 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:16:27.263 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:27.263 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:27.263 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:27.263 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:27.263 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:27.263 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:27.263 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.263 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.263 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.263 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:27.263 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:27.522 00:16:27.522 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:27.522 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:27.522 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:27.522 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:27.522 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:27.522 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.522 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.522 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.522 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:27.522 { 00:16:27.522 "cntlid": 5, 00:16:27.522 "qid": 0, 00:16:27.522 "state": "enabled", 00:16:27.522 "thread": "nvmf_tgt_poll_group_000", 00:16:27.522 "listen_address": { 00:16:27.522 "trtype": "TCP", 00:16:27.522 "adrfam": "IPv4", 00:16:27.522 "traddr": "10.0.0.2", 00:16:27.522 "trsvcid": "4420" 00:16:27.522 }, 00:16:27.522 "peer_address": { 00:16:27.522 "trtype": "TCP", 00:16:27.522 "adrfam": "IPv4", 00:16:27.522 "traddr": "10.0.0.1", 00:16:27.522 "trsvcid": "48336" 00:16:27.522 }, 00:16:27.522 "auth": { 00:16:27.522 "state": "completed", 00:16:27.522 "digest": "sha256", 00:16:27.522 "dhgroup": "null" 00:16:27.522 } 00:16:27.522 } 00:16:27.523 ]' 00:16:27.523 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:27.523 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:27.523 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:27.782 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:27.782 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:27.782 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:27.782 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:27.782 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:27.782 18:11:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:YjA2Yzc4ZjdjZTJiMzQ4ZjJhMGQ3ZmRhMmZkZDIyYjAyYmMxY2JkOGRhN2MwZThj4zGs4w==: --dhchap-ctrl-secret DHHC-1:01:NmYyMzI5ZGFlODRhMmZiMWRhMzQ5ZTc1NTdlYjAzYmbkqxC8: 00:16:28.391 18:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:28.391 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:28.391 18:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:28.391 18:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 
-- # xtrace_disable 00:16:28.391 18:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.391 18:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.391 18:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:28.391 18:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:28.391 18:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:28.663 18:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:16:28.663 18:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:28.663 18:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:28.663 18:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:28.663 18:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:28.663 18:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:28.663 18:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:16:28.663 18:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.663 18:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.663 18:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.663 18:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:28.663 18:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:28.922 00:16:28.922 18:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:28.922 18:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:28.922 18:11:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:29.180 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:29.180 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:29.180 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.180 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.180 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.180 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:29.180 { 00:16:29.180 "cntlid": 7, 00:16:29.180 "qid": 0, 00:16:29.180 "state": "enabled", 00:16:29.180 "thread": "nvmf_tgt_poll_group_000", 00:16:29.180 "listen_address": { 00:16:29.180 "trtype": "TCP", 00:16:29.180 "adrfam": "IPv4", 00:16:29.180 "traddr": "10.0.0.2", 00:16:29.180 "trsvcid": "4420" 00:16:29.180 }, 00:16:29.180 "peer_address": { 00:16:29.180 "trtype": "TCP", 00:16:29.180 "adrfam": "IPv4", 00:16:29.180 "traddr": "10.0.0.1", 00:16:29.180 "trsvcid": "48352" 00:16:29.180 }, 00:16:29.180 "auth": { 00:16:29.180 "state": "completed", 00:16:29.180 "digest": "sha256", 00:16:29.180 "dhgroup": "null" 00:16:29.180 } 00:16:29.180 } 00:16:29.180 ]' 00:16:29.180 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:29.180 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:29.180 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:29.180 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:29.180 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:29.180 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:29.180 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:29.180 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:29.439 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:NDdkYTZhMzkxOGIxYTcyMjc1Njc5NzdmYzBmYjNjMWRiZjJiN2RiNzg0MGJiOTRhM2I3YWI0YzBhY2U5NWY0YpKo2wI=: 00:16:30.006 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:30.006 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:30.006 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:30.006 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.006 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.006 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.006 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:30.006 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:30.006 18:11:22 
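With key3 verified, one full pass of the sweep is complete: the loops at target/auth.sh@91-93 iterate digest x dhgroup x keyid, and each iteration reconfigures the host (bdev_nvme_set_options), provisions the target host entry with the key pair, attaches, verifies the qpair auth state, then tears down via nvme disconnect and nvmf_subsystem_remove_host. The log switches from dhgroup null to ffdhe2048 immediately below. A skeletal Python rendering of that sweep, with stub bodies standing in for the RPC calls; only sha256 and the two dhgroups visible in this slice are listed, the full lists live in auth.sh:

def set_host_options(digest, dhgroup):
    # stand-in for: rpc.py -s /var/tmp/host.sock bdev_nvme_set_options
    #   --dhchap-digests <digest> --dhchap-dhgroups <dhgroup>
    print(f"set_options {digest}/{dhgroup}")

def connect_authenticate(digest, dhgroup, keyid):
    # stand-in for add_host + attach_controller + qpair checks + teardown
    print(f"authenticate {digest}/{dhgroup} key{keyid}")

for digest in ["sha256"]:                  # digests seen so far in this log
    for dhgroup in ["null", "ffdhe2048"]:  # dhgroups seen so far
        for keyid in range(4):             # keys[0..3] generated earlier
            set_host_options(digest, dhgroup)
            connect_authenticate(digest, dhgroup, keyid)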
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:30.006 18:11:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:30.006 18:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:16:30.006 18:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:30.006 18:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:30.006 18:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:30.006 18:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:30.006 18:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:30.006 18:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:30.006 18:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.006 18:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.006 18:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.006 18:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:30.006 18:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:30.264 00:16:30.264 18:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:30.264 18:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:30.264 18:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:30.522 18:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.522 18:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:30.522 18:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.522 18:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.522 18:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.522 18:11:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:30.522 { 00:16:30.522 "cntlid": 9, 00:16:30.522 "qid": 0, 00:16:30.522 "state": "enabled", 00:16:30.522 "thread": "nvmf_tgt_poll_group_000", 00:16:30.522 "listen_address": { 00:16:30.522 "trtype": "TCP", 00:16:30.522 "adrfam": "IPv4", 00:16:30.522 "traddr": "10.0.0.2", 00:16:30.522 "trsvcid": "4420" 00:16:30.522 }, 00:16:30.522 "peer_address": { 00:16:30.522 "trtype": "TCP", 00:16:30.522 "adrfam": "IPv4", 00:16:30.522 "traddr": "10.0.0.1", 00:16:30.522 "trsvcid": "48378" 00:16:30.522 }, 00:16:30.522 "auth": { 00:16:30.522 "state": "completed", 00:16:30.522 "digest": "sha256", 00:16:30.522 "dhgroup": "ffdhe2048" 00:16:30.522 } 00:16:30.522 } 00:16:30.522 ]' 00:16:30.522 18:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:30.522 18:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:30.522 18:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:30.522 18:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:30.522 18:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:30.780 18:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:30.780 18:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:30.780 18:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:30.780 18:11:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:YTRmMGQ3Y2M0NzNjOWU2ZDRkY2E2NmJkYThmNzcwNjBmNTY3YzczNDZmMWYxNDIyepjEhg==: --dhchap-ctrl-secret DHHC-1:03:MmFlNzlkYWFjZWYzMjU5ZGNlMDAxMzUwZTU0M2VkNTc5NGNmZmQ1Nzc2NTA1MWZmZmExNDNmM2UwZGNlZThhMuRHrdo=: 00:16:31.347 18:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:31.347 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:31.347 18:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:31.347 18:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.348 18:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.348 18:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.348 18:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:31.348 18:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:31.348 18:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:31.606 18:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:16:31.606 18:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:31.606 18:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:31.606 18:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:31.606 18:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:31.606 18:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:31.606 18:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:31.606 18:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.606 18:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.606 18:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.606 18:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:31.606 18:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:31.865 00:16:31.865 18:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:31.865 18:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:31.865 18:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:32.124 18:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:32.124 18:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:32.124 18:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.124 18:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.124 18:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.124 18:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:32.124 { 00:16:32.124 "cntlid": 11, 00:16:32.124 "qid": 0, 00:16:32.124 "state": "enabled", 00:16:32.124 "thread": "nvmf_tgt_poll_group_000", 00:16:32.124 "listen_address": { 
00:16:32.124 "trtype": "TCP", 00:16:32.124 "adrfam": "IPv4", 00:16:32.124 "traddr": "10.0.0.2", 00:16:32.124 "trsvcid": "4420" 00:16:32.124 }, 00:16:32.124 "peer_address": { 00:16:32.124 "trtype": "TCP", 00:16:32.124 "adrfam": "IPv4", 00:16:32.124 "traddr": "10.0.0.1", 00:16:32.124 "trsvcid": "48402" 00:16:32.124 }, 00:16:32.124 "auth": { 00:16:32.124 "state": "completed", 00:16:32.124 "digest": "sha256", 00:16:32.124 "dhgroup": "ffdhe2048" 00:16:32.124 } 00:16:32.124 } 00:16:32.124 ]' 00:16:32.124 18:11:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:32.124 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:32.124 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:32.124 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:32.124 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:32.124 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:32.124 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:32.124 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:32.383 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:OTYzMTliOTdkNjdiM2NkYWU2YmE4N2EyNjY0ZjgzYmEzqmAO: --dhchap-ctrl-secret DHHC-1:02:ZWZlM2NhY2IyN2ZhYzcwZDM0MzJlMzJjNTk2ZGNmY2ZjMjZjMTgyMWJlZjZlNmIyxRA4HA==: 00:16:32.950 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:32.950 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:32.950 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:32.950 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.950 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.950 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.950 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:32.951 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:32.951 18:11:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:32.951 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:16:32.951 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:32.951 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:32.951 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:32.951 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:32.951 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:32.951 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:32.951 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.951 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.951 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.951 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:32.951 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:33.209 00:16:33.209 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:33.209 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:33.209 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:33.469 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.469 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:33.469 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.469 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.470 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.470 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:33.470 { 00:16:33.470 "cntlid": 13, 00:16:33.470 "qid": 0, 00:16:33.470 "state": "enabled", 00:16:33.470 "thread": "nvmf_tgt_poll_group_000", 00:16:33.470 "listen_address": { 00:16:33.470 "trtype": "TCP", 00:16:33.470 "adrfam": "IPv4", 00:16:33.470 "traddr": "10.0.0.2", 00:16:33.470 "trsvcid": "4420" 00:16:33.470 }, 00:16:33.470 "peer_address": { 00:16:33.470 "trtype": "TCP", 00:16:33.470 "adrfam": "IPv4", 00:16:33.470 "traddr": "10.0.0.1", 00:16:33.470 "trsvcid": "48424" 00:16:33.470 }, 00:16:33.470 "auth": { 00:16:33.470 
"state": "completed", 00:16:33.470 "digest": "sha256", 00:16:33.470 "dhgroup": "ffdhe2048" 00:16:33.470 } 00:16:33.470 } 00:16:33.470 ]' 00:16:33.470 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:33.470 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:33.470 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:33.470 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:33.470 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:33.728 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:33.728 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:33.728 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:33.728 18:11:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:YjA2Yzc4ZjdjZTJiMzQ4ZjJhMGQ3ZmRhMmZkZDIyYjAyYmMxY2JkOGRhN2MwZThj4zGs4w==: --dhchap-ctrl-secret DHHC-1:01:NmYyMzI5ZGFlODRhMmZiMWRhMzQ5ZTc1NTdlYjAzYmbkqxC8: 00:16:34.295 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:34.295 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:34.295 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:34.295 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.295 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.295 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.295 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:34.295 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:34.295 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:34.554 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:16:34.554 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:34.554 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:34.554 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:34.554 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key3 00:16:34.554 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:34.554 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:16:34.554 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.554 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.554 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.554 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:34.554 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:34.813 00:16:34.813 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:34.813 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:34.813 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:35.071 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:35.072 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:35.072 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.072 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.072 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.072 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:35.072 { 00:16:35.072 "cntlid": 15, 00:16:35.072 "qid": 0, 00:16:35.072 "state": "enabled", 00:16:35.072 "thread": "nvmf_tgt_poll_group_000", 00:16:35.072 "listen_address": { 00:16:35.072 "trtype": "TCP", 00:16:35.072 "adrfam": "IPv4", 00:16:35.072 "traddr": "10.0.0.2", 00:16:35.072 "trsvcid": "4420" 00:16:35.072 }, 00:16:35.072 "peer_address": { 00:16:35.072 "trtype": "TCP", 00:16:35.072 "adrfam": "IPv4", 00:16:35.072 "traddr": "10.0.0.1", 00:16:35.072 "trsvcid": "48460" 00:16:35.072 }, 00:16:35.072 "auth": { 00:16:35.072 "state": "completed", 00:16:35.072 "digest": "sha256", 00:16:35.072 "dhgroup": "ffdhe2048" 00:16:35.072 } 00:16:35.072 } 00:16:35.072 ]' 00:16:35.072 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:35.072 18:11:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:35.072 18:11:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:35.072 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:35.072 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:35.072 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:35.072 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:35.072 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:35.330 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:NDdkYTZhMzkxOGIxYTcyMjc1Njc5NzdmYzBmYjNjMWRiZjJiN2RiNzg0MGJiOTRhM2I3YWI0YzBhY2U5NWY0YpKo2wI=: 00:16:35.897 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:35.897 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:35.897 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:35.897 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.897 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.897 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.897 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:35.897 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:35.897 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:35.897 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:35.897 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:16:35.897 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:35.897 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:35.897 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:35.897 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:35.897 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:35.897 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:35.897 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.897 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.156 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.156 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:36.156 18:11:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:36.156 00:16:36.156 18:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:36.156 18:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:36.156 18:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.414 18:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.414 18:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:36.414 18:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.414 18:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.414 18:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.414 18:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:36.414 { 00:16:36.414 "cntlid": 17, 00:16:36.414 "qid": 0, 00:16:36.415 "state": "enabled", 00:16:36.415 "thread": "nvmf_tgt_poll_group_000", 00:16:36.415 "listen_address": { 00:16:36.415 "trtype": "TCP", 00:16:36.415 "adrfam": "IPv4", 00:16:36.415 "traddr": "10.0.0.2", 00:16:36.415 "trsvcid": "4420" 00:16:36.415 }, 00:16:36.415 "peer_address": { 00:16:36.415 "trtype": "TCP", 00:16:36.415 "adrfam": "IPv4", 00:16:36.415 "traddr": "10.0.0.1", 00:16:36.415 "trsvcid": "48492" 00:16:36.415 }, 00:16:36.415 "auth": { 00:16:36.415 "state": "completed", 00:16:36.415 "digest": "sha256", 00:16:36.415 "dhgroup": "ffdhe3072" 00:16:36.415 } 00:16:36.415 } 00:16:36.415 ]' 00:16:36.415 18:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:36.415 18:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:36.415 18:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:36.415 18:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:36.674 18:11:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:36.674 18:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:36.674 18:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:36.674 18:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:36.674 18:11:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:YTRmMGQ3Y2M0NzNjOWU2ZDRkY2E2NmJkYThmNzcwNjBmNTY3YzczNDZmMWYxNDIyepjEhg==: --dhchap-ctrl-secret DHHC-1:03:MmFlNzlkYWFjZWYzMjU5ZGNlMDAxMzUwZTU0M2VkNTc5NGNmZmQ1Nzc2NTA1MWZmZmExNDNmM2UwZGNlZThhMuRHrdo=: 00:16:37.242 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:37.242 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:37.242 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:37.242 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.242 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.242 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.242 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:37.242 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:37.242 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:37.501 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:16:37.501 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:37.501 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:37.501 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:37.501 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:37.501 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:37.501 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:37.501 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.501 18:11:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.501 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.501 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:37.501 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:37.759 00:16:37.759 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:37.759 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:37.759 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:38.018 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.018 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:38.018 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.018 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.018 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.018 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:38.018 { 00:16:38.018 "cntlid": 19, 00:16:38.018 "qid": 0, 00:16:38.018 "state": "enabled", 00:16:38.018 "thread": "nvmf_tgt_poll_group_000", 00:16:38.018 "listen_address": { 00:16:38.018 "trtype": "TCP", 00:16:38.018 "adrfam": "IPv4", 00:16:38.018 "traddr": "10.0.0.2", 00:16:38.018 "trsvcid": "4420" 00:16:38.018 }, 00:16:38.018 "peer_address": { 00:16:38.018 "trtype": "TCP", 00:16:38.018 "adrfam": "IPv4", 00:16:38.018 "traddr": "10.0.0.1", 00:16:38.018 "trsvcid": "46370" 00:16:38.018 }, 00:16:38.018 "auth": { 00:16:38.018 "state": "completed", 00:16:38.018 "digest": "sha256", 00:16:38.018 "dhgroup": "ffdhe3072" 00:16:38.018 } 00:16:38.018 } 00:16:38.018 ]' 00:16:38.018 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:38.018 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:38.018 18:11:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:38.018 18:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:38.018 18:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:38.018 18:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:38.018 18:11:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:38.018 18:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:38.277 18:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:OTYzMTliOTdkNjdiM2NkYWU2YmE4N2EyNjY0ZjgzYmEzqmAO: --dhchap-ctrl-secret DHHC-1:02:ZWZlM2NhY2IyN2ZhYzcwZDM0MzJlMzJjNTk2ZGNmY2ZjMjZjMTgyMWJlZjZlNmIyxRA4HA==: 00:16:38.842 18:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:38.842 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:38.842 18:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:38.842 18:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.842 18:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.842 18:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.842 18:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:38.842 18:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:38.842 18:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:39.101 18:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:16:39.101 18:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:39.101 18:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:39.101 18:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:39.101 18:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:39.101 18:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:39.101 18:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:39.101 18:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.101 18:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.101 18:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.101 18:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:39.101 18:11:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:39.360 00:16:39.360 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:39.360 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:39.360 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:39.360 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.360 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:39.360 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.360 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.360 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.360 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:39.360 { 00:16:39.360 "cntlid": 21, 00:16:39.360 "qid": 0, 00:16:39.360 "state": "enabled", 00:16:39.360 "thread": "nvmf_tgt_poll_group_000", 00:16:39.360 "listen_address": { 00:16:39.360 "trtype": "TCP", 00:16:39.360 "adrfam": "IPv4", 00:16:39.360 "traddr": "10.0.0.2", 00:16:39.360 "trsvcid": "4420" 00:16:39.360 }, 00:16:39.360 "peer_address": { 00:16:39.360 "trtype": "TCP", 00:16:39.360 "adrfam": "IPv4", 00:16:39.360 "traddr": "10.0.0.1", 00:16:39.360 "trsvcid": "46396" 00:16:39.360 }, 00:16:39.360 "auth": { 00:16:39.360 "state": "completed", 00:16:39.360 "digest": "sha256", 00:16:39.360 "dhgroup": "ffdhe3072" 00:16:39.360 } 00:16:39.360 } 00:16:39.360 ]' 00:16:39.360 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:39.360 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:39.360 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:39.619 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:39.619 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:39.619 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:39.619 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:39.619 18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.619 
18:11:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:YjA2Yzc4ZjdjZTJiMzQ4ZjJhMGQ3ZmRhMmZkZDIyYjAyYmMxY2JkOGRhN2MwZThj4zGs4w==: --dhchap-ctrl-secret DHHC-1:01:NmYyMzI5ZGFlODRhMmZiMWRhMzQ5ZTc1NTdlYjAzYmbkqxC8: 00:16:40.184 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:40.184 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:40.184 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:40.184 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.184 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.184 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.184 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:40.184 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:40.184 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:40.443 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:16:40.443 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:40.443 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:40.443 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:40.443 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:40.443 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:40.443 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:16:40.443 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.443 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.443 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.443 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:40.443 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:40.701 00:16:40.701 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:40.701 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:40.701 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.960 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.960 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:40.960 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.960 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.960 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.960 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:40.960 { 00:16:40.960 "cntlid": 23, 00:16:40.960 "qid": 0, 00:16:40.960 "state": "enabled", 00:16:40.960 "thread": "nvmf_tgt_poll_group_000", 00:16:40.960 "listen_address": { 00:16:40.960 "trtype": "TCP", 00:16:40.960 "adrfam": "IPv4", 00:16:40.960 "traddr": "10.0.0.2", 00:16:40.960 "trsvcid": "4420" 00:16:40.960 }, 00:16:40.960 "peer_address": { 00:16:40.960 "trtype": "TCP", 00:16:40.960 "adrfam": "IPv4", 00:16:40.960 "traddr": "10.0.0.1", 00:16:40.960 "trsvcid": "46406" 00:16:40.960 }, 00:16:40.960 "auth": { 00:16:40.960 "state": "completed", 00:16:40.960 "digest": "sha256", 00:16:40.960 "dhgroup": "ffdhe3072" 00:16:40.960 } 00:16:40.960 } 00:16:40.960 ]' 00:16:40.960 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:40.960 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:40.960 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:40.960 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:40.960 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:40.960 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:40.960 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:40.960 18:11:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:41.218 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:NDdkYTZhMzkxOGIxYTcyMjc1Njc5NzdmYzBmYjNjMWRiZjJiN2RiNzg0MGJiOTRhM2I3YWI0YzBhY2U5NWY0YpKo2wI=: 00:16:41.785 18:11:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:41.785 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:41.785 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:41.785 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.785 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.785 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.785 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:41.785 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:41.785 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:41.785 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:42.045 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:16:42.045 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:42.045 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:42.045 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:42.045 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:42.045 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:42.045 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:42.045 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.045 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.045 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.045 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:42.045 18:11:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:42.304 00:16:42.304 18:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:42.304 18:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:42.304 18:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:42.304 18:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.304 18:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:42.304 18:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.304 18:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.304 18:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.304 18:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:42.304 { 00:16:42.304 "cntlid": 25, 00:16:42.304 "qid": 0, 00:16:42.304 "state": "enabled", 00:16:42.304 "thread": "nvmf_tgt_poll_group_000", 00:16:42.304 "listen_address": { 00:16:42.304 "trtype": "TCP", 00:16:42.304 "adrfam": "IPv4", 00:16:42.304 "traddr": "10.0.0.2", 00:16:42.304 "trsvcid": "4420" 00:16:42.304 }, 00:16:42.304 "peer_address": { 00:16:42.304 "trtype": "TCP", 00:16:42.304 "adrfam": "IPv4", 00:16:42.304 "traddr": "10.0.0.1", 00:16:42.304 "trsvcid": "46428" 00:16:42.304 }, 00:16:42.304 "auth": { 00:16:42.304 "state": "completed", 00:16:42.304 "digest": "sha256", 00:16:42.304 "dhgroup": "ffdhe4096" 00:16:42.304 } 00:16:42.304 } 00:16:42.304 ]' 00:16:42.304 18:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:42.304 18:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:42.304 18:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:42.562 18:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:42.562 18:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:42.562 18:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:42.562 18:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.562 18:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.562 18:11:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:YTRmMGQ3Y2M0NzNjOWU2ZDRkY2E2NmJkYThmNzcwNjBmNTY3YzczNDZmMWYxNDIyepjEhg==: --dhchap-ctrl-secret DHHC-1:03:MmFlNzlkYWFjZWYzMjU5ZGNlMDAxMzUwZTU0M2VkNTc5NGNmZmQ1Nzc2NTA1MWZmZmExNDNmM2UwZGNlZThhMuRHrdo=: 00:16:43.129 18:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.129 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
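
For orientation, the trace above keeps repeating one loop body per (dhgroup, key id) pair. Below is a condensed sketch of that loop, reconstructed only from the xtrace lines in this excerpt: the keys/ckeys arrays and the key objects themselves are created earlier in target/auth.sh, outside this excerpt, rpc_cmd is the target-side RPC helper from common/autotest_common.sh as logged, and the DHHC-1 secrets are elided here (their full values appear in the trace).

  # sha256 pass of target/auth.sh@92-96; each iteration must end with
  # nvmf_subsystem_get_qpairs reporting auth.state == "completed"
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  hostrpc() { "$rpc" -s /var/tmp/host.sock "$@"; }   # host-side bdev_nvme RPC server, as logged
  nqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562

  for dhgroup in "${dhgroups[@]}"; do   # ffdhe2048, ffdhe3072, ffdhe4096 appear in this excerpt
    for keyid in "${!keys[@]}"; do      # key0..key3; ckey3 is unset, so key3 runs without a ctrlr key
      # pin the host to a single digest/dhgroup combination
      hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
      # target side: allow the host with the key pair under test
      rpc_cmd nvmf_subsystem_add_host "$nqn" "$hostnqn" --dhchap-key "key$keyid" \
              ${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"}
      # host side: attach with the same keys, then verify what was negotiated
      hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
              -q "$hostnqn" -n "$nqn" --dhchap-key "key$keyid" \
              ${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"}
      hostrpc bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
      # the log runs three separate jq checks; condensed into one here
      rpc_cmd nvmf_subsystem_get_qpairs "$nqn" \
          | jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state'
      hostrpc bdev_nvme_detach_controller nvme0
      # repeat the handshake through nvme-cli, then tear everything down
      nvme connect -t tcp -a 10.0.0.2 -n "$nqn" -i 1 -q "$hostnqn" \
           --hostid 803833e2-2ada-e911-906e-0017a4403562 \
           --dhchap-secret 'DHHC-1:00:...:' --dhchap-ctrl-secret 'DHHC-1:03:...:'
      nvme disconnect -n "$nqn"
      rpc_cmd nvmf_subsystem_remove_host "$nqn" "$hostnqn"
    done
  done

Note that the DHHC-1:NN: prefixes on the logged secrets differ per key id (00 through 03). In the NVMe DH-HMAC-CHAP secret representation that second field names the hash used to transform the base secret (00 meaning untransformed), so, assuming the test keys were generated that way, each key id also exercises a different secret format against every FFDHE group.
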
00:16:43.129 18:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:43.129 18:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.129 18:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.129 18:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.129 18:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:43.129 18:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:43.129 18:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:43.387 18:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:16:43.387 18:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:43.387 18:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:43.387 18:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:43.387 18:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:43.387 18:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.387 18:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:43.387 18:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.387 18:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.387 18:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.387 18:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:43.387 18:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:43.646 00:16:43.646 18:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:43.646 18:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:43.646 18:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.904 18:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.904 18:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:43.904 18:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.904 18:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.904 18:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.904 18:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:43.904 { 00:16:43.904 "cntlid": 27, 00:16:43.904 "qid": 0, 00:16:43.904 "state": "enabled", 00:16:43.904 "thread": "nvmf_tgt_poll_group_000", 00:16:43.904 "listen_address": { 00:16:43.904 "trtype": "TCP", 00:16:43.904 "adrfam": "IPv4", 00:16:43.904 "traddr": "10.0.0.2", 00:16:43.904 "trsvcid": "4420" 00:16:43.904 }, 00:16:43.904 "peer_address": { 00:16:43.904 "trtype": "TCP", 00:16:43.904 "adrfam": "IPv4", 00:16:43.904 "traddr": "10.0.0.1", 00:16:43.904 "trsvcid": "46454" 00:16:43.904 }, 00:16:43.904 "auth": { 00:16:43.904 "state": "completed", 00:16:43.904 "digest": "sha256", 00:16:43.904 "dhgroup": "ffdhe4096" 00:16:43.904 } 00:16:43.904 } 00:16:43.904 ]' 00:16:43.904 18:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:43.904 18:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:43.904 18:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:43.904 18:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:43.904 18:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:43.904 18:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:43.904 18:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:43.904 18:11:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:44.163 18:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:OTYzMTliOTdkNjdiM2NkYWU2YmE4N2EyNjY0ZjgzYmEzqmAO: --dhchap-ctrl-secret DHHC-1:02:ZWZlM2NhY2IyN2ZhYzcwZDM0MzJlMzJjNTk2ZGNmY2ZjMjZjMTgyMWJlZjZlNmIyxRA4HA==: 00:16:44.732 18:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:44.732 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:44.732 18:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:44.732 18:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.733 18:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.733 18:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.733 18:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:44.733 18:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:44.733 18:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:45.010 18:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:16:45.010 18:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:45.010 18:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:45.010 18:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:45.010 18:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:45.010 18:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:45.010 18:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:45.010 18:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.010 18:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.010 18:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.010 18:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:45.010 18:11:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:45.280 00:16:45.280 18:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:45.280 18:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:45.280 18:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.280 18:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.280 18:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.280 18:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.280 18:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.280 18:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.280 18:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:45.280 { 00:16:45.280 "cntlid": 29, 00:16:45.280 "qid": 0, 00:16:45.280 "state": "enabled", 00:16:45.280 "thread": "nvmf_tgt_poll_group_000", 00:16:45.280 "listen_address": { 00:16:45.280 "trtype": "TCP", 00:16:45.280 "adrfam": "IPv4", 00:16:45.280 "traddr": "10.0.0.2", 00:16:45.280 "trsvcid": "4420" 00:16:45.280 }, 00:16:45.280 "peer_address": { 00:16:45.280 "trtype": "TCP", 00:16:45.280 "adrfam": "IPv4", 00:16:45.280 "traddr": "10.0.0.1", 00:16:45.280 "trsvcid": "46482" 00:16:45.280 }, 00:16:45.280 "auth": { 00:16:45.280 "state": "completed", 00:16:45.280 "digest": "sha256", 00:16:45.280 "dhgroup": "ffdhe4096" 00:16:45.280 } 00:16:45.280 } 00:16:45.280 ]' 00:16:45.280 18:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:45.280 18:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:45.280 18:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:45.539 18:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:45.539 18:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:45.539 18:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:45.539 18:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:45.539 18:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:45.539 18:11:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:YjA2Yzc4ZjdjZTJiMzQ4ZjJhMGQ3ZmRhMmZkZDIyYjAyYmMxY2JkOGRhN2MwZThj4zGs4w==: --dhchap-ctrl-secret DHHC-1:01:NmYyMzI5ZGFlODRhMmZiMWRhMzQ5ZTc1NTdlYjAzYmbkqxC8: 00:16:46.106 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:46.106 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:46.106 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:46.106 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.106 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.106 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.106 18:11:39 
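Each pass also exercises the kernel initiator with the same key material. The secrets are passed in DHHC-1 form; if TP 8006 memory serves, the two-digit field after DHHC-1 encodes how the secret was transformed (00 plain, 01/02/03 for SHA-256/-384/-512). The round trip, with the actual secret payloads elided:

    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 \
        --hostid 803833e2-2ada-e911-906e-0017a4403562 \
        --dhchap-secret      "DHHC-1:02:<host secret elided>" \
        --dhchap-ctrl-secret "DHHC-1:01:<controller secret elided>"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
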
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:46.106 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:46.106 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:46.365 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:16:46.365 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:46.365 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:46.365 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:46.365 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:46.365 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.365 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:16:46.365 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.365 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.365 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.365 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:46.365 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:46.624 00:16:46.624 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:46.624 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:46.624 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:46.882 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.882 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:46.882 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.882 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.882 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:16:46.882 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:46.882 { 00:16:46.882 "cntlid": 31, 00:16:46.882 "qid": 0, 00:16:46.882 "state": "enabled", 00:16:46.882 "thread": "nvmf_tgt_poll_group_000", 00:16:46.882 "listen_address": { 00:16:46.882 "trtype": "TCP", 00:16:46.882 "adrfam": "IPv4", 00:16:46.882 "traddr": "10.0.0.2", 00:16:46.882 "trsvcid": "4420" 00:16:46.882 }, 00:16:46.882 "peer_address": { 00:16:46.882 "trtype": "TCP", 00:16:46.882 "adrfam": "IPv4", 00:16:46.882 "traddr": "10.0.0.1", 00:16:46.882 "trsvcid": "45794" 00:16:46.882 }, 00:16:46.882 "auth": { 00:16:46.882 "state": "completed", 00:16:46.882 "digest": "sha256", 00:16:46.882 "dhgroup": "ffdhe4096" 00:16:46.882 } 00:16:46.882 } 00:16:46.882 ]' 00:16:46.882 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:46.882 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:46.882 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:46.882 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:46.882 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:46.882 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:46.882 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:46.882 18:11:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:47.141 18:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:NDdkYTZhMzkxOGIxYTcyMjc1Njc5NzdmYzBmYjNjMWRiZjJiN2RiNzg0MGJiOTRhM2I3YWI0YzBhY2U5NWY0YpKo2wI=: 00:16:47.708 18:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:47.708 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:47.708 18:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:47.708 18:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.708 18:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.708 18:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.708 18:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:47.708 18:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:47.708 18:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:47.708 18:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:47.967 18:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:16:47.967 18:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:47.967 18:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:47.967 18:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:47.967 18:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:47.967 18:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:47.967 18:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.967 18:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.967 18:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.967 18:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.967 18:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.968 18:11:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.225 00:16:48.225 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:48.225 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:48.225 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:48.483 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.483 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:48.483 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.483 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.483 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.483 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:48.483 { 00:16:48.483 "cntlid": 33, 00:16:48.483 "qid": 0, 00:16:48.483 "state": "enabled", 00:16:48.483 "thread": "nvmf_tgt_poll_group_000", 00:16:48.483 "listen_address": { 
00:16:48.483 "trtype": "TCP", 00:16:48.483 "adrfam": "IPv4", 00:16:48.483 "traddr": "10.0.0.2", 00:16:48.483 "trsvcid": "4420" 00:16:48.483 }, 00:16:48.483 "peer_address": { 00:16:48.483 "trtype": "TCP", 00:16:48.483 "adrfam": "IPv4", 00:16:48.483 "traddr": "10.0.0.1", 00:16:48.483 "trsvcid": "45830" 00:16:48.483 }, 00:16:48.483 "auth": { 00:16:48.483 "state": "completed", 00:16:48.483 "digest": "sha256", 00:16:48.483 "dhgroup": "ffdhe6144" 00:16:48.483 } 00:16:48.483 } 00:16:48.483 ]' 00:16:48.483 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:48.483 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:48.483 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:48.483 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:48.483 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:48.483 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:48.483 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:48.483 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:48.823 18:11:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:YTRmMGQ3Y2M0NzNjOWU2ZDRkY2E2NmJkYThmNzcwNjBmNTY3YzczNDZmMWYxNDIyepjEhg==: --dhchap-ctrl-secret DHHC-1:03:MmFlNzlkYWFjZWYzMjU5ZGNlMDAxMzUwZTU0M2VkNTc5NGNmZmQ1Nzc2NTA1MWZmZmExNDNmM2UwZGNlZThhMuRHrdo=: 00:16:49.389 18:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:49.389 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:49.389 18:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:49.389 18:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.389 18:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.389 18:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.389 18:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:49.389 18:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:49.389 18:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:49.389 18:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:16:49.389 18:11:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:49.389 18:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:49.389 18:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:49.389 18:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:49.389 18:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:49.389 18:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.389 18:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.389 18:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.389 18:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.390 18:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.390 18:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.647 00:16:49.647 18:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:49.647 18:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:49.647 18:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.905 18:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.905 18:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.905 18:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.906 18:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.906 18:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.906 18:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:49.906 { 00:16:49.906 "cntlid": 35, 00:16:49.906 "qid": 0, 00:16:49.906 "state": "enabled", 00:16:49.906 "thread": "nvmf_tgt_poll_group_000", 00:16:49.906 "listen_address": { 00:16:49.906 "trtype": "TCP", 00:16:49.906 "adrfam": "IPv4", 00:16:49.906 "traddr": "10.0.0.2", 00:16:49.906 "trsvcid": "4420" 00:16:49.906 }, 00:16:49.906 "peer_address": { 00:16:49.906 "trtype": "TCP", 00:16:49.906 "adrfam": "IPv4", 00:16:49.906 "traddr": "10.0.0.1", 00:16:49.906 "trsvcid": "45858" 00:16:49.906 
}, 00:16:49.906 "auth": { 00:16:49.906 "state": "completed", 00:16:49.906 "digest": "sha256", 00:16:49.906 "dhgroup": "ffdhe6144" 00:16:49.906 } 00:16:49.906 } 00:16:49.906 ]' 00:16:49.906 18:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:49.906 18:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:49.906 18:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:50.163 18:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:50.164 18:11:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:50.164 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:50.164 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:50.164 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:50.164 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:OTYzMTliOTdkNjdiM2NkYWU2YmE4N2EyNjY0ZjgzYmEzqmAO: --dhchap-ctrl-secret DHHC-1:02:ZWZlM2NhY2IyN2ZhYzcwZDM0MzJlMzJjNTk2ZGNmY2ZjMjZjMTgyMWJlZjZlNmIyxRA4HA==: 00:16:50.731 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:50.731 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:50.731 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:50.731 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.731 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.731 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.731 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:50.731 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:50.731 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:50.989 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:16:50.989 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:50.989 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:50.989 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:50.989 18:11:43 
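connect_authenticate itself does the authorize / attach / verify / teardown dance each pass. A skeleton reconstructed from the xtrace tags (auth.sh@34-49), not the verbatim script; hostnqn stands in for the uuid-based host NQN above, and the verification half is elided:

    connect_authenticate() {
        local digest=$1 dhgroup=$2 key=key$3
        local ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
        # Authorize the host on the target with the key under test...
        rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
            --dhchap-key "$key" "${ckey[@]}"
        # ...then attach from the SPDK initiator using the same key names.
        hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
            -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key "$key" "${ckey[@]}"
        # qpair checks and detach/disconnect/remove_host follow, as in the log
    }
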
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:50.989 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:50.989 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:50.989 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.989 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.989 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.989 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:50.989 18:11:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:51.247 00:16:51.247 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:51.247 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:51.247 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:51.506 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.506 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:51.506 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.506 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.506 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.506 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:51.506 { 00:16:51.506 "cntlid": 37, 00:16:51.506 "qid": 0, 00:16:51.506 "state": "enabled", 00:16:51.506 "thread": "nvmf_tgt_poll_group_000", 00:16:51.506 "listen_address": { 00:16:51.506 "trtype": "TCP", 00:16:51.506 "adrfam": "IPv4", 00:16:51.506 "traddr": "10.0.0.2", 00:16:51.506 "trsvcid": "4420" 00:16:51.506 }, 00:16:51.506 "peer_address": { 00:16:51.506 "trtype": "TCP", 00:16:51.506 "adrfam": "IPv4", 00:16:51.506 "traddr": "10.0.0.1", 00:16:51.506 "trsvcid": "45880" 00:16:51.506 }, 00:16:51.506 "auth": { 00:16:51.506 "state": "completed", 00:16:51.506 "digest": "sha256", 00:16:51.506 "dhgroup": "ffdhe6144" 00:16:51.506 } 00:16:51.506 } 00:16:51.506 ]' 00:16:51.506 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:51.506 18:11:44 
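Worth noting: key0..key3 and ckey0..ckey2 are key names, not secrets. Both SPDK applications resolve them through their keyrings, which were populated before this excerpt begins, presumably with something like the following (names and paths here are illustrative, not taken from this log):

    # Hypothetical registration; keyring_file_add_key takes a key name and a file path.
    scripts/rpc.py keyring_file_add_key key1 /tmp/key1.txt                         # target app
    scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/key1.txt   # host app
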
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:51.506 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:51.506 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:51.506 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:51.764 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:51.764 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:51.764 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.764 18:11:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:YjA2Yzc4ZjdjZTJiMzQ4ZjJhMGQ3ZmRhMmZkZDIyYjAyYmMxY2JkOGRhN2MwZThj4zGs4w==: --dhchap-ctrl-secret DHHC-1:01:NmYyMzI5ZGFlODRhMmZiMWRhMzQ5ZTc1NTdlYjAzYmbkqxC8: 00:16:52.330 18:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:52.330 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:52.330 18:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:52.330 18:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.330 18:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.330 18:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.330 18:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:52.330 18:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:52.330 18:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:52.588 18:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:16:52.588 18:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:52.588 18:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:52.588 18:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:52.588 18:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:52.588 18:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:52.588 18:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:16:52.588 18:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.588 18:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.588 18:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.588 18:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:52.588 18:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:52.846 00:16:52.846 18:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:52.846 18:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:52.846 18:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:53.104 18:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.104 18:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:53.104 18:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.104 18:11:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.104 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.104 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:53.104 { 00:16:53.104 "cntlid": 39, 00:16:53.104 "qid": 0, 00:16:53.104 "state": "enabled", 00:16:53.104 "thread": "nvmf_tgt_poll_group_000", 00:16:53.104 "listen_address": { 00:16:53.104 "trtype": "TCP", 00:16:53.104 "adrfam": "IPv4", 00:16:53.104 "traddr": "10.0.0.2", 00:16:53.104 "trsvcid": "4420" 00:16:53.104 }, 00:16:53.104 "peer_address": { 00:16:53.104 "trtype": "TCP", 00:16:53.104 "adrfam": "IPv4", 00:16:53.104 "traddr": "10.0.0.1", 00:16:53.104 "trsvcid": "45906" 00:16:53.104 }, 00:16:53.104 "auth": { 00:16:53.104 "state": "completed", 00:16:53.104 "digest": "sha256", 00:16:53.104 "dhgroup": "ffdhe6144" 00:16:53.105 } 00:16:53.105 } 00:16:53.105 ]' 00:16:53.105 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:53.105 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:53.105 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:53.105 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:53.105 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
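Note what is absent in the key3 passes: nvmf_subsystem_add_host and the attach carry no --dhchap-ctrlr-key, because ckeys has no entry at index 3, so the ${ckeys[$3]:+...} expansion collapses to nothing and the pass exercises unidirectional authentication (the host is challenged, the controller is not). The mechanics in isolation, with illustrative values:

    # ${var:+word} yields word only when var is set and non-empty.
    ckeys=(ckey0 ckey1 ckey2)                          # index 3 deliberately absent
    ckey=(${ckeys[3]:+--dhchap-ctrlr-key "ckey3"})
    echo "${#ckey[@]}"                                 # 0 -> no controller challenge
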
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:53.105 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:53.105 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:53.105 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:53.363 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:NDdkYTZhMzkxOGIxYTcyMjc1Njc5NzdmYzBmYjNjMWRiZjJiN2RiNzg0MGJiOTRhM2I3YWI0YzBhY2U5NWY0YpKo2wI=: 00:16:53.927 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:53.927 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:53.927 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:53.927 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.927 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.927 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.927 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:53.927 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:53.927 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:53.927 18:11:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:54.185 18:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:16:54.185 18:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:54.185 18:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:54.185 18:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:54.185 18:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:54.185 18:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:54.185 18:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:54.185 18:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.185 18:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
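Between passes the state is torn down symmetrically, so each key/dhgroup combination starts from a clean slate. Condensed from the trace:

    hostrpc bdev_nvme_detach_controller nvme0       # drop the SPDK-initiator controller
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0   # and the kernel-initiator one
    rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562
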
common/autotest_common.sh@10 -- # set +x 00:16:54.185 18:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.185 18:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:54.185 18:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:54.442 00:16:54.442 18:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:54.442 18:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:54.442 18:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:54.699 18:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.699 18:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:54.699 18:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.699 18:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.699 18:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.699 18:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:54.699 { 00:16:54.699 "cntlid": 41, 00:16:54.699 "qid": 0, 00:16:54.699 "state": "enabled", 00:16:54.699 "thread": "nvmf_tgt_poll_group_000", 00:16:54.699 "listen_address": { 00:16:54.699 "trtype": "TCP", 00:16:54.699 "adrfam": "IPv4", 00:16:54.699 "traddr": "10.0.0.2", 00:16:54.699 "trsvcid": "4420" 00:16:54.699 }, 00:16:54.699 "peer_address": { 00:16:54.699 "trtype": "TCP", 00:16:54.699 "adrfam": "IPv4", 00:16:54.699 "traddr": "10.0.0.1", 00:16:54.699 "trsvcid": "45934" 00:16:54.699 }, 00:16:54.699 "auth": { 00:16:54.699 "state": "completed", 00:16:54.699 "digest": "sha256", 00:16:54.699 "dhgroup": "ffdhe8192" 00:16:54.699 } 00:16:54.699 } 00:16:54.699 ]' 00:16:54.699 18:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:54.699 18:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:54.699 18:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:54.957 18:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:54.957 18:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:54.957 18:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:54.957 18:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:16:54.957 18:11:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.957 18:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:YTRmMGQ3Y2M0NzNjOWU2ZDRkY2E2NmJkYThmNzcwNjBmNTY3YzczNDZmMWYxNDIyepjEhg==: --dhchap-ctrl-secret DHHC-1:03:MmFlNzlkYWFjZWYzMjU5ZGNlMDAxMzUwZTU0M2VkNTc5NGNmZmQ1Nzc2NTA1MWZmZmExNDNmM2UwZGNlZThhMuRHrdo=: 00:16:55.523 18:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:55.523 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:55.523 18:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:55.523 18:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.523 18:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.523 18:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.523 18:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:55.523 18:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:55.523 18:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:55.780 18:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:16:55.780 18:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:55.780 18:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:55.780 18:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:55.780 18:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:55.780 18:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:55.780 18:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:55.780 18:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.780 18:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.780 18:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.780 18:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 
-a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:55.780 18:11:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:56.345 00:16:56.345 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:56.345 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:56.345 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:56.345 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.345 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:56.345 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.345 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.603 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.603 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:56.603 { 00:16:56.603 "cntlid": 43, 00:16:56.603 "qid": 0, 00:16:56.603 "state": "enabled", 00:16:56.603 "thread": "nvmf_tgt_poll_group_000", 00:16:56.603 "listen_address": { 00:16:56.603 "trtype": "TCP", 00:16:56.603 "adrfam": "IPv4", 00:16:56.603 "traddr": "10.0.0.2", 00:16:56.603 "trsvcid": "4420" 00:16:56.603 }, 00:16:56.603 "peer_address": { 00:16:56.603 "trtype": "TCP", 00:16:56.603 "adrfam": "IPv4", 00:16:56.603 "traddr": "10.0.0.1", 00:16:56.603 "trsvcid": "45970" 00:16:56.603 }, 00:16:56.603 "auth": { 00:16:56.603 "state": "completed", 00:16:56.603 "digest": "sha256", 00:16:56.603 "dhgroup": "ffdhe8192" 00:16:56.603 } 00:16:56.603 } 00:16:56.603 ]' 00:16:56.603 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:56.603 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:56.603 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:56.603 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:56.603 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:56.603 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:56.603 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:56.603 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:56.861 18:11:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
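Every hostrpc line in the trace expands (per the auth.sh@31 tag) into the same rpc.py call against /var/tmp/host.sock, i.e. a second SPDK application acting as initiator, while target-side rpc_cmd goes to the default socket. The wrapper is effectively:

    hostrpc() {
        # auth.sh@31: forward to the initiator-side SPDK app via its private RPC socket
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/host.sock "$@"
    }
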
target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:OTYzMTliOTdkNjdiM2NkYWU2YmE4N2EyNjY0ZjgzYmEzqmAO: --dhchap-ctrl-secret DHHC-1:02:ZWZlM2NhY2IyN2ZhYzcwZDM0MzJlMzJjNTk2ZGNmY2ZjMjZjMTgyMWJlZjZlNmIyxRA4HA==: 00:16:57.428 18:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:57.428 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:57.428 18:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:57.428 18:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.428 18:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.428 18:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.428 18:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:57.428 18:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:57.428 18:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:57.428 18:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:16:57.428 18:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:57.428 18:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:57.428 18:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:57.428 18:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:57.428 18:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:57.428 18:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:57.428 18:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.428 18:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.428 18:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.428 18:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:57.428 18:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:58.021 00:16:58.021 18:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:58.021 18:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:58.021 18:11:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.278 18:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.278 18:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:58.278 18:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.278 18:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.278 18:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.278 18:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:58.278 { 00:16:58.278 "cntlid": 45, 00:16:58.278 "qid": 0, 00:16:58.278 "state": "enabled", 00:16:58.278 "thread": "nvmf_tgt_poll_group_000", 00:16:58.278 "listen_address": { 00:16:58.278 "trtype": "TCP", 00:16:58.278 "adrfam": "IPv4", 00:16:58.278 "traddr": "10.0.0.2", 00:16:58.278 "trsvcid": "4420" 00:16:58.278 }, 00:16:58.278 "peer_address": { 00:16:58.278 "trtype": "TCP", 00:16:58.278 "adrfam": "IPv4", 00:16:58.278 "traddr": "10.0.0.1", 00:16:58.278 "trsvcid": "51936" 00:16:58.278 }, 00:16:58.278 "auth": { 00:16:58.278 "state": "completed", 00:16:58.278 "digest": "sha256", 00:16:58.278 "dhgroup": "ffdhe8192" 00:16:58.278 } 00:16:58.278 } 00:16:58.278 ]' 00:16:58.278 18:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:58.278 18:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:58.278 18:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:58.278 18:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:58.278 18:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:58.278 18:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:58.278 18:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.278 18:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:58.536 18:11:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:YjA2Yzc4ZjdjZTJiMzQ4ZjJhMGQ3ZmRhMmZkZDIyYjAyYmMxY2JkOGRhN2MwZThj4zGs4w==: --dhchap-ctrl-secret 
DHHC-1:01:NmYyMzI5ZGFlODRhMmZiMWRhMzQ5ZTc1NTdlYjAzYmbkqxC8: 00:16:59.102 18:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:59.102 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:59.102 18:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:16:59.102 18:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.102 18:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.102 18:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.102 18:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:59.102 18:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:59.102 18:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:59.360 18:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:16:59.360 18:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:59.360 18:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:59.360 18:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:59.361 18:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:59.361 18:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:59.361 18:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:16:59.361 18:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.361 18:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.361 18:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.361 18:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:59.361 18:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:59.619 00:16:59.877 18:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:59.877 18:11:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:59.877 18:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:59.877 18:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.877 18:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:59.877 18:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.877 18:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.877 18:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.877 18:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:59.877 { 00:16:59.877 "cntlid": 47, 00:16:59.877 "qid": 0, 00:16:59.877 "state": "enabled", 00:16:59.877 "thread": "nvmf_tgt_poll_group_000", 00:16:59.877 "listen_address": { 00:16:59.877 "trtype": "TCP", 00:16:59.877 "adrfam": "IPv4", 00:16:59.877 "traddr": "10.0.0.2", 00:16:59.877 "trsvcid": "4420" 00:16:59.877 }, 00:16:59.877 "peer_address": { 00:16:59.877 "trtype": "TCP", 00:16:59.877 "adrfam": "IPv4", 00:16:59.877 "traddr": "10.0.0.1", 00:16:59.877 "trsvcid": "51970" 00:16:59.877 }, 00:16:59.877 "auth": { 00:16:59.877 "state": "completed", 00:16:59.877 "digest": "sha256", 00:16:59.877 "dhgroup": "ffdhe8192" 00:16:59.877 } 00:16:59.877 } 00:16:59.877 ]' 00:16:59.877 18:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:59.877 18:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:59.877 18:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:00.135 18:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:00.135 18:11:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:00.135 18:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:00.135 18:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:00.135 18:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:00.135 18:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:NDdkYTZhMzkxOGIxYTcyMjc1Njc5NzdmYzBmYjNjMWRiZjJiN2RiNzg0MGJiOTRhM2I3YWI0YzBhY2U5NWY0YpKo2wI=: 00:17:00.701 18:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:00.701 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:00.701 18:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562
00:17:00.701 18:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:00.701 18:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:00.701 18:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:00.701 18:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}"
00:17:00.701 18:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:17:00.701 18:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:17:00.701 18:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:17:00.701 18:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:17:00.959 18:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0
00:17:00.959 18:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:17:00.959 18:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:17:00.959 18:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:17:00.959 18:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:17:00.959 18:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:00.959 18:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:00.959 18:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:00.959 18:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:00.959 18:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:00.959 18:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:00.959 18:11:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:01.218
00:17:01.218 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:17:01.218 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
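
The three for-loops traced above (target/auth.sh@91-93) drive this entire section: every block that follows is one more pass of connect_authenticate for a single (digest, dhgroup, keyid) combination, with bdev_nvme_set_options first pinning the host driver to that one digest and DH group. A minimal sketch of that driver loop, reconstructed from the trace; the digests and dhgroups arrays are assumptions inferred from the combinations that appear in this log, keys/ckeys are the key entries registered earlier in the test (outside this excerpt), and hostrpc is the test's own wrapper around rpc.py -s /var/tmp/host.sock, as expanded at target/auth.sh@31:

    # hedged reconstruction of the driver loop at target/auth.sh@91-96
    digests=(sha256 sha384 sha512)                                    # inferred; the sha256 passes precede this excerpt
    dhgroups=(null ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192) # inferred from the passes in this log
    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do
                # force the host driver to negotiate exactly this digest/dhgroup pair
                hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done
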
00:17:01.218 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:17:01.477 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:01.477 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:01.477 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:01.477 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:01.477 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:01.477 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:17:01.477 {
00:17:01.477 "cntlid": 49,
00:17:01.477 "qid": 0,
00:17:01.477 "state": "enabled",
00:17:01.477 "thread": "nvmf_tgt_poll_group_000",
00:17:01.477 "listen_address": {
00:17:01.477 "trtype": "TCP",
00:17:01.477 "adrfam": "IPv4",
00:17:01.477 "traddr": "10.0.0.2",
00:17:01.477 "trsvcid": "4420"
00:17:01.477 },
00:17:01.477 "peer_address": {
00:17:01.477 "trtype": "TCP",
00:17:01.477 "adrfam": "IPv4",
00:17:01.477 "traddr": "10.0.0.1",
00:17:01.477 "trsvcid": "51990"
00:17:01.477 },
00:17:01.477 "auth": {
00:17:01.477 "state": "completed",
00:17:01.477 "digest": "sha384",
00:17:01.477 "dhgroup": "null"
00:17:01.477 }
00:17:01.477 }
00:17:01.477 ]'
00:17:01.477 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:17:01.477 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:17:01.477 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:17:01.477 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]]
00:17:01.477 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:17:01.477 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:01.477 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:01.477 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:01.735 18:11:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:YTRmMGQ3Y2M0NzNjOWU2ZDRkY2E2NmJkYThmNzcwNjBmNTY3YzczNDZmMWYxNDIyepjEhg==: --dhchap-ctrl-secret DHHC-1:03:MmFlNzlkYWFjZWYzMjU5ZGNlMDAxMzUwZTU0M2VkNTc5NGNmZmQ1Nzc2NTA1MWZmZmExNDNmM2UwZGNlZThhMuRHrdo=:
00:17:02.302 18:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:02.302 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:02.302 18:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562
00:17:02.302 18:11:55
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.302 18:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.302 18:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.302 18:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:02.302 18:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:02.302 18:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:02.561 18:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:17:02.561 18:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:02.561 18:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:02.561 18:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:02.561 18:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:02.561 18:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:02.561 18:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:02.561 18:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.561 18:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.561 18:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.561 18:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:02.561 18:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:02.561 00:17:02.561 18:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:02.561 18:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:02.561 18:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:02.820 18:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.820 18:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # 
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:02.820 18:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.820 18:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.820 18:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.820 18:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:02.820 { 00:17:02.820 "cntlid": 51, 00:17:02.820 "qid": 0, 00:17:02.820 "state": "enabled", 00:17:02.820 "thread": "nvmf_tgt_poll_group_000", 00:17:02.820 "listen_address": { 00:17:02.820 "trtype": "TCP", 00:17:02.820 "adrfam": "IPv4", 00:17:02.820 "traddr": "10.0.0.2", 00:17:02.820 "trsvcid": "4420" 00:17:02.820 }, 00:17:02.820 "peer_address": { 00:17:02.820 "trtype": "TCP", 00:17:02.820 "adrfam": "IPv4", 00:17:02.820 "traddr": "10.0.0.1", 00:17:02.820 "trsvcid": "52020" 00:17:02.821 }, 00:17:02.821 "auth": { 00:17:02.821 "state": "completed", 00:17:02.821 "digest": "sha384", 00:17:02.821 "dhgroup": "null" 00:17:02.821 } 00:17:02.821 } 00:17:02.821 ]' 00:17:02.821 18:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:02.821 18:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:02.821 18:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:02.821 18:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:02.821 18:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:03.079 18:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:03.079 18:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:03.079 18:11:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:03.079 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:OTYzMTliOTdkNjdiM2NkYWU2YmE4N2EyNjY0ZjgzYmEzqmAO: --dhchap-ctrl-secret DHHC-1:02:ZWZlM2NhY2IyN2ZhYzcwZDM0MzJlMzJjNTk2ZGNmY2ZjMjZjMTgyMWJlZjZlNmIyxRA4HA==: 00:17:03.645 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:03.645 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:03.645 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:17:03.645 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.645 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.645 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.645 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:03.645 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:03.645 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:03.903 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:17:03.903 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:03.903 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:03.903 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:03.903 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:03.903 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:03.903 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:03.903 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.903 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.903 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.903 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:03.903 18:11:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:04.162 00:17:04.162 18:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:04.162 18:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.162 18:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:04.420 18:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.420 18:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:04.420 18:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.420 18:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.420 18:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:17:04.420 18:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:04.420 { 00:17:04.420 "cntlid": 53, 00:17:04.420 "qid": 0, 00:17:04.420 "state": "enabled", 00:17:04.420 "thread": "nvmf_tgt_poll_group_000", 00:17:04.420 "listen_address": { 00:17:04.420 "trtype": "TCP", 00:17:04.420 "adrfam": "IPv4", 00:17:04.420 "traddr": "10.0.0.2", 00:17:04.420 "trsvcid": "4420" 00:17:04.420 }, 00:17:04.420 "peer_address": { 00:17:04.420 "trtype": "TCP", 00:17:04.420 "adrfam": "IPv4", 00:17:04.420 "traddr": "10.0.0.1", 00:17:04.420 "trsvcid": "52052" 00:17:04.420 }, 00:17:04.420 "auth": { 00:17:04.420 "state": "completed", 00:17:04.420 "digest": "sha384", 00:17:04.420 "dhgroup": "null" 00:17:04.420 } 00:17:04.420 } 00:17:04.420 ]' 00:17:04.420 18:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:04.420 18:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:04.420 18:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:04.420 18:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:04.420 18:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:04.420 18:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:04.420 18:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:04.420 18:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:04.679 18:11:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:YjA2Yzc4ZjdjZTJiMzQ4ZjJhMGQ3ZmRhMmZkZDIyYjAyYmMxY2JkOGRhN2MwZThj4zGs4w==: --dhchap-ctrl-secret DHHC-1:01:NmYyMzI5ZGFlODRhMmZiMWRhMzQ5ZTc1NTdlYjAzYmbkqxC8: 00:17:05.246 18:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:05.246 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:05.246 18:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:17:05.246 18:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.246 18:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.246 18:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.246 18:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:05.246 18:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:05.246 18:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:05.246 18:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:17:05.246 18:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:05.246 18:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:05.246 18:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:05.246 18:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:05.246 18:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:05.246 18:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:17:05.246 18:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.246 18:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.246 18:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.246 18:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:05.246 18:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:05.505 00:17:05.505 18:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:05.505 18:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:05.505 18:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:05.764 18:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.764 18:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:05.764 18:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.764 18:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.764 18:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.764 18:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:05.764 { 00:17:05.764 "cntlid": 55, 00:17:05.764 "qid": 0, 00:17:05.764 "state": "enabled", 00:17:05.764 "thread": "nvmf_tgt_poll_group_000", 00:17:05.764 "listen_address": { 00:17:05.764 "trtype": "TCP", 00:17:05.764 "adrfam": "IPv4", 00:17:05.764 "traddr": "10.0.0.2", 00:17:05.764 "trsvcid": "4420" 00:17:05.764 }, 00:17:05.764 "peer_address": { 
00:17:05.764 "trtype": "TCP", 00:17:05.764 "adrfam": "IPv4", 00:17:05.764 "traddr": "10.0.0.1", 00:17:05.764 "trsvcid": "52070" 00:17:05.764 }, 00:17:05.764 "auth": { 00:17:05.764 "state": "completed", 00:17:05.764 "digest": "sha384", 00:17:05.764 "dhgroup": "null" 00:17:05.764 } 00:17:05.764 } 00:17:05.764 ]' 00:17:05.764 18:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:05.764 18:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:05.764 18:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:05.764 18:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:05.764 18:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:05.764 18:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:05.764 18:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:05.764 18:11:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:06.022 18:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:NDdkYTZhMzkxOGIxYTcyMjc1Njc5NzdmYzBmYjNjMWRiZjJiN2RiNzg0MGJiOTRhM2I3YWI0YzBhY2U5NWY0YpKo2wI=: 00:17:06.590 18:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:06.590 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:06.590 18:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:17:06.590 18:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.590 18:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.590 18:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.590 18:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:06.590 18:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:06.590 18:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:06.591 18:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:06.849 18:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:17:06.849 18:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:06.849 18:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # digest=sha384 00:17:06.849 18:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:06.849 18:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:06.849 18:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:06.849 18:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:06.849 18:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.849 18:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.849 18:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.849 18:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:06.849 18:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:07.108 00:17:07.108 18:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:07.108 18:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:07.108 18:11:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.108 18:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.108 18:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:07.108 18:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.108 18:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.108 18:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.108 18:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:07.108 { 00:17:07.108 "cntlid": 57, 00:17:07.108 "qid": 0, 00:17:07.108 "state": "enabled", 00:17:07.108 "thread": "nvmf_tgt_poll_group_000", 00:17:07.108 "listen_address": { 00:17:07.108 "trtype": "TCP", 00:17:07.108 "adrfam": "IPv4", 00:17:07.108 "traddr": "10.0.0.2", 00:17:07.108 "trsvcid": "4420" 00:17:07.108 }, 00:17:07.108 "peer_address": { 00:17:07.108 "trtype": "TCP", 00:17:07.108 "adrfam": "IPv4", 00:17:07.108 "traddr": "10.0.0.1", 00:17:07.108 "trsvcid": "44514" 00:17:07.108 }, 00:17:07.108 "auth": { 00:17:07.108 "state": "completed", 00:17:07.108 "digest": "sha384", 00:17:07.108 "dhgroup": "ffdhe2048" 00:17:07.108 } 00:17:07.108 } 00:17:07.108 ]' 
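
The qpairs dump above is the assertion point of each pass: rather than only checking that the attach succeeded, the test asks the target which digest and DH group the completed DH-HMAC-CHAP negotiation actually used, and compares them with what it just configured. A condensed sketch of that verification, using the same RPCs and jq paths as the trace (the host-side socket /var/tmp/host.sock is the one shown above; that the target side listens on SPDK's default RPC socket is an assumption):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    subnqn=nqn.2024-03.io.spdk:cnode0

    # host side: the controller attached as -b nvme0 must be listed by name
    [[ $("$SPDK/scripts/rpc.py" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

    # target side: the qpair's auth block records what the handshake negotiated
    qpairs=$("$SPDK/scripts/rpc.py" nvmf_subsystem_get_qpairs "$subnqn")
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]

jq -r prints the bare strings, so the [[ ]] comparisons match the unquoted values seen in the dump; the expected digest and dhgroup are whatever the current pass configured (sha384/ffdhe2048 at this point in the trace).
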
00:17:07.108 18:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:07.367 18:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:07.367 18:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:07.367 18:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:07.367 18:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:07.367 18:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:07.367 18:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:07.367 18:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:07.625 18:12:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:YTRmMGQ3Y2M0NzNjOWU2ZDRkY2E2NmJkYThmNzcwNjBmNTY3YzczNDZmMWYxNDIyepjEhg==: --dhchap-ctrl-secret DHHC-1:03:MmFlNzlkYWFjZWYzMjU5ZGNlMDAxMzUwZTU0M2VkNTc5NGNmZmQ1Nzc2NTA1MWZmZmExNDNmM2UwZGNlZThhMuRHrdo=: 00:17:08.191 18:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:08.191 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:08.191 18:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:17:08.191 18:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.191 18:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.191 18:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.191 18:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:08.191 18:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:08.191 18:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:08.191 18:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:17:08.191 18:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:08.191 18:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:08.191 18:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:08.191 18:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:08.191 18:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:08.191 18:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:08.191 18:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.191 18:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.191 18:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.191 18:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:08.191 18:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:08.449 00:17:08.449 18:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:08.449 18:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:08.449 18:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:08.708 18:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.708 18:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:08.708 18:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.708 18:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.708 18:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.708 18:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:08.708 { 00:17:08.708 "cntlid": 59, 00:17:08.708 "qid": 0, 00:17:08.708 "state": "enabled", 00:17:08.708 "thread": "nvmf_tgt_poll_group_000", 00:17:08.708 "listen_address": { 00:17:08.708 "trtype": "TCP", 00:17:08.708 "adrfam": "IPv4", 00:17:08.708 "traddr": "10.0.0.2", 00:17:08.708 "trsvcid": "4420" 00:17:08.708 }, 00:17:08.708 "peer_address": { 00:17:08.708 "trtype": "TCP", 00:17:08.708 "adrfam": "IPv4", 00:17:08.708 "traddr": "10.0.0.1", 00:17:08.708 "trsvcid": "44558" 00:17:08.708 }, 00:17:08.708 "auth": { 00:17:08.708 "state": "completed", 00:17:08.708 "digest": "sha384", 00:17:08.708 "dhgroup": "ffdhe2048" 00:17:08.708 } 00:17:08.708 } 00:17:08.708 ]' 00:17:08.708 18:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:08.708 18:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:08.708 18:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:08.966 18:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:08.966 18:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:08.966 18:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:08.966 18:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:08.966 18:12:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:08.966 18:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:OTYzMTliOTdkNjdiM2NkYWU2YmE4N2EyNjY0ZjgzYmEzqmAO: --dhchap-ctrl-secret DHHC-1:02:ZWZlM2NhY2IyN2ZhYzcwZDM0MzJlMzJjNTk2ZGNmY2ZjMjZjMTgyMWJlZjZlNmIyxRA4HA==: 00:17:09.533 18:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:09.533 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:09.533 18:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:17:09.533 18:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.533 18:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.533 18:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.533 18:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:09.533 18:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:09.533 18:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:09.792 18:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:17:09.792 18:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:09.792 18:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:09.792 18:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:09.792 18:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:09.792 18:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:09.792 18:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:09.792 
18:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.792 18:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.792 18:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.792 18:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:09.792 18:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:10.050 00:17:10.050 18:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:10.050 18:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:10.050 18:12:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:10.308 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.308 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:10.308 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.308 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.308 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.308 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:10.308 { 00:17:10.308 "cntlid": 61, 00:17:10.308 "qid": 0, 00:17:10.308 "state": "enabled", 00:17:10.308 "thread": "nvmf_tgt_poll_group_000", 00:17:10.308 "listen_address": { 00:17:10.308 "trtype": "TCP", 00:17:10.308 "adrfam": "IPv4", 00:17:10.308 "traddr": "10.0.0.2", 00:17:10.308 "trsvcid": "4420" 00:17:10.308 }, 00:17:10.308 "peer_address": { 00:17:10.308 "trtype": "TCP", 00:17:10.308 "adrfam": "IPv4", 00:17:10.308 "traddr": "10.0.0.1", 00:17:10.308 "trsvcid": "44572" 00:17:10.308 }, 00:17:10.308 "auth": { 00:17:10.308 "state": "completed", 00:17:10.308 "digest": "sha384", 00:17:10.308 "dhgroup": "ffdhe2048" 00:17:10.308 } 00:17:10.308 } 00:17:10.308 ]' 00:17:10.308 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:10.308 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:10.308 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:10.308 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:10.308 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:10.308 18:12:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:10.308 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:10.308 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:10.565 18:12:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:YjA2Yzc4ZjdjZTJiMzQ4ZjJhMGQ3ZmRhMmZkZDIyYjAyYmMxY2JkOGRhN2MwZThj4zGs4w==: --dhchap-ctrl-secret DHHC-1:01:NmYyMzI5ZGFlODRhMmZiMWRhMzQ5ZTc1NTdlYjAzYmbkqxC8: 00:17:11.130 18:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:11.130 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:11.130 18:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:17:11.130 18:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.130 18:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.130 18:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.130 18:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:11.130 18:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:11.130 18:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:11.130 18:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:17:11.130 18:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:11.130 18:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:11.130 18:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:11.130 18:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:11.130 18:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:11.130 18:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:17:11.130 18:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.130 18:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.130 18:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.130 
18:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:11.130 18:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:11.389 00:17:11.389 18:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:11.389 18:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:11.389 18:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.647 18:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.647 18:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:11.647 18:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.647 18:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.647 18:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.647 18:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:11.647 { 00:17:11.647 "cntlid": 63, 00:17:11.647 "qid": 0, 00:17:11.647 "state": "enabled", 00:17:11.647 "thread": "nvmf_tgt_poll_group_000", 00:17:11.647 "listen_address": { 00:17:11.647 "trtype": "TCP", 00:17:11.647 "adrfam": "IPv4", 00:17:11.647 "traddr": "10.0.0.2", 00:17:11.647 "trsvcid": "4420" 00:17:11.647 }, 00:17:11.647 "peer_address": { 00:17:11.647 "trtype": "TCP", 00:17:11.647 "adrfam": "IPv4", 00:17:11.647 "traddr": "10.0.0.1", 00:17:11.647 "trsvcid": "44610" 00:17:11.647 }, 00:17:11.647 "auth": { 00:17:11.647 "state": "completed", 00:17:11.647 "digest": "sha384", 00:17:11.647 "dhgroup": "ffdhe2048" 00:17:11.647 } 00:17:11.647 } 00:17:11.647 ]' 00:17:11.647 18:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:11.647 18:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:11.647 18:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:11.647 18:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:11.647 18:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:11.906 18:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:11.906 18:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:11.906 18:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:17:11.906 18:12:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:NDdkYTZhMzkxOGIxYTcyMjc1Njc5NzdmYzBmYjNjMWRiZjJiN2RiNzg0MGJiOTRhM2I3YWI0YzBhY2U5NWY0YpKo2wI=: 00:17:12.473 18:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:12.473 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:12.473 18:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:17:12.473 18:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.473 18:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.473 18:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.473 18:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:12.473 18:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:12.473 18:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:12.473 18:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:12.732 18:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:17:12.732 18:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:12.732 18:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:12.732 18:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:12.732 18:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:12.732 18:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:12.732 18:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:12.732 18:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.732 18:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.732 18:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.732 18:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:12.732 18:12:05 
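Within connect_authenticate, authentication is exercised first at the SPDK RPC level: the target registers the host NQN together with its DH-CHAP key, and the host-side bdev layer then attaches with the matching key, so a successful bdev_nvme_attach_controller is itself the pass/fail check. A sketch using only calls visible in the trace ($subnqn and $hostnqn stand for nqn.2024-03.io.spdk:cnode0 and the uuid-based host NQN; key0/ckey0 are keyring names set up earlier in the run, outside this excerpt):

    # Target side: authorize the host, authenticated with key0/ckey0.
    rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # Host side: attach over TCP; DH-HMAC-CHAP runs during the CONNECT exchange.
    hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0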
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:12.991 00:17:12.991 18:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:12.991 18:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:12.991 18:12:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.250 18:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.250 18:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:13.250 18:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.250 18:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.250 18:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.250 18:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:13.250 { 00:17:13.250 "cntlid": 65, 00:17:13.250 "qid": 0, 00:17:13.250 "state": "enabled", 00:17:13.250 "thread": "nvmf_tgt_poll_group_000", 00:17:13.250 "listen_address": { 00:17:13.250 "trtype": "TCP", 00:17:13.250 "adrfam": "IPv4", 00:17:13.250 "traddr": "10.0.0.2", 00:17:13.250 "trsvcid": "4420" 00:17:13.250 }, 00:17:13.250 "peer_address": { 00:17:13.250 "trtype": "TCP", 00:17:13.250 "adrfam": "IPv4", 00:17:13.250 "traddr": "10.0.0.1", 00:17:13.250 "trsvcid": "44630" 00:17:13.250 }, 00:17:13.250 "auth": { 00:17:13.250 "state": "completed", 00:17:13.250 "digest": "sha384", 00:17:13.250 "dhgroup": "ffdhe3072" 00:17:13.250 } 00:17:13.250 } 00:17:13.250 ]' 00:17:13.250 18:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:13.250 18:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:13.250 18:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:13.250 18:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:13.250 18:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:13.250 18:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:13.250 18:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:13.250 18:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:13.509 18:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 
803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:YTRmMGQ3Y2M0NzNjOWU2ZDRkY2E2NmJkYThmNzcwNjBmNTY3YzczNDZmMWYxNDIyepjEhg==: --dhchap-ctrl-secret DHHC-1:03:MmFlNzlkYWFjZWYzMjU5ZGNlMDAxMzUwZTU0M2VkNTc5NGNmZmQ1Nzc2NTA1MWZmZmExNDNmM2UwZGNlZThhMuRHrdo=: 00:17:14.075 18:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:14.075 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:14.075 18:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:17:14.075 18:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.075 18:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.075 18:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.075 18:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:14.075 18:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:14.075 18:12:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:14.075 18:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:17:14.333 18:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:14.333 18:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:14.333 18:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:14.333 18:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:14.333 18:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:14.333 18:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:14.333 18:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.333 18:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.333 18:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.333 18:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:14.333 18:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:14.333 00:17:14.333 18:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:14.591 18:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:14.591 18:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:14.591 18:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.591 18:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:14.591 18:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.591 18:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.591 18:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.591 18:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:14.591 { 00:17:14.591 "cntlid": 67, 00:17:14.591 "qid": 0, 00:17:14.591 "state": "enabled", 00:17:14.591 "thread": "nvmf_tgt_poll_group_000", 00:17:14.591 "listen_address": { 00:17:14.591 "trtype": "TCP", 00:17:14.591 "adrfam": "IPv4", 00:17:14.591 "traddr": "10.0.0.2", 00:17:14.591 "trsvcid": "4420" 00:17:14.591 }, 00:17:14.591 "peer_address": { 00:17:14.591 "trtype": "TCP", 00:17:14.591 "adrfam": "IPv4", 00:17:14.591 "traddr": "10.0.0.1", 00:17:14.591 "trsvcid": "44662" 00:17:14.591 }, 00:17:14.591 "auth": { 00:17:14.591 "state": "completed", 00:17:14.591 "digest": "sha384", 00:17:14.591 "dhgroup": "ffdhe3072" 00:17:14.591 } 00:17:14.591 } 00:17:14.591 ]' 00:17:14.591 18:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:14.591 18:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:14.591 18:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:14.591 18:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:14.591 18:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:14.850 18:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:14.850 18:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:14.850 18:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:14.850 18:12:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:OTYzMTliOTdkNjdiM2NkYWU2YmE4N2EyNjY0ZjgzYmEzqmAO: --dhchap-ctrl-secret DHHC-1:02:ZWZlM2NhY2IyN2ZhYzcwZDM0MzJlMzJjNTk2ZGNmY2ZjMjZjMTgyMWJlZjZlNmIyxRA4HA==: 00:17:15.416 18:12:08 
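The --dhchap-secret/--dhchap-ctrl-secret strings handed to nvme-cli follow the NVMe-oF DH-HMAC-CHAP shared-secret format, DHHC-1:<t>:<base64>:, where <t> names the hash used to transform the configured secret (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512) and the base64 payload is the key bytes followed by a 4-byte CRC-32 of the key. A quick plausibility check on one of the secrets from this log (a sketch; the length arithmetic assumes that key-plus-CRC layout):

    secret='DHHC-1:01:NmYyMzI5ZGFlODRhMmZiMWRhMzQ5ZTc1NTdlYjAzYmbkqxC8:'
    b64=${secret#DHHC-1:??:}   # strip the "DHHC-1:<t>:" prefix
    b64=${b64%:}               # ... and the trailing colon
    # 48 base64 chars -> 36 raw bytes: a 32-byte key plus the 4-byte CRC-32 trailer.
    printf '%s' "$b64" | base64 -d | wc -c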
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:15.416 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:15.416 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:17:15.416 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.416 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.416 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.416 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:15.416 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:15.416 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:15.675 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:17:15.675 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:15.675 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:15.675 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:15.675 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:15.675 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:15.675 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:15.675 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.675 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.675 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.675 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:15.675 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:15.934 00:17:15.934 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:15.934 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:17:15.934 18:12:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.192 18:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.192 18:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:16.192 18:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.192 18:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.192 18:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.192 18:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:16.192 { 00:17:16.192 "cntlid": 69, 00:17:16.192 "qid": 0, 00:17:16.192 "state": "enabled", 00:17:16.192 "thread": "nvmf_tgt_poll_group_000", 00:17:16.192 "listen_address": { 00:17:16.192 "trtype": "TCP", 00:17:16.192 "adrfam": "IPv4", 00:17:16.192 "traddr": "10.0.0.2", 00:17:16.192 "trsvcid": "4420" 00:17:16.192 }, 00:17:16.192 "peer_address": { 00:17:16.192 "trtype": "TCP", 00:17:16.192 "adrfam": "IPv4", 00:17:16.192 "traddr": "10.0.0.1", 00:17:16.192 "trsvcid": "44698" 00:17:16.192 }, 00:17:16.192 "auth": { 00:17:16.192 "state": "completed", 00:17:16.192 "digest": "sha384", 00:17:16.192 "dhgroup": "ffdhe3072" 00:17:16.192 } 00:17:16.192 } 00:17:16.192 ]' 00:17:16.192 18:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:16.193 18:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:16.193 18:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:16.193 18:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:16.193 18:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:16.193 18:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:16.193 18:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:16.193 18:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:16.451 18:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:YjA2Yzc4ZjdjZTJiMzQ4ZjJhMGQ3ZmRhMmZkZDIyYjAyYmMxY2JkOGRhN2MwZThj4zGs4w==: --dhchap-ctrl-secret DHHC-1:01:NmYyMzI5ZGFlODRhMmZiMWRhMzQ5ZTc1NTdlYjAzYmbkqxC8: 00:17:17.018 18:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:17.018 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:17.018 18:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:17:17.018 18:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.018 18:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.018 18:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.018 18:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:17.018 18:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:17.018 18:12:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:17.277 18:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:17:17.277 18:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:17.277 18:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:17.277 18:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:17.277 18:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:17.277 18:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:17.277 18:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:17:17.277 18:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.277 18:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.277 18:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.277 18:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:17.277 18:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:17.277 00:17:17.536 18:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:17.536 18:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:17.536 18:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:17.536 18:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.536 18:12:10 
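Note the asymmetry between key indexes: for key3 the ${ckeys[$3]:+...} expansion comes up empty, so the host is registered with --dhchap-key only and authentication is unidirectional (the host proves itself to the target), whereas key0-key2 also carry --dhchap-ctrlr-key, making the target authenticate back to the host. Both variants, as they appear in the trace:

    # Bidirectional: the host will also verify the controller's response.
    rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
    # Unidirectional: no controller key is registered for key3.
    rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key3

The nvme-cli side mirrors this: the DHHC-1:03 connects in this log pass only --dhchap-secret, with no --dhchap-ctrl-secret.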
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:17.536 18:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.536 18:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.536 18:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.536 18:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:17.536 { 00:17:17.536 "cntlid": 71, 00:17:17.536 "qid": 0, 00:17:17.536 "state": "enabled", 00:17:17.536 "thread": "nvmf_tgt_poll_group_000", 00:17:17.536 "listen_address": { 00:17:17.536 "trtype": "TCP", 00:17:17.536 "adrfam": "IPv4", 00:17:17.536 "traddr": "10.0.0.2", 00:17:17.536 "trsvcid": "4420" 00:17:17.536 }, 00:17:17.536 "peer_address": { 00:17:17.536 "trtype": "TCP", 00:17:17.536 "adrfam": "IPv4", 00:17:17.536 "traddr": "10.0.0.1", 00:17:17.536 "trsvcid": "47870" 00:17:17.536 }, 00:17:17.536 "auth": { 00:17:17.536 "state": "completed", 00:17:17.536 "digest": "sha384", 00:17:17.536 "dhgroup": "ffdhe3072" 00:17:17.536 } 00:17:17.536 } 00:17:17.536 ]' 00:17:17.536 18:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:17.536 18:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:17.536 18:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:17.795 18:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:17.795 18:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:17.795 18:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:17.795 18:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:17.795 18:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:17.795 18:12:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:NDdkYTZhMzkxOGIxYTcyMjc1Njc5NzdmYzBmYjNjMWRiZjJiN2RiNzg0MGJiOTRhM2I3YWI0YzBhY2U5NWY0YpKo2wI=: 00:17:18.363 18:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:18.363 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:18.363 18:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:17:18.363 18:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.363 18:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.363 18:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.363 18:12:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:18.363 18:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:18.363 18:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:18.363 18:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:18.622 18:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:17:18.622 18:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:18.622 18:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:18.622 18:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:18.622 18:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:18.622 18:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:18.622 18:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:18.622 18:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.622 18:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.622 18:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.622 18:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:18.622 18:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:18.880 00:17:18.880 18:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:18.880 18:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:18.880 18:12:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:19.139 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.139 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:19.139 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.139 18:12:12 
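Each attach is then verified from both ends before tearing down: the host must report the controller by name, and the target's qpair listing must show the admin queue with the negotiated parameters and auth.state of "completed". The assertions, condensed from the jq lines in the trace (rpc_cmd/hostrpc are the test's wrappers for the two RPC sockets):

    [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs "$subnqn")
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]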
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.139 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.139 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:19.139 { 00:17:19.139 "cntlid": 73, 00:17:19.139 "qid": 0, 00:17:19.139 "state": "enabled", 00:17:19.139 "thread": "nvmf_tgt_poll_group_000", 00:17:19.139 "listen_address": { 00:17:19.139 "trtype": "TCP", 00:17:19.139 "adrfam": "IPv4", 00:17:19.139 "traddr": "10.0.0.2", 00:17:19.139 "trsvcid": "4420" 00:17:19.139 }, 00:17:19.139 "peer_address": { 00:17:19.139 "trtype": "TCP", 00:17:19.139 "adrfam": "IPv4", 00:17:19.139 "traddr": "10.0.0.1", 00:17:19.139 "trsvcid": "47910" 00:17:19.139 }, 00:17:19.139 "auth": { 00:17:19.139 "state": "completed", 00:17:19.139 "digest": "sha384", 00:17:19.139 "dhgroup": "ffdhe4096" 00:17:19.139 } 00:17:19.139 } 00:17:19.139 ]' 00:17:19.139 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:19.139 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:19.139 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:19.139 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:19.139 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:19.139 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:19.139 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:19.139 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:19.398 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:YTRmMGQ3Y2M0NzNjOWU2ZDRkY2E2NmJkYThmNzcwNjBmNTY3YzczNDZmMWYxNDIyepjEhg==: --dhchap-ctrl-secret DHHC-1:03:MmFlNzlkYWFjZWYzMjU5ZGNlMDAxMzUwZTU0M2VkNTc5NGNmZmQ1Nzc2NTA1MWZmZmExNDNmM2UwZGNlZThhMuRHrdo=: 00:17:19.964 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.964 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.964 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:17:19.965 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.965 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.965 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.965 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:19.965 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:19.965 18:12:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:20.233 18:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:17:20.233 18:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:20.233 18:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:20.233 18:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:20.233 18:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:20.233 18:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:20.233 18:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:20.233 18:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.233 18:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.233 18:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.233 18:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:20.233 18:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:20.529 00:17:20.529 18:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:20.529 18:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:20.529 18:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.529 18:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.529 18:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:20.529 18:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.529 18:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.529 18:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.529 18:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 
00:17:20.529 { 00:17:20.529 "cntlid": 75, 00:17:20.529 "qid": 0, 00:17:20.529 "state": "enabled", 00:17:20.529 "thread": "nvmf_tgt_poll_group_000", 00:17:20.529 "listen_address": { 00:17:20.529 "trtype": "TCP", 00:17:20.529 "adrfam": "IPv4", 00:17:20.530 "traddr": "10.0.0.2", 00:17:20.530 "trsvcid": "4420" 00:17:20.530 }, 00:17:20.530 "peer_address": { 00:17:20.530 "trtype": "TCP", 00:17:20.530 "adrfam": "IPv4", 00:17:20.530 "traddr": "10.0.0.1", 00:17:20.530 "trsvcid": "47942" 00:17:20.530 }, 00:17:20.530 "auth": { 00:17:20.530 "state": "completed", 00:17:20.530 "digest": "sha384", 00:17:20.530 "dhgroup": "ffdhe4096" 00:17:20.530 } 00:17:20.530 } 00:17:20.530 ]' 00:17:20.530 18:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:20.530 18:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:20.530 18:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:20.810 18:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:20.810 18:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:20.810 18:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:20.810 18:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:20.810 18:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:20.810 18:12:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:OTYzMTliOTdkNjdiM2NkYWU2YmE4N2EyNjY0ZjgzYmEzqmAO: --dhchap-ctrl-secret DHHC-1:02:ZWZlM2NhY2IyN2ZhYzcwZDM0MzJlMzJjNTk2ZGNmY2ZjMjZjMTgyMWJlZjZlNmIyxRA4HA==: 00:17:21.376 18:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:21.376 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:21.376 18:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:17:21.376 18:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.376 18:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.376 18:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.376 18:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:21.376 18:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:21.376 18:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:21.635 
18:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:17:21.635 18:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:21.635 18:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:21.635 18:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:21.635 18:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:21.635 18:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:21.635 18:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:21.635 18:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.635 18:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.635 18:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.635 18:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:21.635 18:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:21.894 00:17:21.894 18:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:21.894 18:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:21.894 18:12:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:22.154 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.154 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:22.154 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.154 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.154 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.154 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:22.154 { 00:17:22.154 "cntlid": 77, 00:17:22.154 "qid": 0, 00:17:22.154 "state": "enabled", 00:17:22.154 "thread": "nvmf_tgt_poll_group_000", 00:17:22.154 "listen_address": { 00:17:22.154 "trtype": "TCP", 00:17:22.154 "adrfam": "IPv4", 00:17:22.154 "traddr": "10.0.0.2", 00:17:22.154 "trsvcid": "4420" 00:17:22.154 }, 00:17:22.154 "peer_address": { 
00:17:22.154 "trtype": "TCP", 00:17:22.154 "adrfam": "IPv4", 00:17:22.154 "traddr": "10.0.0.1", 00:17:22.154 "trsvcid": "47964" 00:17:22.154 }, 00:17:22.154 "auth": { 00:17:22.154 "state": "completed", 00:17:22.154 "digest": "sha384", 00:17:22.154 "dhgroup": "ffdhe4096" 00:17:22.154 } 00:17:22.154 } 00:17:22.154 ]' 00:17:22.154 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:22.154 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:22.154 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:22.154 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:22.154 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:22.154 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:22.154 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:22.154 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:22.413 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:YjA2Yzc4ZjdjZTJiMzQ4ZjJhMGQ3ZmRhMmZkZDIyYjAyYmMxY2JkOGRhN2MwZThj4zGs4w==: --dhchap-ctrl-secret DHHC-1:01:NmYyMzI5ZGFlODRhMmZiMWRhMzQ5ZTc1NTdlYjAzYmbkqxC8: 00:17:22.980 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:22.981 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:22.981 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:17:22.981 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.981 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.981 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.981 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:22.981 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:22.981 18:12:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:23.240 18:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:17:23.240 18:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:23.240 18:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 
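The recurring ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) line is the bash ":+" alternate-value expansion: when ckeys[keyid] is set and non-empty, the array receives the two option words; otherwise it stays empty, so later commands can splice in "${ckey[@]}" unconditionally. A standalone illustration with hypothetical values:

    ckeys=([0]=secret0 [3]='')   # hypothetical: no controller key for index 3
    keyid=3
    ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    echo "${#ckey[@]}"           # 0 -- the option pair is omitted
    keyid=0
    ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    echo "${ckey[@]}"            # --dhchap-ctrlr-key ckey0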
00:17:23.240 18:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:23.240 18:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:23.240 18:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:23.240 18:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:17:23.240 18:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.240 18:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.240 18:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.240 18:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:23.240 18:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:23.499 00:17:23.499 18:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:23.499 18:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.499 18:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:23.499 18:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.499 18:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:23.499 18:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.499 18:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.499 18:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.499 18:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:23.499 { 00:17:23.499 "cntlid": 79, 00:17:23.499 "qid": 0, 00:17:23.499 "state": "enabled", 00:17:23.499 "thread": "nvmf_tgt_poll_group_000", 00:17:23.499 "listen_address": { 00:17:23.499 "trtype": "TCP", 00:17:23.499 "adrfam": "IPv4", 00:17:23.499 "traddr": "10.0.0.2", 00:17:23.499 "trsvcid": "4420" 00:17:23.499 }, 00:17:23.499 "peer_address": { 00:17:23.499 "trtype": "TCP", 00:17:23.499 "adrfam": "IPv4", 00:17:23.499 "traddr": "10.0.0.1", 00:17:23.499 "trsvcid": "47994" 00:17:23.499 }, 00:17:23.499 "auth": { 00:17:23.499 "state": "completed", 00:17:23.499 "digest": "sha384", 00:17:23.499 "dhgroup": "ffdhe4096" 00:17:23.499 } 00:17:23.499 } 00:17:23.499 ]' 00:17:23.499 18:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:17:23.758 18:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:23.758 18:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:23.758 18:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:23.758 18:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:23.758 18:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:23.758 18:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:23.758 18:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:23.758 18:12:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:NDdkYTZhMzkxOGIxYTcyMjc1Njc5NzdmYzBmYjNjMWRiZjJiN2RiNzg0MGJiOTRhM2I3YWI0YzBhY2U5NWY0YpKo2wI=: 00:17:24.325 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:24.325 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:24.325 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:17:24.325 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.325 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.326 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.326 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:24.326 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:24.326 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:24.326 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:24.585 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:17:24.585 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:24.585 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:24.585 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:24.585 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:24.585 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 
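Throughout the log, rpc_cmd drives the nvmf target on its default RPC socket while hostrpc sends the same rpc.py calls to /var/tmp/host.sock, a second SPDK application playing the NVMe-oF host role; that split is what lets one machine act as both authenticator and authenticatee. Judging by the target/auth.sh@31 entries, the helper is presumably a thin wrapper along these lines (a sketch; $rootdir is an assumed variable for the spdk checkout):

    # Sketch of the hostrpc helper implied by the @31 trace lines.
    hostrpc() {
        "$rootdir/scripts/rpc.py" -s /var/tmp/host.sock "$@"
    }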
00:17:24.585 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:24.585 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.585 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.585 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.585 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:24.585 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:24.843 00:17:25.102 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:25.102 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:25.102 18:12:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.102 18:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.102 18:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:25.102 18:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.102 18:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.102 18:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.102 18:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:25.102 { 00:17:25.102 "cntlid": 81, 00:17:25.102 "qid": 0, 00:17:25.102 "state": "enabled", 00:17:25.102 "thread": "nvmf_tgt_poll_group_000", 00:17:25.102 "listen_address": { 00:17:25.102 "trtype": "TCP", 00:17:25.102 "adrfam": "IPv4", 00:17:25.102 "traddr": "10.0.0.2", 00:17:25.102 "trsvcid": "4420" 00:17:25.102 }, 00:17:25.102 "peer_address": { 00:17:25.102 "trtype": "TCP", 00:17:25.102 "adrfam": "IPv4", 00:17:25.102 "traddr": "10.0.0.1", 00:17:25.102 "trsvcid": "48026" 00:17:25.102 }, 00:17:25.102 "auth": { 00:17:25.102 "state": "completed", 00:17:25.102 "digest": "sha384", 00:17:25.102 "dhgroup": "ffdhe6144" 00:17:25.102 } 00:17:25.102 } 00:17:25.102 ]' 00:17:25.102 18:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:25.102 18:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:25.102 18:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:25.361 18:12:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:25.361 18:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:25.361 18:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:25.361 18:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:25.361 18:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:25.361 18:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:YTRmMGQ3Y2M0NzNjOWU2ZDRkY2E2NmJkYThmNzcwNjBmNTY3YzczNDZmMWYxNDIyepjEhg==: --dhchap-ctrl-secret DHHC-1:03:MmFlNzlkYWFjZWYzMjU5ZGNlMDAxMzUwZTU0M2VkNTc5NGNmZmQ1Nzc2NTA1MWZmZmExNDNmM2UwZGNlZThhMuRHrdo=: 00:17:25.928 18:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:25.928 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:25.929 18:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:17:25.929 18:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.929 18:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.929 18:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.929 18:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:25.929 18:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:25.929 18:12:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:26.188 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:17:26.188 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:26.188 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:26.188 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:26.188 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:26.188 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:26.188 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:26.188 18:12:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.188 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.188 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.188 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:26.188 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:26.447 00:17:26.447 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:26.447 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:26.447 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:26.704 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.704 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:26.704 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.704 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.704 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.704 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:26.704 { 00:17:26.704 "cntlid": 83, 00:17:26.704 "qid": 0, 00:17:26.704 "state": "enabled", 00:17:26.704 "thread": "nvmf_tgt_poll_group_000", 00:17:26.704 "listen_address": { 00:17:26.704 "trtype": "TCP", 00:17:26.704 "adrfam": "IPv4", 00:17:26.704 "traddr": "10.0.0.2", 00:17:26.704 "trsvcid": "4420" 00:17:26.704 }, 00:17:26.704 "peer_address": { 00:17:26.704 "trtype": "TCP", 00:17:26.704 "adrfam": "IPv4", 00:17:26.704 "traddr": "10.0.0.1", 00:17:26.704 "trsvcid": "57530" 00:17:26.704 }, 00:17:26.704 "auth": { 00:17:26.704 "state": "completed", 00:17:26.704 "digest": "sha384", 00:17:26.704 "dhgroup": "ffdhe6144" 00:17:26.704 } 00:17:26.704 } 00:17:26.704 ]' 00:17:26.704 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:26.704 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:26.704 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:26.704 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:26.704 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:26.962 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:26.962 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:26.962 18:12:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:26.962 18:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:OTYzMTliOTdkNjdiM2NkYWU2YmE4N2EyNjY0ZjgzYmEzqmAO: --dhchap-ctrl-secret DHHC-1:02:ZWZlM2NhY2IyN2ZhYzcwZDM0MzJlMzJjNTk2ZGNmY2ZjMjZjMTgyMWJlZjZlNmIyxRA4HA==: 00:17:27.526 18:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:27.526 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:27.526 18:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:17:27.526 18:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.526 18:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.526 18:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.526 18:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:27.526 18:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:27.526 18:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:27.784 18:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:17:27.784 18:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:27.784 18:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:27.784 18:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:27.784 18:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:27.784 18:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:27.784 18:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:27.784 18:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.784 18:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.784 18:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.784 18:12:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:27.784 18:12:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:28.042 00:17:28.042 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:28.042 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:28.042 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.300 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.300 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:28.300 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.300 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.300 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.300 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:28.300 { 00:17:28.300 "cntlid": 85, 00:17:28.300 "qid": 0, 00:17:28.300 "state": "enabled", 00:17:28.300 "thread": "nvmf_tgt_poll_group_000", 00:17:28.300 "listen_address": { 00:17:28.300 "trtype": "TCP", 00:17:28.300 "adrfam": "IPv4", 00:17:28.300 "traddr": "10.0.0.2", 00:17:28.300 "trsvcid": "4420" 00:17:28.300 }, 00:17:28.300 "peer_address": { 00:17:28.300 "trtype": "TCP", 00:17:28.300 "adrfam": "IPv4", 00:17:28.300 "traddr": "10.0.0.1", 00:17:28.300 "trsvcid": "57566" 00:17:28.300 }, 00:17:28.300 "auth": { 00:17:28.300 "state": "completed", 00:17:28.300 "digest": "sha384", 00:17:28.300 "dhgroup": "ffdhe6144" 00:17:28.300 } 00:17:28.300 } 00:17:28.300 ]' 00:17:28.300 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:28.300 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:28.300 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:28.300 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:28.300 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:28.559 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:28.559 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:28.559 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:28.559 18:12:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:YjA2Yzc4ZjdjZTJiMzQ4ZjJhMGQ3ZmRhMmZkZDIyYjAyYmMxY2JkOGRhN2MwZThj4zGs4w==: --dhchap-ctrl-secret DHHC-1:01:NmYyMzI5ZGFlODRhMmZiMWRhMzQ5ZTc1NTdlYjAzYmbkqxC8: 00:17:29.125 18:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:29.125 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:29.125 18:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:17:29.125 18:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.125 18:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.125 18:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.125 18:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:29.125 18:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:29.125 18:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:29.383 18:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:17:29.383 18:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:29.383 18:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:29.383 18:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:29.383 18:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:29.383 18:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:29.383 18:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:17:29.383 18:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.383 18:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.383 18:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.383 18:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:29.383 18:12:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:29.640 00:17:29.640 18:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:29.640 18:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:29.640 18:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:29.898 18:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.898 18:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:29.898 18:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.898 18:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.898 18:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.898 18:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:29.898 { 00:17:29.898 "cntlid": 87, 00:17:29.898 "qid": 0, 00:17:29.898 "state": "enabled", 00:17:29.898 "thread": "nvmf_tgt_poll_group_000", 00:17:29.898 "listen_address": { 00:17:29.898 "trtype": "TCP", 00:17:29.898 "adrfam": "IPv4", 00:17:29.898 "traddr": "10.0.0.2", 00:17:29.898 "trsvcid": "4420" 00:17:29.898 }, 00:17:29.898 "peer_address": { 00:17:29.898 "trtype": "TCP", 00:17:29.898 "adrfam": "IPv4", 00:17:29.898 "traddr": "10.0.0.1", 00:17:29.898 "trsvcid": "57580" 00:17:29.898 }, 00:17:29.898 "auth": { 00:17:29.898 "state": "completed", 00:17:29.898 "digest": "sha384", 00:17:29.898 "dhgroup": "ffdhe6144" 00:17:29.898 } 00:17:29.898 } 00:17:29.898 ]' 00:17:29.898 18:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:29.898 18:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:29.898 18:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:29.898 18:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:29.898 18:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:29.898 18:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:29.898 18:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:29.898 18:12:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:30.156 18:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 
--dhchap-secret DHHC-1:03:NDdkYTZhMzkxOGIxYTcyMjc1Njc5NzdmYzBmYjNjMWRiZjJiN2RiNzg0MGJiOTRhM2I3YWI0YzBhY2U5NWY0YpKo2wI=: 00:17:30.721 18:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:30.721 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:30.721 18:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:17:30.721 18:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.721 18:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.721 18:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.721 18:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:30.721 18:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:30.721 18:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:30.721 18:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:30.979 18:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:17:30.979 18:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:30.979 18:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:30.979 18:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:30.979 18:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:30.980 18:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:30.980 18:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:30.980 18:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.980 18:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.980 18:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.980 18:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:30.980 18:12:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.238 00:17:31.496 18:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:31.496 18:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:31.496 18:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:31.496 18:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.496 18:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:31.496 18:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.496 18:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.496 18:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.496 18:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:31.496 { 00:17:31.496 "cntlid": 89, 00:17:31.496 "qid": 0, 00:17:31.496 "state": "enabled", 00:17:31.496 "thread": "nvmf_tgt_poll_group_000", 00:17:31.496 "listen_address": { 00:17:31.496 "trtype": "TCP", 00:17:31.496 "adrfam": "IPv4", 00:17:31.496 "traddr": "10.0.0.2", 00:17:31.496 "trsvcid": "4420" 00:17:31.496 }, 00:17:31.496 "peer_address": { 00:17:31.496 "trtype": "TCP", 00:17:31.496 "adrfam": "IPv4", 00:17:31.496 "traddr": "10.0.0.1", 00:17:31.496 "trsvcid": "57594" 00:17:31.496 }, 00:17:31.496 "auth": { 00:17:31.496 "state": "completed", 00:17:31.496 "digest": "sha384", 00:17:31.496 "dhgroup": "ffdhe8192" 00:17:31.496 } 00:17:31.496 } 00:17:31.496 ]' 00:17:31.496 18:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:31.496 18:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:31.496 18:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:31.760 18:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:31.760 18:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:31.760 18:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:31.760 18:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:31.760 18:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:31.760 18:12:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:YTRmMGQ3Y2M0NzNjOWU2ZDRkY2E2NmJkYThmNzcwNjBmNTY3YzczNDZmMWYxNDIyepjEhg==: --dhchap-ctrl-secret DHHC-1:03:MmFlNzlkYWFjZWYzMjU5ZGNlMDAxMzUwZTU0M2VkNTc5NGNmZmQ1Nzc2NTA1MWZmZmExNDNmM2UwZGNlZThhMuRHrdo=: 00:17:32.329 18:12:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:32.329 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:32.329 18:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:17:32.329 18:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.329 18:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.329 18:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.329 18:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:32.329 18:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:32.329 18:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:32.587 18:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:17:32.587 18:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:32.587 18:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:32.587 18:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:32.587 18:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:32.587 18:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:32.587 18:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.587 18:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.587 18:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.587 18:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.587 18:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:32.587 18:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:33.153 00:17:33.153 18:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:33.153 18:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:17:33.153 18:12:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:33.153 18:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.153 18:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:33.153 18:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.153 18:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.153 18:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.153 18:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:33.153 { 00:17:33.153 "cntlid": 91, 00:17:33.153 "qid": 0, 00:17:33.153 "state": "enabled", 00:17:33.153 "thread": "nvmf_tgt_poll_group_000", 00:17:33.153 "listen_address": { 00:17:33.153 "trtype": "TCP", 00:17:33.153 "adrfam": "IPv4", 00:17:33.153 "traddr": "10.0.0.2", 00:17:33.153 "trsvcid": "4420" 00:17:33.153 }, 00:17:33.153 "peer_address": { 00:17:33.153 "trtype": "TCP", 00:17:33.153 "adrfam": "IPv4", 00:17:33.153 "traddr": "10.0.0.1", 00:17:33.153 "trsvcid": "57612" 00:17:33.153 }, 00:17:33.153 "auth": { 00:17:33.153 "state": "completed", 00:17:33.153 "digest": "sha384", 00:17:33.153 "dhgroup": "ffdhe8192" 00:17:33.153 } 00:17:33.153 } 00:17:33.153 ]' 00:17:33.153 18:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:33.153 18:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:33.153 18:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:33.411 18:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:33.411 18:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:33.411 18:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:33.411 18:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:33.411 18:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:33.411 18:12:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:OTYzMTliOTdkNjdiM2NkYWU2YmE4N2EyNjY0ZjgzYmEzqmAO: --dhchap-ctrl-secret DHHC-1:02:ZWZlM2NhY2IyN2ZhYzcwZDM0MzJlMzJjNTk2ZGNmY2ZjMjZjMTgyMWJlZjZlNmIyxRA4HA==: 00:17:33.979 18:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:33.979 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:33.979 18:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:17:33.979 18:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.979 18:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.979 18:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.979 18:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:33.979 18:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:33.979 18:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:34.238 18:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:17:34.238 18:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:34.238 18:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:34.238 18:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:34.238 18:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:34.238 18:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:34.238 18:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:34.238 18:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.238 18:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.238 18:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.238 18:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:34.239 18:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:34.807 00:17:34.807 18:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:34.807 18:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:34.807 18:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:34.807 18:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:17:34.807 18:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:34.807 18:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.807 18:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.807 18:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.807 18:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:34.807 { 00:17:34.807 "cntlid": 93, 00:17:34.807 "qid": 0, 00:17:34.807 "state": "enabled", 00:17:34.807 "thread": "nvmf_tgt_poll_group_000", 00:17:34.807 "listen_address": { 00:17:34.807 "trtype": "TCP", 00:17:34.807 "adrfam": "IPv4", 00:17:34.807 "traddr": "10.0.0.2", 00:17:34.807 "trsvcid": "4420" 00:17:34.807 }, 00:17:34.807 "peer_address": { 00:17:34.807 "trtype": "TCP", 00:17:34.807 "adrfam": "IPv4", 00:17:34.807 "traddr": "10.0.0.1", 00:17:34.807 "trsvcid": "57632" 00:17:34.807 }, 00:17:34.807 "auth": { 00:17:34.807 "state": "completed", 00:17:34.807 "digest": "sha384", 00:17:34.807 "dhgroup": "ffdhe8192" 00:17:34.807 } 00:17:34.807 } 00:17:34.807 ]' 00:17:34.807 18:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:35.066 18:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:35.066 18:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:35.066 18:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:35.066 18:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:35.066 18:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:35.066 18:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:35.066 18:12:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:35.066 18:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:YjA2Yzc4ZjdjZTJiMzQ4ZjJhMGQ3ZmRhMmZkZDIyYjAyYmMxY2JkOGRhN2MwZThj4zGs4w==: --dhchap-ctrl-secret DHHC-1:01:NmYyMzI5ZGFlODRhMmZiMWRhMzQ5ZTc1NTdlYjAzYmbkqxC8: 00:17:35.633 18:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:35.633 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:35.633 18:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:17:35.633 18:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.633 18:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.633 18:12:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.633 18:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:35.633 18:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:35.633 18:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:35.892 18:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:17:35.892 18:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:35.892 18:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:35.892 18:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:35.892 18:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:35.892 18:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:35.892 18:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:17:35.892 18:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.892 18:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.892 18:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.892 18:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:35.892 18:12:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:36.460 00:17:36.460 18:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:36.460 18:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:36.460 18:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:36.460 18:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.460 18:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:36.460 18:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.460 18:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:17:36.460 18:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.460 18:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:36.460 { 00:17:36.460 "cntlid": 95, 00:17:36.460 "qid": 0, 00:17:36.460 "state": "enabled", 00:17:36.460 "thread": "nvmf_tgt_poll_group_000", 00:17:36.460 "listen_address": { 00:17:36.460 "trtype": "TCP", 00:17:36.460 "adrfam": "IPv4", 00:17:36.460 "traddr": "10.0.0.2", 00:17:36.460 "trsvcid": "4420" 00:17:36.460 }, 00:17:36.460 "peer_address": { 00:17:36.460 "trtype": "TCP", 00:17:36.460 "adrfam": "IPv4", 00:17:36.460 "traddr": "10.0.0.1", 00:17:36.460 "trsvcid": "57666" 00:17:36.460 }, 00:17:36.460 "auth": { 00:17:36.460 "state": "completed", 00:17:36.460 "digest": "sha384", 00:17:36.460 "dhgroup": "ffdhe8192" 00:17:36.460 } 00:17:36.460 } 00:17:36.460 ]' 00:17:36.460 18:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:36.719 18:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:36.719 18:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:36.719 18:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:36.719 18:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:36.719 18:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:36.719 18:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:36.719 18:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:36.978 18:12:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:NDdkYTZhMzkxOGIxYTcyMjc1Njc5NzdmYzBmYjNjMWRiZjJiN2RiNzg0MGJiOTRhM2I3YWI0YzBhY2U5NWY0YpKo2wI=: 00:17:37.584 18:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:37.584 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:37.584 18:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:17:37.584 18:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.584 18:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.584 18:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.584 18:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:17:37.584 18:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:37.584 18:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:37.584 18:12:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:37.584 18:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:37.584 18:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:17:37.584 18:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:37.584 18:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:37.585 18:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:37.585 18:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:37.585 18:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:37.585 18:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.585 18:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.585 18:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.585 18:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.585 18:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.585 18:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.843 00:17:37.843 18:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:37.843 18:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:37.843 18:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:38.102 18:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:38.102 18:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:38.102 18:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.102 18:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.102 18:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.102 18:12:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:38.102 { 00:17:38.102 "cntlid": 97, 00:17:38.102 "qid": 0, 00:17:38.102 "state": "enabled", 00:17:38.102 "thread": "nvmf_tgt_poll_group_000", 00:17:38.102 "listen_address": { 00:17:38.102 "trtype": "TCP", 00:17:38.102 "adrfam": "IPv4", 00:17:38.102 "traddr": "10.0.0.2", 00:17:38.102 "trsvcid": "4420" 00:17:38.102 }, 00:17:38.102 "peer_address": { 00:17:38.102 "trtype": "TCP", 00:17:38.102 "adrfam": "IPv4", 00:17:38.102 "traddr": "10.0.0.1", 00:17:38.102 "trsvcid": "51574" 00:17:38.102 }, 00:17:38.102 "auth": { 00:17:38.102 "state": "completed", 00:17:38.102 "digest": "sha512", 00:17:38.102 "dhgroup": "null" 00:17:38.102 } 00:17:38.102 } 00:17:38.102 ]' 00:17:38.102 18:12:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:38.102 18:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:38.102 18:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:38.102 18:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:38.102 18:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:38.103 18:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:38.103 18:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:38.103 18:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:38.361 18:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:YTRmMGQ3Y2M0NzNjOWU2ZDRkY2E2NmJkYThmNzcwNjBmNTY3YzczNDZmMWYxNDIyepjEhg==: --dhchap-ctrl-secret DHHC-1:03:MmFlNzlkYWFjZWYzMjU5ZGNlMDAxMzUwZTU0M2VkNTc5NGNmZmQ1Nzc2NTA1MWZmZmExNDNmM2UwZGNlZThhMuRHrdo=: 00:17:38.928 18:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:38.929 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:38.929 18:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:17:38.929 18:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.929 18:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.929 18:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.929 18:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:38.929 18:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:38.929 18:12:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:39.188 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:17:39.188 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:39.188 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:39.188 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:39.188 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:39.188 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:39.188 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.188 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.188 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.188 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.188 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.188 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:39.188 00:17:39.188 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:39.188 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:39.188 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.446 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.446 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:39.446 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.446 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.446 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.446 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:39.446 { 00:17:39.446 "cntlid": 99, 00:17:39.446 "qid": 0, 00:17:39.446 "state": "enabled", 00:17:39.446 "thread": "nvmf_tgt_poll_group_000", 00:17:39.446 "listen_address": { 00:17:39.446 "trtype": "TCP", 00:17:39.446 "adrfam": "IPv4", 00:17:39.446 
"traddr": "10.0.0.2", 00:17:39.446 "trsvcid": "4420" 00:17:39.446 }, 00:17:39.446 "peer_address": { 00:17:39.446 "trtype": "TCP", 00:17:39.446 "adrfam": "IPv4", 00:17:39.446 "traddr": "10.0.0.1", 00:17:39.446 "trsvcid": "51602" 00:17:39.446 }, 00:17:39.446 "auth": { 00:17:39.446 "state": "completed", 00:17:39.446 "digest": "sha512", 00:17:39.446 "dhgroup": "null" 00:17:39.446 } 00:17:39.447 } 00:17:39.447 ]' 00:17:39.447 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:39.447 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:39.447 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:39.706 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:39.706 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:39.706 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:39.706 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:39.706 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:39.706 18:12:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:OTYzMTliOTdkNjdiM2NkYWU2YmE4N2EyNjY0ZjgzYmEzqmAO: --dhchap-ctrl-secret DHHC-1:02:ZWZlM2NhY2IyN2ZhYzcwZDM0MzJlMzJjNTk2ZGNmY2ZjMjZjMTgyMWJlZjZlNmIyxRA4HA==: 00:17:40.274 18:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:40.274 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:40.274 18:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:17:40.274 18:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.274 18:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.274 18:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.274 18:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:40.274 18:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:40.274 18:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:40.532 18:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:17:40.532 18:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:40.532 18:12:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:40.532 18:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:40.532 18:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:40.532 18:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:40.532 18:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:40.532 18:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.532 18:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.532 18:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.532 18:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:40.532 18:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:40.790 00:17:40.790 18:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:40.790 18:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:40.790 18:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:41.049 18:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.049 18:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:41.049 18:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.049 18:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.049 18:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.049 18:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:41.049 { 00:17:41.049 "cntlid": 101, 00:17:41.049 "qid": 0, 00:17:41.049 "state": "enabled", 00:17:41.049 "thread": "nvmf_tgt_poll_group_000", 00:17:41.049 "listen_address": { 00:17:41.049 "trtype": "TCP", 00:17:41.049 "adrfam": "IPv4", 00:17:41.049 "traddr": "10.0.0.2", 00:17:41.049 "trsvcid": "4420" 00:17:41.049 }, 00:17:41.049 "peer_address": { 00:17:41.049 "trtype": "TCP", 00:17:41.049 "adrfam": "IPv4", 00:17:41.049 "traddr": "10.0.0.1", 00:17:41.049 "trsvcid": "51634" 00:17:41.049 }, 00:17:41.049 "auth": { 00:17:41.049 "state": "completed", 00:17:41.049 "digest": "sha512", 00:17:41.049 "dhgroup": "null" 
00:17:41.049 } 00:17:41.049 } 00:17:41.049 ]' 00:17:41.049 18:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:41.049 18:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:41.049 18:12:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:41.049 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:41.049 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:41.049 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:41.049 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:41.049 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:41.308 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:YjA2Yzc4ZjdjZTJiMzQ4ZjJhMGQ3ZmRhMmZkZDIyYjAyYmMxY2JkOGRhN2MwZThj4zGs4w==: --dhchap-ctrl-secret DHHC-1:01:NmYyMzI5ZGFlODRhMmZiMWRhMzQ5ZTc1NTdlYjAzYmbkqxC8: 00:17:41.874 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:41.874 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:41.874 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:17:41.874 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.874 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.874 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.874 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:41.874 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:41.875 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:42.133 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:17:42.133 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:42.133 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:42.133 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:42.133 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:42.133 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:42.133 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:17:42.133 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.133 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.133 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.134 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:42.134 18:12:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:42.134 00:17:42.134 18:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:42.134 18:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:42.134 18:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:42.392 18:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.392 18:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:42.392 18:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.392 18:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.392 18:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.392 18:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:42.392 { 00:17:42.392 "cntlid": 103, 00:17:42.392 "qid": 0, 00:17:42.392 "state": "enabled", 00:17:42.392 "thread": "nvmf_tgt_poll_group_000", 00:17:42.392 "listen_address": { 00:17:42.392 "trtype": "TCP", 00:17:42.392 "adrfam": "IPv4", 00:17:42.392 "traddr": "10.0.0.2", 00:17:42.392 "trsvcid": "4420" 00:17:42.392 }, 00:17:42.392 "peer_address": { 00:17:42.392 "trtype": "TCP", 00:17:42.392 "adrfam": "IPv4", 00:17:42.392 "traddr": "10.0.0.1", 00:17:42.392 "trsvcid": "51662" 00:17:42.392 }, 00:17:42.392 "auth": { 00:17:42.392 "state": "completed", 00:17:42.392 "digest": "sha512", 00:17:42.392 "dhgroup": "null" 00:17:42.392 } 00:17:42.392 } 00:17:42.392 ]' 00:17:42.392 18:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:42.392 18:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:42.392 18:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:42.651 18:12:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:42.651 18:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:42.651 18:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:42.651 18:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:42.651 18:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:42.651 18:12:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:NDdkYTZhMzkxOGIxYTcyMjc1Njc5NzdmYzBmYjNjMWRiZjJiN2RiNzg0MGJiOTRhM2I3YWI0YzBhY2U5NWY0YpKo2wI=: 00:17:43.217 18:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:43.217 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:43.217 18:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:17:43.217 18:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.217 18:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.217 18:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.217 18:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:43.217 18:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:43.217 18:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:43.217 18:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:43.475 18:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:17:43.475 18:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:43.475 18:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:43.475 18:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:43.475 18:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:43.475 18:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:43.475 18:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:43.475 18:12:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.475 18:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.475 18:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.475 18:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:43.475 18:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:43.733 00:17:43.733 18:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:43.733 18:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:43.733 18:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:43.991 18:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.991 18:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:43.991 18:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.991 18:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.991 18:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.991 18:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:43.991 { 00:17:43.991 "cntlid": 105, 00:17:43.991 "qid": 0, 00:17:43.991 "state": "enabled", 00:17:43.991 "thread": "nvmf_tgt_poll_group_000", 00:17:43.991 "listen_address": { 00:17:43.991 "trtype": "TCP", 00:17:43.991 "adrfam": "IPv4", 00:17:43.991 "traddr": "10.0.0.2", 00:17:43.991 "trsvcid": "4420" 00:17:43.991 }, 00:17:43.991 "peer_address": { 00:17:43.991 "trtype": "TCP", 00:17:43.991 "adrfam": "IPv4", 00:17:43.991 "traddr": "10.0.0.1", 00:17:43.991 "trsvcid": "51688" 00:17:43.991 }, 00:17:43.991 "auth": { 00:17:43.991 "state": "completed", 00:17:43.991 "digest": "sha512", 00:17:43.991 "dhgroup": "ffdhe2048" 00:17:43.991 } 00:17:43.991 } 00:17:43.991 ]' 00:17:43.991 18:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:43.991 18:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:43.991 18:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:43.991 18:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:43.991 18:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:43.991 18:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:43.991 18:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:43.992 18:12:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:44.250 18:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:YTRmMGQ3Y2M0NzNjOWU2ZDRkY2E2NmJkYThmNzcwNjBmNTY3YzczNDZmMWYxNDIyepjEhg==: --dhchap-ctrl-secret DHHC-1:03:MmFlNzlkYWFjZWYzMjU5ZGNlMDAxMzUwZTU0M2VkNTc5NGNmZmQ1Nzc2NTA1MWZmZmExNDNmM2UwZGNlZThhMuRHrdo=: 00:17:44.816 18:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:44.816 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:44.817 18:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:17:44.817 18:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.817 18:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.817 18:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.817 18:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:44.817 18:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:44.817 18:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:45.075 18:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:17:45.075 18:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:45.075 18:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:45.075 18:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:45.075 18:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:45.075 18:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:45.075 18:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:45.075 18:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.075 18:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.075 18:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:17:45.075 18:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:45.075 18:12:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:45.075 00:17:45.075 18:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:45.075 18:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:45.075 18:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:45.334 18:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:45.334 18:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:45.334 18:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.334 18:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.334 18:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.334 18:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:45.334 { 00:17:45.334 "cntlid": 107, 00:17:45.334 "qid": 0, 00:17:45.334 "state": "enabled", 00:17:45.334 "thread": "nvmf_tgt_poll_group_000", 00:17:45.334 "listen_address": { 00:17:45.334 "trtype": "TCP", 00:17:45.334 "adrfam": "IPv4", 00:17:45.334 "traddr": "10.0.0.2", 00:17:45.334 "trsvcid": "4420" 00:17:45.334 }, 00:17:45.334 "peer_address": { 00:17:45.334 "trtype": "TCP", 00:17:45.334 "adrfam": "IPv4", 00:17:45.334 "traddr": "10.0.0.1", 00:17:45.334 "trsvcid": "51730" 00:17:45.334 }, 00:17:45.334 "auth": { 00:17:45.334 "state": "completed", 00:17:45.334 "digest": "sha512", 00:17:45.334 "dhgroup": "ffdhe2048" 00:17:45.334 } 00:17:45.334 } 00:17:45.334 ]' 00:17:45.334 18:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:45.334 18:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:45.334 18:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:45.593 18:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:45.593 18:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:45.593 18:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:45.593 18:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:45.593 18:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:45.593 18:12:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:OTYzMTliOTdkNjdiM2NkYWU2YmE4N2EyNjY0ZjgzYmEzqmAO: --dhchap-ctrl-secret DHHC-1:02:ZWZlM2NhY2IyN2ZhYzcwZDM0MzJlMzJjNTk2ZGNmY2ZjMjZjMTgyMWJlZjZlNmIyxRA4HA==: 00:17:46.159 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:46.159 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:46.159 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:17:46.159 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.159 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.159 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.159 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:46.159 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:46.159 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:46.418 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:17:46.418 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:46.418 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:46.418 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:46.418 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:46.418 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:46.418 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:46.418 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.418 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.418 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.418 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
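
Every iteration in this log follows the same shape: constrain the host initiator to one digest/DH-group pair, register the host NQN on the target with the key pair under test, attach a controller (which drives the DH-HMAC-CHAP exchange), then confirm on the target that the resulting qpair negotiated exactly those parameters. The sketch below condenses that flow; the RPC names, socket paths, addresses, and NQNs are taken verbatim from the log, while the helper name and the assumption that keys key0..key3/ckey0..ckey3 were registered earlier in the test script (outside this excerpt) are mine.

```bash
#!/usr/bin/env bash
# Minimal sketch of one connect_authenticate iteration as seen in this log.
# Assumes key0..key3 and ckey0..ckey3 were registered earlier in the test.
set -euo pipefail

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostsock=/var/tmp/host.sock   # host-side SPDK app; target uses the default socket
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562

connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3

    # Restrict the host-side initiator to a single digest/DH-group combination.
    "$rpc" -s "$hostsock" bdev_nvme_set_options \
        --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Allow the host on the target, bound to the DH-HMAC-CHAP key pair under test.
    "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

    # Attach a controller; this is where the DH-HMAC-CHAP exchange actually runs.
    "$rpc" -s "$hostsock" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"

    # Verify on the target side that the qpair completed authentication
    # with the parameters we forced above.
    local qpairs
    qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
    [[ $(jq -r '.[0].auth.digest'  <<<"$qpairs") == "$digest" ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<<"$qpairs") == "$dhgroup" ]]
    [[ $(jq -r '.[0].auth.state'   <<<"$qpairs") == completed ]]

    "$rpc" -s "$hostsock" bdev_nvme_detach_controller nvme0
}

# Example: the key2 / sha512 / ffdhe2048 iteration running at this point in the log.
connect_authenticate sha512 ffdhe2048 2
```

After the SPDK-initiator check, each iteration repeats the handshake through the kernel initiator (`nvme connect ... --dhchap-secret ... --dhchap-ctrl-secret ...`, then `nvme disconnect`) before removing the host with nvmf_subsystem_remove_host and moving to the next key. Note how the secrets pair up with the key index in this log: key0 carries a DHHC-1:00: secret, key1 a DHHC-1:01:, and so on, the two-digit field after DHHC-1 identifying the HMAC variant of the stored secret.
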
00:17:46.418 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:46.676 00:17:46.676 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:46.676 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:46.676 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:46.935 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.935 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:46.935 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.935 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.935 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.935 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:46.935 { 00:17:46.935 "cntlid": 109, 00:17:46.935 "qid": 0, 00:17:46.935 "state": "enabled", 00:17:46.935 "thread": "nvmf_tgt_poll_group_000", 00:17:46.935 "listen_address": { 00:17:46.935 "trtype": "TCP", 00:17:46.935 "adrfam": "IPv4", 00:17:46.935 "traddr": "10.0.0.2", 00:17:46.935 "trsvcid": "4420" 00:17:46.935 }, 00:17:46.935 "peer_address": { 00:17:46.935 "trtype": "TCP", 00:17:46.935 "adrfam": "IPv4", 00:17:46.935 "traddr": "10.0.0.1", 00:17:46.935 "trsvcid": "48172" 00:17:46.935 }, 00:17:46.935 "auth": { 00:17:46.935 "state": "completed", 00:17:46.935 "digest": "sha512", 00:17:46.935 "dhgroup": "ffdhe2048" 00:17:46.935 } 00:17:46.935 } 00:17:46.935 ]' 00:17:46.935 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:46.935 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:46.935 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:46.935 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:46.935 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:46.935 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:46.935 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:46.935 18:12:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:47.200 18:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 
--hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:YjA2Yzc4ZjdjZTJiMzQ4ZjJhMGQ3ZmRhMmZkZDIyYjAyYmMxY2JkOGRhN2MwZThj4zGs4w==: --dhchap-ctrl-secret DHHC-1:01:NmYyMzI5ZGFlODRhMmZiMWRhMzQ5ZTc1NTdlYjAzYmbkqxC8: 00:17:47.769 18:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:47.769 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:47.769 18:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:17:47.769 18:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.769 18:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.769 18:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.769 18:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:47.769 18:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:47.769 18:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:48.027 18:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:17:48.027 18:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:48.027 18:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:48.027 18:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:48.027 18:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:48.027 18:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:48.027 18:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:17:48.027 18:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.027 18:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.027 18:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.027 18:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:48.028 18:12:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:48.028 00:17:48.286 18:12:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:48.286 18:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:48.286 18:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:48.286 18:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.286 18:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:48.286 18:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.286 18:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.286 18:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.286 18:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:48.286 { 00:17:48.286 "cntlid": 111, 00:17:48.286 "qid": 0, 00:17:48.286 "state": "enabled", 00:17:48.286 "thread": "nvmf_tgt_poll_group_000", 00:17:48.286 "listen_address": { 00:17:48.286 "trtype": "TCP", 00:17:48.286 "adrfam": "IPv4", 00:17:48.286 "traddr": "10.0.0.2", 00:17:48.286 "trsvcid": "4420" 00:17:48.286 }, 00:17:48.286 "peer_address": { 00:17:48.286 "trtype": "TCP", 00:17:48.286 "adrfam": "IPv4", 00:17:48.286 "traddr": "10.0.0.1", 00:17:48.286 "trsvcid": "48206" 00:17:48.286 }, 00:17:48.286 "auth": { 00:17:48.286 "state": "completed", 00:17:48.286 "digest": "sha512", 00:17:48.286 "dhgroup": "ffdhe2048" 00:17:48.286 } 00:17:48.286 } 00:17:48.287 ]' 00:17:48.287 18:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:48.287 18:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:48.287 18:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:48.545 18:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:48.545 18:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:48.545 18:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:48.545 18:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:48.545 18:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:48.545 18:12:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:NDdkYTZhMzkxOGIxYTcyMjc1Njc5NzdmYzBmYjNjMWRiZjJiN2RiNzg0MGJiOTRhM2I3YWI0YzBhY2U5NWY0YpKo2wI=: 00:17:49.113 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:49.113 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:49.113 18:12:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:17:49.113 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.113 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.113 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.113 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:49.113 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:49.113 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:49.113 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:49.371 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:17:49.371 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:49.371 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:49.371 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:49.371 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:49.371 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:49.371 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:49.371 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.371 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.371 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.371 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:49.371 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:49.630 00:17:49.630 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:49.630 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:49.630 18:12:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:49.889 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.889 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:49.889 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.889 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.889 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.889 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:49.889 { 00:17:49.889 "cntlid": 113, 00:17:49.889 "qid": 0, 00:17:49.889 "state": "enabled", 00:17:49.889 "thread": "nvmf_tgt_poll_group_000", 00:17:49.889 "listen_address": { 00:17:49.889 "trtype": "TCP", 00:17:49.889 "adrfam": "IPv4", 00:17:49.889 "traddr": "10.0.0.2", 00:17:49.889 "trsvcid": "4420" 00:17:49.889 }, 00:17:49.889 "peer_address": { 00:17:49.889 "trtype": "TCP", 00:17:49.889 "adrfam": "IPv4", 00:17:49.889 "traddr": "10.0.0.1", 00:17:49.889 "trsvcid": "48236" 00:17:49.889 }, 00:17:49.889 "auth": { 00:17:49.889 "state": "completed", 00:17:49.889 "digest": "sha512", 00:17:49.889 "dhgroup": "ffdhe3072" 00:17:49.889 } 00:17:49.889 } 00:17:49.889 ]' 00:17:49.889 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:49.889 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:49.889 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:49.889 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:49.889 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:49.889 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:49.889 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:49.889 18:12:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:50.148 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:YTRmMGQ3Y2M0NzNjOWU2ZDRkY2E2NmJkYThmNzcwNjBmNTY3YzczNDZmMWYxNDIyepjEhg==: --dhchap-ctrl-secret DHHC-1:03:MmFlNzlkYWFjZWYzMjU5ZGNlMDAxMzUwZTU0M2VkNTc5NGNmZmQ1Nzc2NTA1MWZmZmExNDNmM2UwZGNlZThhMuRHrdo=: 00:17:50.715 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:50.715 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:50.715 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:17:50.715 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.715 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.715 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.715 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:50.715 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:50.715 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:50.715 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:17:50.715 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:50.715 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:50.715 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:50.715 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:50.715 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:50.715 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.715 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.715 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.715 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.715 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.715 18:12:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.974 00:17:50.974 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:50.974 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:50.974 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:51.233 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:17:51.233 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:51.233 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.233 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.233 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.233 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:51.233 { 00:17:51.233 "cntlid": 115, 00:17:51.233 "qid": 0, 00:17:51.233 "state": "enabled", 00:17:51.233 "thread": "nvmf_tgt_poll_group_000", 00:17:51.233 "listen_address": { 00:17:51.233 "trtype": "TCP", 00:17:51.233 "adrfam": "IPv4", 00:17:51.233 "traddr": "10.0.0.2", 00:17:51.233 "trsvcid": "4420" 00:17:51.233 }, 00:17:51.233 "peer_address": { 00:17:51.233 "trtype": "TCP", 00:17:51.233 "adrfam": "IPv4", 00:17:51.233 "traddr": "10.0.0.1", 00:17:51.233 "trsvcid": "48272" 00:17:51.233 }, 00:17:51.233 "auth": { 00:17:51.233 "state": "completed", 00:17:51.233 "digest": "sha512", 00:17:51.233 "dhgroup": "ffdhe3072" 00:17:51.233 } 00:17:51.233 } 00:17:51.233 ]' 00:17:51.233 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:51.233 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:51.233 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:51.233 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:51.492 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:51.492 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:51.492 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:51.492 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:51.492 18:12:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:OTYzMTliOTdkNjdiM2NkYWU2YmE4N2EyNjY0ZjgzYmEzqmAO: --dhchap-ctrl-secret DHHC-1:02:ZWZlM2NhY2IyN2ZhYzcwZDM0MzJlMzJjNTk2ZGNmY2ZjMjZjMTgyMWJlZjZlNmIyxRA4HA==: 00:17:52.059 18:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:52.059 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:52.059 18:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:17:52.059 18:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.059 18:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.059 18:12:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.059 18:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:52.059 18:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:52.059 18:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:52.319 18:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:17:52.319 18:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:52.319 18:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:52.319 18:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:52.319 18:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:52.319 18:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:52.319 18:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:52.319 18:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.319 18:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.319 18:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.319 18:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:52.319 18:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:52.578 00:17:52.578 18:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:52.578 18:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:52.578 18:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:52.837 18:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.837 18:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:52.837 18:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.837 18:12:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.837 18:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.837 18:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:52.837 { 00:17:52.837 "cntlid": 117, 00:17:52.837 "qid": 0, 00:17:52.837 "state": "enabled", 00:17:52.837 "thread": "nvmf_tgt_poll_group_000", 00:17:52.837 "listen_address": { 00:17:52.837 "trtype": "TCP", 00:17:52.837 "adrfam": "IPv4", 00:17:52.837 "traddr": "10.0.0.2", 00:17:52.837 "trsvcid": "4420" 00:17:52.837 }, 00:17:52.837 "peer_address": { 00:17:52.837 "trtype": "TCP", 00:17:52.837 "adrfam": "IPv4", 00:17:52.837 "traddr": "10.0.0.1", 00:17:52.837 "trsvcid": "48306" 00:17:52.837 }, 00:17:52.837 "auth": { 00:17:52.837 "state": "completed", 00:17:52.837 "digest": "sha512", 00:17:52.837 "dhgroup": "ffdhe3072" 00:17:52.837 } 00:17:52.837 } 00:17:52.837 ]' 00:17:52.837 18:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:52.837 18:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:52.837 18:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:52.837 18:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:52.837 18:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:52.837 18:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:52.837 18:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:52.837 18:12:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:53.096 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:YjA2Yzc4ZjdjZTJiMzQ4ZjJhMGQ3ZmRhMmZkZDIyYjAyYmMxY2JkOGRhN2MwZThj4zGs4w==: --dhchap-ctrl-secret DHHC-1:01:NmYyMzI5ZGFlODRhMmZiMWRhMzQ5ZTc1NTdlYjAzYmbkqxC8: 00:17:53.663 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:53.664 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:53.664 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:17:53.664 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.664 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.664 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.664 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:53.664 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha512 --dhchap-dhgroups ffdhe3072 00:17:53.664 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:53.923 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:17:53.923 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:53.923 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:53.923 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:53.923 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:53.923 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:53.923 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:17:53.923 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.923 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.923 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.923 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:53.923 18:12:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:53.923 00:17:54.182 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:54.182 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:54.182 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.182 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.182 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:54.182 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.182 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.182 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.182 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:54.182 { 00:17:54.182 "cntlid": 119, 00:17:54.182 "qid": 0, 00:17:54.182 "state": "enabled", 00:17:54.182 "thread": 
"nvmf_tgt_poll_group_000", 00:17:54.182 "listen_address": { 00:17:54.182 "trtype": "TCP", 00:17:54.182 "adrfam": "IPv4", 00:17:54.182 "traddr": "10.0.0.2", 00:17:54.182 "trsvcid": "4420" 00:17:54.182 }, 00:17:54.182 "peer_address": { 00:17:54.182 "trtype": "TCP", 00:17:54.182 "adrfam": "IPv4", 00:17:54.182 "traddr": "10.0.0.1", 00:17:54.182 "trsvcid": "48316" 00:17:54.182 }, 00:17:54.182 "auth": { 00:17:54.182 "state": "completed", 00:17:54.182 "digest": "sha512", 00:17:54.182 "dhgroup": "ffdhe3072" 00:17:54.182 } 00:17:54.182 } 00:17:54.182 ]' 00:17:54.182 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:54.182 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:54.182 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:54.440 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:54.440 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:54.440 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:54.440 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:54.440 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:54.440 18:12:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:NDdkYTZhMzkxOGIxYTcyMjc1Njc5NzdmYzBmYjNjMWRiZjJiN2RiNzg0MGJiOTRhM2I3YWI0YzBhY2U5NWY0YpKo2wI=: 00:17:55.007 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:55.007 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:55.007 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:17:55.007 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.007 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.007 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.007 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:55.007 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:55.007 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:55.007 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:55.266 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:17:55.266 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:55.266 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:55.266 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:55.266 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:55.266 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:55.266 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:55.266 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.266 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.266 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.266 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:55.266 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:55.525 00:17:55.525 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:55.525 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:55.525 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:55.784 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.784 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:55.784 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.784 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.784 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.784 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:55.784 { 00:17:55.784 "cntlid": 121, 00:17:55.784 "qid": 0, 00:17:55.784 "state": "enabled", 00:17:55.784 "thread": "nvmf_tgt_poll_group_000", 00:17:55.784 "listen_address": { 00:17:55.784 "trtype": "TCP", 00:17:55.784 "adrfam": "IPv4", 00:17:55.784 "traddr": "10.0.0.2", 00:17:55.784 "trsvcid": "4420" 00:17:55.784 }, 00:17:55.784 "peer_address": { 00:17:55.784 "trtype": "TCP", 00:17:55.784 "adrfam": 
"IPv4", 00:17:55.784 "traddr": "10.0.0.1", 00:17:55.784 "trsvcid": "48344" 00:17:55.784 }, 00:17:55.784 "auth": { 00:17:55.784 "state": "completed", 00:17:55.784 "digest": "sha512", 00:17:55.784 "dhgroup": "ffdhe4096" 00:17:55.784 } 00:17:55.784 } 00:17:55.784 ]' 00:17:55.784 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:55.784 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:55.784 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:55.784 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:55.784 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:55.784 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:55.784 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:55.784 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.043 18:12:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:YTRmMGQ3Y2M0NzNjOWU2ZDRkY2E2NmJkYThmNzcwNjBmNTY3YzczNDZmMWYxNDIyepjEhg==: --dhchap-ctrl-secret DHHC-1:03:MmFlNzlkYWFjZWYzMjU5ZGNlMDAxMzUwZTU0M2VkNTc5NGNmZmQ1Nzc2NTA1MWZmZmExNDNmM2UwZGNlZThhMuRHrdo=: 00:17:56.610 18:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:56.610 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:56.610 18:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:17:56.610 18:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.610 18:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.610 18:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.610 18:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:56.610 18:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:56.610 18:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:56.610 18:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:17:56.610 18:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:56.610 18:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:56.610 
18:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:56.610 18:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:56.610 18:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:56.610 18:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:56.610 18:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.610 18:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.610 18:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.610 18:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:56.610 18:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:56.869 00:17:57.127 18:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:57.127 18:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:57.127 18:12:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:57.127 18:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.127 18:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:57.127 18:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.127 18:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.127 18:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.127 18:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:57.127 { 00:17:57.127 "cntlid": 123, 00:17:57.127 "qid": 0, 00:17:57.127 "state": "enabled", 00:17:57.127 "thread": "nvmf_tgt_poll_group_000", 00:17:57.127 "listen_address": { 00:17:57.127 "trtype": "TCP", 00:17:57.127 "adrfam": "IPv4", 00:17:57.127 "traddr": "10.0.0.2", 00:17:57.127 "trsvcid": "4420" 00:17:57.127 }, 00:17:57.127 "peer_address": { 00:17:57.127 "trtype": "TCP", 00:17:57.127 "adrfam": "IPv4", 00:17:57.127 "traddr": "10.0.0.1", 00:17:57.127 "trsvcid": "47948" 00:17:57.127 }, 00:17:57.127 "auth": { 00:17:57.127 "state": "completed", 00:17:57.127 "digest": "sha512", 00:17:57.127 "dhgroup": "ffdhe4096" 00:17:57.127 } 00:17:57.127 } 00:17:57.127 ]' 00:17:57.127 18:12:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:57.128 18:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:57.128 18:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:57.386 18:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:57.386 18:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:57.386 18:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:57.386 18:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:57.386 18:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:57.386 18:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:OTYzMTliOTdkNjdiM2NkYWU2YmE4N2EyNjY0ZjgzYmEzqmAO: --dhchap-ctrl-secret DHHC-1:02:ZWZlM2NhY2IyN2ZhYzcwZDM0MzJlMzJjNTk2ZGNmY2ZjMjZjMTgyMWJlZjZlNmIyxRA4HA==: 00:17:57.953 18:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:57.953 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:57.953 18:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:17:57.953 18:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.953 18:12:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.953 18:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.953 18:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:57.953 18:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:57.953 18:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:58.212 18:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:17:58.212 18:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:58.212 18:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:58.212 18:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:58.212 18:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:58.212 18:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:17:58.212 18:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:58.212 18:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.212 18:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.212 18:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.212 18:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:58.212 18:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:58.470 00:17:58.470 18:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:58.470 18:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:58.470 18:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:58.729 18:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.729 18:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:58.729 18:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.729 18:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.729 18:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.729 18:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:58.729 { 00:17:58.729 "cntlid": 125, 00:17:58.729 "qid": 0, 00:17:58.729 "state": "enabled", 00:17:58.729 "thread": "nvmf_tgt_poll_group_000", 00:17:58.729 "listen_address": { 00:17:58.729 "trtype": "TCP", 00:17:58.729 "adrfam": "IPv4", 00:17:58.729 "traddr": "10.0.0.2", 00:17:58.729 "trsvcid": "4420" 00:17:58.729 }, 00:17:58.729 "peer_address": { 00:17:58.729 "trtype": "TCP", 00:17:58.729 "adrfam": "IPv4", 00:17:58.729 "traddr": "10.0.0.1", 00:17:58.729 "trsvcid": "47968" 00:17:58.729 }, 00:17:58.729 "auth": { 00:17:58.729 "state": "completed", 00:17:58.729 "digest": "sha512", 00:17:58.729 "dhgroup": "ffdhe4096" 00:17:58.729 } 00:17:58.729 } 00:17:58.729 ]' 00:17:58.729 18:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:58.729 18:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:58.729 18:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:58.729 
18:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:58.729 18:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:58.729 18:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:58.729 18:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:58.729 18:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:58.988 18:12:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:YjA2Yzc4ZjdjZTJiMzQ4ZjJhMGQ3ZmRhMmZkZDIyYjAyYmMxY2JkOGRhN2MwZThj4zGs4w==: --dhchap-ctrl-secret DHHC-1:01:NmYyMzI5ZGFlODRhMmZiMWRhMzQ5ZTc1NTdlYjAzYmbkqxC8: 00:17:59.555 18:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:59.555 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:59.555 18:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:17:59.555 18:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.555 18:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.555 18:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.555 18:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:59.555 18:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:59.555 18:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:59.814 18:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:17:59.814 18:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:59.814 18:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:59.814 18:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:59.814 18:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:59.814 18:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:59.814 18:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:17:59.814 18:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:59.814 18:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.814 18:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.814 18:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:59.814 18:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:00.073 00:18:00.073 18:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:00.073 18:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:00.073 18:12:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:00.073 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.073 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:00.073 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.073 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.073 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.073 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:00.073 { 00:18:00.073 "cntlid": 127, 00:18:00.073 "qid": 0, 00:18:00.073 "state": "enabled", 00:18:00.073 "thread": "nvmf_tgt_poll_group_000", 00:18:00.073 "listen_address": { 00:18:00.073 "trtype": "TCP", 00:18:00.073 "adrfam": "IPv4", 00:18:00.073 "traddr": "10.0.0.2", 00:18:00.073 "trsvcid": "4420" 00:18:00.073 }, 00:18:00.073 "peer_address": { 00:18:00.073 "trtype": "TCP", 00:18:00.073 "adrfam": "IPv4", 00:18:00.073 "traddr": "10.0.0.1", 00:18:00.073 "trsvcid": "48002" 00:18:00.073 }, 00:18:00.073 "auth": { 00:18:00.073 "state": "completed", 00:18:00.073 "digest": "sha512", 00:18:00.073 "dhgroup": "ffdhe4096" 00:18:00.073 } 00:18:00.073 } 00:18:00.073 ]' 00:18:00.073 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:00.073 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:00.073 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:00.341 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:00.341 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:00.341 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:00.341 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:00.341 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:00.341 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:NDdkYTZhMzkxOGIxYTcyMjc1Njc5NzdmYzBmYjNjMWRiZjJiN2RiNzg0MGJiOTRhM2I3YWI0YzBhY2U5NWY0YpKo2wI=: 00:18:00.994 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:00.994 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:00.994 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:18:00.994 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.994 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.994 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.994 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:00.994 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:00.994 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:00.994 18:12:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:01.253 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:18:01.253 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:01.253 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:01.253 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:01.253 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:01.253 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:01.253 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:01.253 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.253 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.253 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.253 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:01.253 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:01.511 00:18:01.511 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:01.511 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:01.512 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:01.769 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.769 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:01.769 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.769 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.769 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.769 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:01.769 { 00:18:01.769 "cntlid": 129, 00:18:01.769 "qid": 0, 00:18:01.769 "state": "enabled", 00:18:01.769 "thread": "nvmf_tgt_poll_group_000", 00:18:01.769 "listen_address": { 00:18:01.769 "trtype": "TCP", 00:18:01.769 "adrfam": "IPv4", 00:18:01.769 "traddr": "10.0.0.2", 00:18:01.769 "trsvcid": "4420" 00:18:01.769 }, 00:18:01.769 "peer_address": { 00:18:01.769 "trtype": "TCP", 00:18:01.769 "adrfam": "IPv4", 00:18:01.769 "traddr": "10.0.0.1", 00:18:01.769 "trsvcid": "48028" 00:18:01.769 }, 00:18:01.769 "auth": { 00:18:01.769 "state": "completed", 00:18:01.769 "digest": "sha512", 00:18:01.769 "dhgroup": "ffdhe6144" 00:18:01.769 } 00:18:01.769 } 00:18:01.769 ]' 00:18:01.769 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:01.769 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:01.769 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:01.769 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:01.769 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:01.769 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:01.769 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:01.769 18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:02.027 
18:12:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:YTRmMGQ3Y2M0NzNjOWU2ZDRkY2E2NmJkYThmNzcwNjBmNTY3YzczNDZmMWYxNDIyepjEhg==: --dhchap-ctrl-secret DHHC-1:03:MmFlNzlkYWFjZWYzMjU5ZGNlMDAxMzUwZTU0M2VkNTc5NGNmZmQ1Nzc2NTA1MWZmZmExNDNmM2UwZGNlZThhMuRHrdo=: 00:18:02.594 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:02.594 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:02.594 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:18:02.594 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.594 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.594 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.594 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:02.594 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:02.594 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:02.594 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:18:02.594 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:02.594 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:02.594 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:02.594 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:02.594 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:02.594 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:02.594 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.594 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.853 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.853 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:02.853 18:12:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:03.111 00:18:03.111 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:03.111 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:03.111 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:03.370 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.370 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:03.370 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.370 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.370 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.370 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:03.370 { 00:18:03.370 "cntlid": 131, 00:18:03.370 "qid": 0, 00:18:03.370 "state": "enabled", 00:18:03.370 "thread": "nvmf_tgt_poll_group_000", 00:18:03.370 "listen_address": { 00:18:03.370 "trtype": "TCP", 00:18:03.370 "adrfam": "IPv4", 00:18:03.370 "traddr": "10.0.0.2", 00:18:03.370 "trsvcid": "4420" 00:18:03.370 }, 00:18:03.370 "peer_address": { 00:18:03.370 "trtype": "TCP", 00:18:03.370 "adrfam": "IPv4", 00:18:03.370 "traddr": "10.0.0.1", 00:18:03.370 "trsvcid": "48052" 00:18:03.370 }, 00:18:03.370 "auth": { 00:18:03.370 "state": "completed", 00:18:03.370 "digest": "sha512", 00:18:03.370 "dhgroup": "ffdhe6144" 00:18:03.370 } 00:18:03.370 } 00:18:03.370 ]' 00:18:03.370 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:03.370 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:03.370 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:03.370 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:03.370 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:03.370 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:03.370 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:03.370 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:03.628 18:12:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret 
DHHC-1:01:OTYzMTliOTdkNjdiM2NkYWU2YmE4N2EyNjY0ZjgzYmEzqmAO: --dhchap-ctrl-secret DHHC-1:02:ZWZlM2NhY2IyN2ZhYzcwZDM0MzJlMzJjNTk2ZGNmY2ZjMjZjMTgyMWJlZjZlNmIyxRA4HA==: 00:18:04.193 18:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:04.193 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:04.193 18:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:18:04.193 18:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.193 18:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.193 18:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.193 18:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:04.193 18:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:04.193 18:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:04.193 18:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:18:04.193 18:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:04.193 18:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:04.193 18:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:04.193 18:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:04.193 18:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:04.193 18:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:04.193 18:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.193 18:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.193 18:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.193 18:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:04.193 18:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:04.758 
00:18:04.758 18:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:04.758 18:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:04.758 18:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:04.758 18:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.758 18:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:04.758 18:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.758 18:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.758 18:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.758 18:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:04.758 { 00:18:04.758 "cntlid": 133, 00:18:04.758 "qid": 0, 00:18:04.758 "state": "enabled", 00:18:04.758 "thread": "nvmf_tgt_poll_group_000", 00:18:04.758 "listen_address": { 00:18:04.758 "trtype": "TCP", 00:18:04.758 "adrfam": "IPv4", 00:18:04.758 "traddr": "10.0.0.2", 00:18:04.758 "trsvcid": "4420" 00:18:04.758 }, 00:18:04.758 "peer_address": { 00:18:04.758 "trtype": "TCP", 00:18:04.759 "adrfam": "IPv4", 00:18:04.759 "traddr": "10.0.0.1", 00:18:04.759 "trsvcid": "48086" 00:18:04.759 }, 00:18:04.759 "auth": { 00:18:04.759 "state": "completed", 00:18:04.759 "digest": "sha512", 00:18:04.759 "dhgroup": "ffdhe6144" 00:18:04.759 } 00:18:04.759 } 00:18:04.759 ]' 00:18:04.759 18:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:04.759 18:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:04.759 18:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:04.759 18:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:04.759 18:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:05.017 18:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:05.017 18:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:05.017 18:12:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:05.017 18:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:YjA2Yzc4ZjdjZTJiMzQ4ZjJhMGQ3ZmRhMmZkZDIyYjAyYmMxY2JkOGRhN2MwZThj4zGs4w==: --dhchap-ctrl-secret DHHC-1:01:NmYyMzI5ZGFlODRhMmZiMWRhMzQ5ZTc1NTdlYjAzYmbkqxC8: 00:18:05.583 18:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:05.583 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:18:05.583 18:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:18:05.583 18:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.583 18:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.583 18:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.583 18:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:05.583 18:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:05.583 18:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:05.840 18:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:18:05.840 18:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:05.840 18:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:05.840 18:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:05.840 18:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:05.840 18:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:05.840 18:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:18:05.840 18:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.840 18:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.840 18:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.840 18:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:05.840 18:12:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:06.098 00:18:06.098 18:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:06.098 18:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:06.098 18:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:18:06.355 18:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.355 18:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:06.355 18:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.355 18:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.355 18:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.355 18:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:06.355 { 00:18:06.355 "cntlid": 135, 00:18:06.355 "qid": 0, 00:18:06.355 "state": "enabled", 00:18:06.355 "thread": "nvmf_tgt_poll_group_000", 00:18:06.355 "listen_address": { 00:18:06.355 "trtype": "TCP", 00:18:06.355 "adrfam": "IPv4", 00:18:06.355 "traddr": "10.0.0.2", 00:18:06.355 "trsvcid": "4420" 00:18:06.355 }, 00:18:06.355 "peer_address": { 00:18:06.355 "trtype": "TCP", 00:18:06.355 "adrfam": "IPv4", 00:18:06.355 "traddr": "10.0.0.1", 00:18:06.355 "trsvcid": "48112" 00:18:06.355 }, 00:18:06.355 "auth": { 00:18:06.355 "state": "completed", 00:18:06.355 "digest": "sha512", 00:18:06.355 "dhgroup": "ffdhe6144" 00:18:06.355 } 00:18:06.355 } 00:18:06.355 ]' 00:18:06.355 18:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:06.355 18:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:06.355 18:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:06.355 18:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:06.355 18:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:06.355 18:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:06.355 18:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:06.355 18:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:06.612 18:12:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:NDdkYTZhMzkxOGIxYTcyMjc1Njc5NzdmYzBmYjNjMWRiZjJiN2RiNzg0MGJiOTRhM2I3YWI0YzBhY2U5NWY0YpKo2wI=: 00:18:07.178 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:07.178 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:07.178 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:18:07.178 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.178 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:18:07.178 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.178 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:07.178 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:07.178 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:07.178 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:07.437 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:18:07.437 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:07.437 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:07.437 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:07.437 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:07.437 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:07.437 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:07.437 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.437 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.437 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.437 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:07.437 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:08.003 00:18:08.003 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:08.003 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:08.003 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:08.003 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.003 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
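Each digest/dhgroup/key combination in this stretch of the log exercises the same connect_authenticate round from target/auth.sh. Condensed into one sequence, a single round looks roughly like the sketch below. This is a sketch only: the RPC script path, socket, addresses, NQNs, and flags are the ones visible in this log, while the HOSTRPC/TGTRPC shorthands and the KEY0/CKEY0 placeholders (standing for the DHHC-1:... secret strings printed above) are illustrative.

  # One connect_authenticate round, condensed from the commands in this log.
  # HOSTRPC/TGTRPC are illustrative shorthands: the host side pins
  # -s /var/tmp/host.sock, while rpc_cmd in the log talks to the nvmf_tgt
  # socket (in this job the target runs in a netns, so rpc_cmd wraps
  # ip netns exec).
  HOSTRPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock"
  TGTRPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
  SUBNQN=nqn.2024-03.io.spdk:cnode0
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562
  HOSTID=803833e2-2ada-e911-906e-0017a4403562

  # 1. restrict the host initiator to the digest/dhgroup pair under test
  $HOSTRPC bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
  # 2. register the host on the subsystem with its DH-HMAC-CHAP key pair
  $TGTRPC nvmf_subsystem_add_host $SUBNQN $HOSTNQN --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # 3. attach a controller with the matching keys, then verify the qpair
  $HOSTRPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q $HOSTNQN -n $SUBNQN --dhchap-key key0 --dhchap-ctrlr-key ckey0
  [[ $($HOSTRPC bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  [[ $($TGTRPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth.state') == completed ]]
  # 4. tear down: detach, authenticate once via the kernel initiator, remove the host
  $HOSTRPC bdev_nvme_detach_controller nvme0
  nvme connect -t tcp -a 10.0.0.2 -n $SUBNQN -i 1 -q $HOSTNQN --hostid $HOSTID \
      --dhchap-secret "$KEY0" --dhchap-ctrl-secret "$CKEY0"
  nvme disconnect -n $SUBNQN
  $TGTRPC nvmf_subsystem_remove_host $SUBNQN $HOSTNQN

The jq checks mirror the assertions at target/auth.sh@44-48 above: the controller must come back named nvme0, and the qpair's auth block must report the expected digest, dhgroup, and a "completed" state.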
00:18:08.003 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.003 18:13:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.003 18:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.003 18:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:08.003 { 00:18:08.003 "cntlid": 137, 00:18:08.003 "qid": 0, 00:18:08.003 "state": "enabled", 00:18:08.003 "thread": "nvmf_tgt_poll_group_000", 00:18:08.003 "listen_address": { 00:18:08.004 "trtype": "TCP", 00:18:08.004 "adrfam": "IPv4", 00:18:08.004 "traddr": "10.0.0.2", 00:18:08.004 "trsvcid": "4420" 00:18:08.004 }, 00:18:08.004 "peer_address": { 00:18:08.004 "trtype": "TCP", 00:18:08.004 "adrfam": "IPv4", 00:18:08.004 "traddr": "10.0.0.1", 00:18:08.004 "trsvcid": "40586" 00:18:08.004 }, 00:18:08.004 "auth": { 00:18:08.004 "state": "completed", 00:18:08.004 "digest": "sha512", 00:18:08.004 "dhgroup": "ffdhe8192" 00:18:08.004 } 00:18:08.004 } 00:18:08.004 ]' 00:18:08.004 18:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:08.004 18:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:08.004 18:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:08.262 18:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:08.262 18:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:08.262 18:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:08.262 18:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:08.262 18:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:08.262 18:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:YTRmMGQ3Y2M0NzNjOWU2ZDRkY2E2NmJkYThmNzcwNjBmNTY3YzczNDZmMWYxNDIyepjEhg==: --dhchap-ctrl-secret DHHC-1:03:MmFlNzlkYWFjZWYzMjU5ZGNlMDAxMzUwZTU0M2VkNTc5NGNmZmQ1Nzc2NTA1MWZmZmExNDNmM2UwZGNlZThhMuRHrdo=: 00:18:08.829 18:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:08.829 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:08.829 18:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:18:08.829 18:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.829 18:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.829 18:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.829 18:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:08.829 18:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:08.829 18:13:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:09.088 18:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:18:09.088 18:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:09.088 18:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:09.088 18:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:09.088 18:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:09.088 18:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:09.088 18:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:09.088 18:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.088 18:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.088 18:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.088 18:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:09.088 18:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:09.655 00:18:09.655 18:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:09.655 18:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:09.655 18:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:09.655 18:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.655 18:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:09.655 18:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.655 18:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.655 18:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.655 18:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:09.655 { 00:18:09.655 "cntlid": 139, 00:18:09.655 "qid": 0, 00:18:09.655 "state": "enabled", 00:18:09.655 "thread": "nvmf_tgt_poll_group_000", 00:18:09.655 "listen_address": { 00:18:09.655 "trtype": "TCP", 00:18:09.656 "adrfam": "IPv4", 00:18:09.656 "traddr": "10.0.0.2", 00:18:09.656 "trsvcid": "4420" 00:18:09.656 }, 00:18:09.656 "peer_address": { 00:18:09.656 "trtype": "TCP", 00:18:09.656 "adrfam": "IPv4", 00:18:09.656 "traddr": "10.0.0.1", 00:18:09.656 "trsvcid": "40626" 00:18:09.656 }, 00:18:09.656 "auth": { 00:18:09.656 "state": "completed", 00:18:09.656 "digest": "sha512", 00:18:09.656 "dhgroup": "ffdhe8192" 00:18:09.656 } 00:18:09.656 } 00:18:09.656 ]' 00:18:09.656 18:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:09.914 18:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:09.914 18:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:09.914 18:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:09.914 18:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:09.914 18:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:09.914 18:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:09.914 18:13:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:10.172 18:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:OTYzMTliOTdkNjdiM2NkYWU2YmE4N2EyNjY0ZjgzYmEzqmAO: --dhchap-ctrl-secret DHHC-1:02:ZWZlM2NhY2IyN2ZhYzcwZDM0MzJlMzJjNTk2ZGNmY2ZjMjZjMTgyMWJlZjZlNmIyxRA4HA==: 00:18:10.736 18:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:10.736 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:10.736 18:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:18:10.736 18:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.736 18:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.736 18:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.736 18:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:10.736 18:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:10.736 18:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:10.736 18:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:18:10.736 18:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:10.736 18:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:10.736 18:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:10.736 18:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:10.736 18:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:10.736 18:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:10.737 18:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.737 18:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.737 18:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.737 18:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:10.737 18:13:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:11.304 00:18:11.304 18:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:11.304 18:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:11.304 18:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:11.304 18:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.304 18:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:11.304 18:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.304 18:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.562 18:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.562 18:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:11.562 { 00:18:11.562 "cntlid": 141, 00:18:11.562 "qid": 0, 00:18:11.562 "state": "enabled", 00:18:11.562 "thread": "nvmf_tgt_poll_group_000", 00:18:11.562 "listen_address": 
{ 00:18:11.562 "trtype": "TCP", 00:18:11.562 "adrfam": "IPv4", 00:18:11.562 "traddr": "10.0.0.2", 00:18:11.562 "trsvcid": "4420" 00:18:11.562 }, 00:18:11.562 "peer_address": { 00:18:11.562 "trtype": "TCP", 00:18:11.562 "adrfam": "IPv4", 00:18:11.562 "traddr": "10.0.0.1", 00:18:11.562 "trsvcid": "40648" 00:18:11.562 }, 00:18:11.562 "auth": { 00:18:11.562 "state": "completed", 00:18:11.562 "digest": "sha512", 00:18:11.562 "dhgroup": "ffdhe8192" 00:18:11.562 } 00:18:11.562 } 00:18:11.562 ]' 00:18:11.562 18:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:11.562 18:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:11.562 18:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:11.562 18:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:11.562 18:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:11.562 18:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:11.562 18:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:11.562 18:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:11.820 18:13:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:YjA2Yzc4ZjdjZTJiMzQ4ZjJhMGQ3ZmRhMmZkZDIyYjAyYmMxY2JkOGRhN2MwZThj4zGs4w==: --dhchap-ctrl-secret DHHC-1:01:NmYyMzI5ZGFlODRhMmZiMWRhMzQ5ZTc1NTdlYjAzYmbkqxC8: 00:18:12.387 18:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:12.387 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:12.387 18:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:18:12.387 18:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.387 18:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.387 18:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.387 18:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:12.387 18:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:12.387 18:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:12.387 18:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:18:12.387 18:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:12.387 18:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:12.387 18:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:12.387 18:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:12.388 18:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:12.388 18:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:18:12.388 18:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.388 18:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.388 18:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.388 18:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:12.388 18:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:12.955 00:18:12.955 18:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:12.955 18:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:12.955 18:13:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:13.214 18:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.214 18:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:13.214 18:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.214 18:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.214 18:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.214 18:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:13.214 { 00:18:13.214 "cntlid": 143, 00:18:13.214 "qid": 0, 00:18:13.214 "state": "enabled", 00:18:13.214 "thread": "nvmf_tgt_poll_group_000", 00:18:13.214 "listen_address": { 00:18:13.214 "trtype": "TCP", 00:18:13.214 "adrfam": "IPv4", 00:18:13.214 "traddr": "10.0.0.2", 00:18:13.214 "trsvcid": "4420" 00:18:13.214 }, 00:18:13.214 "peer_address": { 00:18:13.214 "trtype": "TCP", 00:18:13.214 "adrfam": "IPv4", 00:18:13.214 "traddr": "10.0.0.1", 00:18:13.214 "trsvcid": "40672" 00:18:13.214 }, 00:18:13.214 "auth": { 00:18:13.214 "state": "completed", 00:18:13.214 "digest": "sha512", 00:18:13.214 "dhgroup": 
"ffdhe8192" 00:18:13.214 } 00:18:13.214 } 00:18:13.214 ]' 00:18:13.214 18:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:13.214 18:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:13.214 18:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:13.214 18:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:13.214 18:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:13.214 18:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:13.214 18:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:13.214 18:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:13.473 18:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:NDdkYTZhMzkxOGIxYTcyMjc1Njc5NzdmYzBmYjNjMWRiZjJiN2RiNzg0MGJiOTRhM2I3YWI0YzBhY2U5NWY0YpKo2wI=: 00:18:14.041 18:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:14.041 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:14.041 18:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:18:14.041 18:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.041 18:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.041 18:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.041 18:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:18:14.041 18:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:18:14.041 18:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:18:14.041 18:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:14.041 18:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:14.041 18:13:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:14.041 18:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:18:14.041 18:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:14.041 18:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:14.041 18:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:14.041 18:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:14.041 18:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:14.041 18:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:14.041 18:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.041 18:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.041 18:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.041 18:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:14.041 18:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:14.609 00:18:14.609 18:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:14.609 18:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:14.609 18:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:14.867 18:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.867 18:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:14.867 18:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.867 18:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.867 18:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.867 18:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:14.867 { 00:18:14.867 "cntlid": 145, 00:18:14.867 "qid": 0, 00:18:14.867 "state": "enabled", 00:18:14.867 "thread": "nvmf_tgt_poll_group_000", 00:18:14.867 "listen_address": { 00:18:14.867 "trtype": "TCP", 00:18:14.867 "adrfam": "IPv4", 00:18:14.867 "traddr": "10.0.0.2", 00:18:14.867 "trsvcid": "4420" 00:18:14.867 }, 00:18:14.868 "peer_address": { 00:18:14.868 "trtype": "TCP", 00:18:14.868 "adrfam": "IPv4", 00:18:14.868 "traddr": "10.0.0.1", 00:18:14.868 "trsvcid": "40702" 00:18:14.868 }, 00:18:14.868 "auth": { 00:18:14.868 
"state": "completed", 00:18:14.868 "digest": "sha512", 00:18:14.868 "dhgroup": "ffdhe8192" 00:18:14.868 } 00:18:14.868 } 00:18:14.868 ]' 00:18:14.868 18:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:14.868 18:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:14.868 18:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:14.868 18:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:14.868 18:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:14.868 18:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:14.868 18:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:14.868 18:13:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:15.126 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:YTRmMGQ3Y2M0NzNjOWU2ZDRkY2E2NmJkYThmNzcwNjBmNTY3YzczNDZmMWYxNDIyepjEhg==: --dhchap-ctrl-secret DHHC-1:03:MmFlNzlkYWFjZWYzMjU5ZGNlMDAxMzUwZTU0M2VkNTc5NGNmZmQ1Nzc2NTA1MWZmZmExNDNmM2UwZGNlZThhMuRHrdo=: 00:18:15.692 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:15.692 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:15.692 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:18:15.692 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.692 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.692 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.692 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 00:18:15.692 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.692 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.692 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.692 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:15.692 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:15.692 18:13:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:15.692 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:18:15.692 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:15.692 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:15.692 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:15.692 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:15.692 18:13:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:16.260 request: 00:18:16.260 { 00:18:16.260 "name": "nvme0", 00:18:16.260 "trtype": "tcp", 00:18:16.261 "traddr": "10.0.0.2", 00:18:16.261 "adrfam": "ipv4", 00:18:16.261 "trsvcid": "4420", 00:18:16.261 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:16.261 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562", 00:18:16.261 "prchk_reftag": false, 00:18:16.261 "prchk_guard": false, 00:18:16.261 "hdgst": false, 00:18:16.261 "ddgst": false, 00:18:16.261 "dhchap_key": "key2", 00:18:16.261 "method": "bdev_nvme_attach_controller", 00:18:16.261 "req_id": 1 00:18:16.261 } 00:18:16.261 Got JSON-RPC error response 00:18:16.261 response: 00:18:16.261 { 00:18:16.261 "code": -5, 00:18:16.261 "message": "Input/output error" 00:18:16.261 } 00:18:16.261 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:16.261 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:16.261 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:16.261 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:16.261 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:18:16.261 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.261 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.261 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.261 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:16.261 
18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.261 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.261 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.261 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:16.261 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:16.261 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:16.261 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:18:16.261 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:16.261 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:16.261 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:16.261 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:16.261 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:16.519 request: 00:18:16.519 { 00:18:16.519 "name": "nvme0", 00:18:16.519 "trtype": "tcp", 00:18:16.519 "traddr": "10.0.0.2", 00:18:16.519 "adrfam": "ipv4", 00:18:16.519 "trsvcid": "4420", 00:18:16.519 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:16.519 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562", 00:18:16.519 "prchk_reftag": false, 00:18:16.519 "prchk_guard": false, 00:18:16.519 "hdgst": false, 00:18:16.519 "ddgst": false, 00:18:16.519 "dhchap_key": "key1", 00:18:16.519 "dhchap_ctrlr_key": "ckey2", 00:18:16.519 "method": "bdev_nvme_attach_controller", 00:18:16.519 "req_id": 1 00:18:16.519 } 00:18:16.519 Got JSON-RPC error response 00:18:16.519 response: 00:18:16.519 { 00:18:16.519 "code": -5, 00:18:16.519 "message": "Input/output error" 00:18:16.519 } 00:18:16.519 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:16.519 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:16.519 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:16.519 18:13:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:16.519 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:18:16.519 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.519 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.519 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.519 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key1 00:18:16.519 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.519 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.519 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.520 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:16.520 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:16.520 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:16.520 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:18:16.520 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:16.520 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:16.520 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:16.520 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:16.520 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:17.087 request: 00:18:17.088 { 00:18:17.088 "name": "nvme0", 00:18:17.088 "trtype": "tcp", 00:18:17.088 "traddr": "10.0.0.2", 00:18:17.088 "adrfam": "ipv4", 00:18:17.088 "trsvcid": "4420", 00:18:17.088 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:17.088 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562", 00:18:17.088 "prchk_reftag": false, 00:18:17.088 "prchk_guard": false, 00:18:17.088 "hdgst": false, 00:18:17.088 "ddgst": false, 00:18:17.088 "dhchap_key": "key1", 00:18:17.088 "dhchap_ctrlr_key": "ckey1", 00:18:17.088 "method": "bdev_nvme_attach_controller", 00:18:17.088 "req_id": 1 00:18:17.088 } 00:18:17.088 Got JSON-RPC error response 00:18:17.088 response: 00:18:17.088 { 00:18:17.088 "code": -5, 00:18:17.088 "message": "Input/output error" 00:18:17.088 } 00:18:17.088 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:17.088 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:17.088 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:17.088 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:17.088 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:18:17.088 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.088 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.088 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.088 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 3400825 00:18:17.088 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 3400825 ']' 00:18:17.088 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 3400825 00:18:17.088 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:18:17.088 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:17.088 18:13:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3400825 00:18:17.088 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:17.088 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:17.088 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3400825' 00:18:17.088 killing process with pid 3400825 00:18:17.088 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 3400825 00:18:17.088 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 3400825 00:18:17.347 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:18:17.347 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:17.347 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:17.347 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.347 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # 
nvmfpid=3421497 00:18:17.347 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 3421497 00:18:17.347 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:18:17.347 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 3421497 ']' 00:18:17.347 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:17.347 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:17.347 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:17.347 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:17.347 18:13:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.285 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:18.285 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:18:18.285 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:18.285 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:18.285 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.285 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:18.285 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:18.285 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 3421497 00:18:18.285 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 3421497 ']' 00:18:18.285 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:18.285 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:18.285 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:18.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
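The three request/response dumps above are deliberate failure paths: the script's NOT wrapper runs the attach, requires a non-zero exit status, and the target rejects the handshake with JSON-RPC error -5 ("Input/output error") whenever the host presents a key or controller key that was not registered for it. A minimal sketch of one such check, assuming the same HOSTRPC/TGTRPC/NQN variables as the earlier sketch and paraphrasing the NOT helper with a plain if:

  # target registers key1 only; the host then offers key2 and must be rejected
  $TGTRPC nvmf_subsystem_add_host $SUBNQN $HOSTNQN --dhchap-key key1
  if $HOSTRPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q $HOSTNQN -n $SUBNQN --dhchap-key key2; then
      echo "FAIL: attach with a mismatched DH-HMAC-CHAP key succeeded" >&2
      exit 1
  fi   # rpc.py exits non-zero after the -5 Input/output error shown above

The killprocess/nvmfappstart pair in this stretch then replaces the first nvmf_tgt with a fresh instance started with --wait-for-rpc and -L nvmf_auth, so the remaining cases run with authentication debug logging enabled.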
00:18:18.285 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:18.285 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.285 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:18.285 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:18:18.285 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:18:18.285 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.285 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.545 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.545 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:18:18.545 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:18.545 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:18.545 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:18.545 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:18.545 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:18.545 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:18:18.545 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.545 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.545 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.545 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:18.545 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:18.804 00:18:18.804 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:18.804 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:18.804 18:13:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:19.063 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:19.063 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:19.063 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.063 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.063 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.063 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:19.063 { 00:18:19.063 "cntlid": 1, 00:18:19.063 "qid": 0, 00:18:19.063 "state": "enabled", 00:18:19.063 "thread": "nvmf_tgt_poll_group_000", 00:18:19.063 "listen_address": { 00:18:19.063 "trtype": "TCP", 00:18:19.063 "adrfam": "IPv4", 00:18:19.063 "traddr": "10.0.0.2", 00:18:19.063 "trsvcid": "4420" 00:18:19.063 }, 00:18:19.063 "peer_address": { 00:18:19.063 "trtype": "TCP", 00:18:19.063 "adrfam": "IPv4", 00:18:19.063 "traddr": "10.0.0.1", 00:18:19.063 "trsvcid": "38500" 00:18:19.063 }, 00:18:19.063 "auth": { 00:18:19.063 "state": "completed", 00:18:19.063 "digest": "sha512", 00:18:19.063 "dhgroup": "ffdhe8192" 00:18:19.063 } 00:18:19.063 } 00:18:19.063 ]' 00:18:19.063 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:19.063 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:19.063 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:19.063 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:19.063 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:19.322 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:19.322 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:19.322 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:19.322 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid 803833e2-2ada-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:NDdkYTZhMzkxOGIxYTcyMjc1Njc5NzdmYzBmYjNjMWRiZjJiN2RiNzg0MGJiOTRhM2I3YWI0YzBhY2U5NWY0YpKo2wI=: 00:18:19.890 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:19.890 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:19.890 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:18:19.890 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.890 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.890 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.890 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --dhchap-key key3 00:18:19.890 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.890 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.890 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.890 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:18:19.890 18:13:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:18:20.149 18:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:20.149 18:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:20.149 18:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:20.149 18:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:18:20.149 18:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:20.149 18:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:20.149 18:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:20.149 18:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:20.149 18:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:20.408 request: 00:18:20.408 { 00:18:20.408 "name": "nvme0", 00:18:20.408 "trtype": "tcp", 00:18:20.408 "traddr": "10.0.0.2", 00:18:20.408 "adrfam": "ipv4", 00:18:20.408 "trsvcid": "4420", 00:18:20.408 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:20.408 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562", 00:18:20.408 "prchk_reftag": false, 00:18:20.408 "prchk_guard": false, 00:18:20.408 "hdgst": false, 00:18:20.408 "ddgst": false, 00:18:20.408 "dhchap_key": "key3", 00:18:20.408 "method": "bdev_nvme_attach_controller", 00:18:20.408 "req_id": 1 00:18:20.408 } 00:18:20.408 Got JSON-RPC error response 00:18:20.408 response: 00:18:20.408 { 00:18:20.408 "code": -5, 00:18:20.408 "message": "Input/output error" 00:18:20.408 } 00:18:20.408 18:13:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:20.408 18:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:20.408 18:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:20.408 18:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:20.408 18:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:18:20.408 18:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:18:20.408 18:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:20.408 18:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:20.408 18:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:20.408 18:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:20.408 18:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:20.408 18:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:18:20.408 18:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:20.408 18:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:20.408 18:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:20.408 18:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:20.408 18:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:20.666 request: 00:18:20.666 { 00:18:20.666 "name": "nvme0", 00:18:20.666 "trtype": "tcp", 00:18:20.666 "traddr": "10.0.0.2", 00:18:20.666 "adrfam": "ipv4", 00:18:20.666 "trsvcid": "4420", 00:18:20.666 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:20.666 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562", 00:18:20.666 "prchk_reftag": false, 00:18:20.666 "prchk_guard": false, 00:18:20.666 "hdgst": false, 00:18:20.666 "ddgst": false, 00:18:20.666 "dhchap_key": "key3", 00:18:20.666 
"method": "bdev_nvme_attach_controller", 00:18:20.666 "req_id": 1 00:18:20.666 } 00:18:20.666 Got JSON-RPC error response 00:18:20.666 response: 00:18:20.666 { 00:18:20.666 "code": -5, 00:18:20.666 "message": "Input/output error" 00:18:20.666 } 00:18:20.666 18:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:20.666 18:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:20.666 18:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:20.666 18:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:20.666 18:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:18:20.666 18:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:18:20.666 18:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:18:20.666 18:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:20.666 18:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:20.666 18:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:20.925 18:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:18:20.925 18:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.925 18:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.925 18:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.925 18:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:18:20.925 18:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.925 18:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.925 18:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.925 18:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:20.925 18:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:20.925 18:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:20.925 18:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:18:20.925 18:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:20.925 18:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:20.925 18:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:20.925 18:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:20.925 18:13:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:21.184 request: 00:18:21.184 { 00:18:21.184 "name": "nvme0", 00:18:21.184 "trtype": "tcp", 00:18:21.184 "traddr": "10.0.0.2", 00:18:21.184 "adrfam": "ipv4", 00:18:21.184 "trsvcid": "4420", 00:18:21.184 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:21.184 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562", 00:18:21.184 "prchk_reftag": false, 00:18:21.184 "prchk_guard": false, 00:18:21.184 "hdgst": false, 00:18:21.184 "ddgst": false, 00:18:21.184 "dhchap_key": "key0", 00:18:21.184 "dhchap_ctrlr_key": "key1", 00:18:21.184 "method": "bdev_nvme_attach_controller", 00:18:21.184 "req_id": 1 00:18:21.184 } 00:18:21.184 Got JSON-RPC error response 00:18:21.184 response: 00:18:21.184 { 00:18:21.184 "code": -5, 00:18:21.184 "message": "Input/output error" 00:18:21.184 } 00:18:21.184 18:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:21.184 18:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:21.184 18:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:21.184 18:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:21.184 18:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:21.184 18:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:21.184 00:18:21.184 18:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:18:21.184 18:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 
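[editor's note] The NOT/valid_exec_arg wrappers traced above implement the negative-test idiom used throughout this run: once the host's allowed digests/dhgroups no longer match the target's, bdev_nvme_attach_controller is expected to fail (here with the -5 Input/output error), and the wrapper inverts the exit status so the test case passes exactly when the RPC errors out. A condensed sketch (helper name is illustrative):

not_sketch() {
    if "$@"; then
        return 1    # RPC unexpectedly succeeded -> negative test fails
    fi
    return 0        # RPC failed as expected -> negative test passes
}
# e.g.: not_sketch scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
#           -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 ... --dhchap-key key0 --dhchap-ctrlr-key key1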
00:18:21.184 18:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:21.443 18:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.443 18:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:21.443 18:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:21.733 18:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:18:21.733 18:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:18:21.733 18:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 3400974 00:18:21.733 18:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 3400974 ']' 00:18:21.733 18:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 3400974 00:18:21.733 18:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:18:21.733 18:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:21.733 18:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3400974 00:18:21.733 18:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:21.733 18:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:21.733 18:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3400974' 00:18:21.733 killing process with pid 3400974 00:18:21.733 18:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 3400974 00:18:21.733 18:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 3400974 00:18:22.001 18:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:18:22.001 18:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:22.001 18:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:18:22.001 18:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:22.001 18:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:18:22.001 18:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:22.001 18:13:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:22.001 rmmod nvme_tcp 00:18:22.001 rmmod nvme_fabrics 00:18:22.001 rmmod nvme_keyring 00:18:22.001 18:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:22.001 18:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:18:22.001 18:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:18:22.001 18:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- 
# '[' -n 3421497 ']' 00:18:22.001 18:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 3421497 00:18:22.001 18:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 3421497 ']' 00:18:22.001 18:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 3421497 00:18:22.001 18:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:18:22.001 18:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:22.001 18:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3421497 00:18:22.001 18:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:22.001 18:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:22.001 18:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3421497' 00:18:22.001 killing process with pid 3421497 00:18:22.001 18:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 3421497 00:18:22.001 18:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 3421497 00:18:22.260 18:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:22.260 18:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:22.260 18:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:22.260 18:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:22.260 18:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:22.260 18:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:22.260 18:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:22.260 18:13:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:24.796 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:24.796 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.5fz /tmp/spdk.key-sha256.CSR /tmp/spdk.key-sha384.uJ0 /tmp/spdk.key-sha512.kqP /tmp/spdk.key-sha512.ns5 /tmp/spdk.key-sha384.KR5 /tmp/spdk.key-sha256.eH8 '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:18:24.796 00:18:24.796 real 2m8.872s 00:18:24.796 user 4m54.875s 00:18:24.796 sys 0m20.771s 00:18:24.796 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:24.796 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.796 ************************************ 00:18:24.796 END TEST nvmf_auth_target 00:18:24.796 ************************************ 00:18:24.796 18:13:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:18:24.796 18:13:17 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:24.796 18:13:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:18:24.796 18:13:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:24.796 18:13:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:24.796 ************************************ 00:18:24.796 START TEST nvmf_bdevio_no_huge 00:18:24.796 ************************************ 00:18:24.796 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:24.796 * Looking for test storage... 00:18:24.796 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:24.796 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:24.796 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:18:24.796 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:24.796 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:24.796 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:24.796 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:24.796 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:24.796 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:24.796 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:24.796 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:24.796 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:24.796 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:24.796 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:18:24.796 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:18:24.796 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:24.796 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:24.796 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:24.796 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:24.796 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:24.796 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:24.796 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:24.796 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:24.796 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.796 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.796 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.796 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:18:24.796 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:24.796 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:18:24.796 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:24.796 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:24.796 18:13:17 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:24.796 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:24.796 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:24.796 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:24.796 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:24.796 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:24.796 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:24.796 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:24.796 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:18:24.796 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:24.796 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:24.796 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:24.796 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:24.796 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:24.796 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:24.796 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:24.796 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:24.796 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:24.796 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:24.796 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:18:24.796 18:13:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:30.069 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:30.069 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:18:30.069 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:30.069 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:30.069 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:30.069 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:30.069 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:30.069 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:18:30.069 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:30.069 18:13:22 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:18:30.069 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:18:30.069 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:18:30.069 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:18:30.069 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:18:30.069 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:18:30.069 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:30.069 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:30.069 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:30.069 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:30.069 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:30.069 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:30.069 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:30.069 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:30.069 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:30.069 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:30.069 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:30.069 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:30.069 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:30.069 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:30.069 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:30.069 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:30.069 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:30.069 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:30.069 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:30.069 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:30.069 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:30.069 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:30.069 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:30.069 18:13:22 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:30.069 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:30.069 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:30.069 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:30.069 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:30.069 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:30.069 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:30.069 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:30.069 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:30.069 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:30.069 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:30.069 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:30.069 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:30.069 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:30.069 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:30.069 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:30.069 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:30.069 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:30.069 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:30.069 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:30.069 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:30.069 Found net devices under 0000:86:00.0: cvl_0_0 00:18:30.069 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:30.069 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:30.069 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:30.069 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:30.069 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:30.069 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:30.069 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:30.069 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
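[editor's note] The NIC scan above is a sysfs walk: each candidate PCI function is checked for network interfaces under /sys/bus/pci/devices/<bdf>/net, and whatever appears there becomes the test's interface list. A standalone recap for the two e810 ports this run found (the PCI addresses and cvl_0_* names are specific to this machine):

for pci in 0000:86:00.0 0000:86:00.1; do
    for net in /sys/bus/pci/devices/"$pci"/net/*; do
        # interface name is the last path component, e.g. cvl_0_0
        [[ -e $net ]] && echo "Found net devices under $pci: ${net##*/}"
    done
done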
00:18:30.069 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:30.069 Found net devices under 0000:86:00.1: cvl_0_1 00:18:30.069 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:30.069 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:30.069 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:18:30.069 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:30.069 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:30.069 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:30.069 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:30.069 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:30.069 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:30.069 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:30.069 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:30.069 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:30.069 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:30.069 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:30.069 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:30.069 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:30.069 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:30.069 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:30.069 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:30.069 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:30.069 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:30.069 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:30.069 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:30.069 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:30.069 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:30.069 18:13:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:30.069 PING 10.0.0.2 
(10.0.0.2) 56(84) bytes of data. 00:18:30.069 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.156 ms 00:18:30.069 00:18:30.069 --- 10.0.0.2 ping statistics --- 00:18:30.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:30.069 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:18:30.069 18:13:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:30.069 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:30.069 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:18:30.069 00:18:30.069 --- 10.0.0.1 ping statistics --- 00:18:30.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:30.070 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:18:30.070 18:13:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:30.070 18:13:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:18:30.070 18:13:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:30.070 18:13:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:30.070 18:13:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:30.070 18:13:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:30.070 18:13:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:30.070 18:13:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:30.070 18:13:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:30.070 18:13:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:18:30.070 18:13:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:30.070 18:13:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:30.070 18:13:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:30.070 18:13:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=3425769 00:18:30.070 18:13:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 3425769 00:18:30.070 18:13:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:18:30.070 18:13:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 3425769 ']' 00:18:30.070 18:13:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:30.070 18:13:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:30.070 18:13:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:30.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
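[editor's note] For reference, the nvmf_tcp_init plumbing traced above builds this topology: the target-side port (cvl_0_0) moves into its own network namespace with 10.0.0.2, the initiator port (cvl_0_1) stays in the root namespace with 10.0.0.1, port 4420 is opened in iptables, and both directions are ping-verified. Condensed from the same commands as the trace:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1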
00:18:30.070 18:13:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:30.070 18:13:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:30.070 [2024-07-24 18:13:23.093198] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 00:18:30.070 [2024-07-24 18:13:23.093239] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:18:30.327 [2024-07-24 18:13:23.156512] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:30.327 [2024-07-24 18:13:23.239007] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:30.328 [2024-07-24 18:13:23.239045] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:30.328 [2024-07-24 18:13:23.239052] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:30.328 [2024-07-24 18:13:23.239058] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:30.328 [2024-07-24 18:13:23.239063] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:30.328 [2024-07-24 18:13:23.239174] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:18:30.328 [2024-07-24 18:13:23.239278] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:18:30.328 [2024-07-24 18:13:23.239385] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:30.328 [2024-07-24 18:13:23.239386] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:18:30.892 18:13:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:30.892 18:13:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:18:30.892 18:13:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:30.892 18:13:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:30.892 18:13:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:30.892 18:13:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:30.892 18:13:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:30.892 18:13:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.892 18:13:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:30.892 [2024-07-24 18:13:23.945520] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:30.892 18:13:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.892 18:13:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:30.892 18:13:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.892 18:13:23 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:30.892 Malloc0 00:18:30.892 18:13:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.892 18:13:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:30.892 18:13:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.892 18:13:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:31.150 18:13:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.150 18:13:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:31.150 18:13:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.150 18:13:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:31.150 18:13:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.150 18:13:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:31.150 18:13:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.150 18:13:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:31.150 [2024-07-24 18:13:23.989838] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:31.150 18:13:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.150 18:13:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:18:31.150 18:13:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:31.150 18:13:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:18:31.150 18:13:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:18:31.150 18:13:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:31.150 18:13:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:31.150 { 00:18:31.150 "params": { 00:18:31.150 "name": "Nvme$subsystem", 00:18:31.150 "trtype": "$TEST_TRANSPORT", 00:18:31.150 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:31.150 "adrfam": "ipv4", 00:18:31.150 "trsvcid": "$NVMF_PORT", 00:18:31.150 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:31.150 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:31.150 "hdgst": ${hdgst:-false}, 00:18:31.150 "ddgst": ${ddgst:-false} 00:18:31.150 }, 00:18:31.150 "method": "bdev_nvme_attach_controller" 00:18:31.150 } 00:18:31.150 EOF 00:18:31.150 )") 00:18:31.150 18:13:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:18:31.150 18:13:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
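[editor's note] gen_nvmf_target_json, traced above, expands a heredoc template once per subsystem (hdgst/ddgst default to false via ${hdgst:-false}), joins the fragments with IFS=',' and normalizes the result through jq; bdevio then reads that JSON over an anonymous /dev/fd path instead of a temp file. A reduced sketch of the invocation via process substitution (the fd number varies by shell; this run happened to get /dev/fd/62):

test/bdev/bdevio/bdevio --json <(gen_nvmf_target_json) --no-huge -s 1024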
00:18:31.150 18:13:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:18:31.150 18:13:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:31.150 "params": { 00:18:31.150 "name": "Nvme1", 00:18:31.150 "trtype": "tcp", 00:18:31.150 "traddr": "10.0.0.2", 00:18:31.150 "adrfam": "ipv4", 00:18:31.150 "trsvcid": "4420", 00:18:31.150 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:31.150 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:31.150 "hdgst": false, 00:18:31.150 "ddgst": false 00:18:31.150 }, 00:18:31.150 "method": "bdev_nvme_attach_controller" 00:18:31.150 }' 00:18:31.150 [2024-07-24 18:13:24.037675] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 00:18:31.150 [2024-07-24 18:13:24.037717] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3426018 ] 00:18:31.150 [2024-07-24 18:13:24.096246] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:31.150 [2024-07-24 18:13:24.181426] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:31.150 [2024-07-24 18:13:24.181525] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:31.150 [2024-07-24 18:13:24.181527] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:31.407 I/O targets: 00:18:31.407 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:31.407 00:18:31.407 00:18:31.407 CUnit - A unit testing framework for C - Version 2.1-3 00:18:31.407 http://cunit.sourceforge.net/ 00:18:31.407 00:18:31.407 00:18:31.407 Suite: bdevio tests on: Nvme1n1 00:18:31.407 Test: blockdev write read block ...passed 00:18:31.407 Test: blockdev write zeroes read block ...passed 00:18:31.665 Test: blockdev write zeroes read no split ...passed 00:18:31.665 Test: blockdev write zeroes read split ...passed 00:18:31.665 Test: blockdev write zeroes read split partial ...passed 00:18:31.665 Test: blockdev reset ...[2024-07-24 18:13:24.559768] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:31.665 [2024-07-24 18:13:24.559830] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x103a300 (9): Bad file descriptor 00:18:31.665 [2024-07-24 18:13:24.658057] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:18:31.665 passed 00:18:31.665 Test: blockdev write read 8 blocks ...passed 00:18:31.665 Test: blockdev write read size > 128k ...passed 00:18:31.665 Test: blockdev write read invalid size ...passed 00:18:31.665 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:31.665 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:31.665 Test: blockdev write read max offset ...passed 00:18:31.922 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:31.922 Test: blockdev writev readv 8 blocks ...passed 00:18:31.922 Test: blockdev writev readv 30 x 1block ...passed 00:18:31.922 Test: blockdev writev readv block ...passed 00:18:31.922 Test: blockdev writev readv size > 128k ...passed 00:18:31.922 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:31.922 Test: blockdev comparev and writev ...[2024-07-24 18:13:24.951430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:31.922 [2024-07-24 18:13:24.951458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:31.922 [2024-07-24 18:13:24.951471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:31.922 [2024-07-24 18:13:24.951483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:31.922 [2024-07-24 18:13:24.951734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:31.922 [2024-07-24 18:13:24.951745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:31.922 [2024-07-24 18:13:24.951756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:31.922 [2024-07-24 18:13:24.951763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:31.922 [2024-07-24 18:13:24.951993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:31.922 [2024-07-24 18:13:24.952002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:31.922 [2024-07-24 18:13:24.952014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:31.922 [2024-07-24 18:13:24.952021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:31.923 [2024-07-24 18:13:24.952255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:31.923 [2024-07-24 18:13:24.952265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:31.923 [2024-07-24 18:13:24.952277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:31.923 [2024-07-24 18:13:24.952284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:31.923 passed 00:18:32.180 Test: blockdev nvme passthru rw ...passed 00:18:32.180 Test: blockdev nvme passthru vendor specific ...[2024-07-24 18:13:25.033837] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:32.180 [2024-07-24 18:13:25.033852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:32.180 [2024-07-24 18:13:25.033966] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:32.180 [2024-07-24 18:13:25.033976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:32.180 [2024-07-24 18:13:25.034086] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:32.180 [2024-07-24 18:13:25.034096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:32.180 [2024-07-24 18:13:25.034208] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:32.180 [2024-07-24 18:13:25.034218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:32.180 passed 00:18:32.180 Test: blockdev nvme admin passthru ...passed 00:18:32.180 Test: blockdev copy ...passed 00:18:32.180 00:18:32.180 Run Summary: Type Total Ran Passed Failed Inactive 00:18:32.180 suites 1 1 n/a 0 0 00:18:32.180 tests 23 23 23 0 0 00:18:32.180 asserts 152 152 152 0 n/a 00:18:32.180 00:18:32.180 Elapsed time = 1.390 seconds 00:18:32.437 18:13:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:32.437 18:13:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.437 18:13:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:32.437 18:13:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.437 18:13:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:32.437 18:13:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:18:32.437 18:13:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:32.437 18:13:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:18:32.437 18:13:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:32.437 18:13:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:18:32.437 18:13:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:32.437 18:13:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:32.437 rmmod nvme_tcp 00:18:32.437 rmmod nvme_fabrics 00:18:32.437 rmmod nvme_keyring 00:18:32.437 18:13:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:32.437 18:13:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@124 -- # set -e 00:18:32.437 18:13:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:18:32.437 18:13:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 3425769 ']' 00:18:32.438 18:13:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 3425769 00:18:32.438 18:13:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 3425769 ']' 00:18:32.438 18:13:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 3425769 00:18:32.438 18:13:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:18:32.438 18:13:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:32.438 18:13:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3425769 00:18:32.438 18:13:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:18:32.438 18:13:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:18:32.438 18:13:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3425769' 00:18:32.438 killing process with pid 3425769 00:18:32.438 18:13:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 3425769 00:18:32.438 18:13:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 3425769 00:18:33.004 18:13:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:33.004 18:13:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:33.004 18:13:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:33.004 18:13:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:33.004 18:13:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:33.004 18:13:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:33.004 18:13:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:33.004 18:13:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:34.908 18:13:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:34.908 00:18:34.908 real 0m10.455s 00:18:34.908 user 0m13.929s 00:18:34.908 sys 0m4.974s 00:18:34.908 18:13:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:34.908 18:13:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:34.908 ************************************ 00:18:34.908 END TEST nvmf_bdevio_no_huge 00:18:34.908 ************************************ 00:18:34.908 18:13:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:34.908 18:13:27 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:34.908 18:13:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:34.908 18:13:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:34.908 ************************************ 00:18:34.908 START TEST nvmf_tls 00:18:34.908 ************************************ 00:18:34.908 18:13:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:34.908 * Looking for test storage... 00:18:35.167 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:35.167 18:13:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:35.167 18:13:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:18:35.167 18:13:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:35.167 18:13:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:35.167 18:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:35.167 18:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:35.167 18:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:35.167 18:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:35.167 18:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:35.167 18:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:35.167 18:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:35.167 18:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:35.167 18:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:18:35.167 18:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:18:35.167 18:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:35.167 18:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:35.167 18:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:35.167 18:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:35.167 18:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:35.167 18:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:35.167 18:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:35.167 18:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:35.167 18:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.167 18:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.167 18:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.167 18:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:18:35.167 18:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.167 18:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:18:35.167 18:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:35.168 18:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:35.168 18:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:35.168 18:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:35.168 18:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:35.168 18:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:35.168 18:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
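The build_nvmf_app_args trace above assembles the target's command line one flag at a time. Reconstructed as a sketch from the values echoed in this log (the nvmf_tgt base binary and the final launch line are taken from the startup traced further below; NVMF_APP_SHM_ID is 0 in this run):

    NVMF_APP=(nvmf_tgt)                          # base binary, assumed from the launch line
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)  # shared-memory id 0, full tracepoint mask
    # after nvmftestinit wraps it in the target namespace, the effective launch is:
    #   ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc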
00:18:35.168 18:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:35.168 18:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:35.168 18:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:18:35.168 18:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:35.168 18:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:35.168 18:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:35.168 18:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:35.168 18:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:35.168 18:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:35.168 18:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:35.168 18:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:35.168 18:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:35.168 18:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:35.168 18:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:18:35.168 18:13:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:40.432 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:40.432 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:18:40.432 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:40.432 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:40.432 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:40.432 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:40.432 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:40.432 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:18:40.432 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:40.432 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:18:40.432 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:18:40.432 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:18:40.432 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:18:40.432 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:18:40.432 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:18:40.432 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:40.432 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:40.432 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:40.432 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:40.432 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:40.432 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:40.432 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:40.432 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:40.432 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:40.432 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:40.432 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:40.432 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:40.432 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:40.432 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:40.432 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:40.432 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:40.432 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:40.432 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:40.432 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:40.432 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:40.432 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:40.432 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:40.432 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:40.432 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:40.432 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:40.432 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:40.432 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:40.432 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:40.432 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:40.432 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:40.432 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:40.432 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:40.432 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:40.432 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:40.432 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # 
[[ e810 == e810 ]] 00:18:40.432 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:40.432 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:40.432 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:40.432 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:40.432 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:40.432 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:40.432 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:40.432 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:40.432 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:40.432 Found net devices under 0000:86:00.0: cvl_0_0 00:18:40.432 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:40.432 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:40.432 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:40.432 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:40.432 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:40.432 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:40.432 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:40.432 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:40.432 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:40.432 Found net devices under 0000:86:00.1: cvl_0_1 00:18:40.432 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:40.432 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:40.432 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:18:40.432 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:40.432 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:40.432 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:40.432 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:40.432 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:40.432 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:40.432 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:40.432 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:40.432 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:40.432 18:13:33 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:40.432 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:40.432 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:40.432 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:40.432 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:40.432 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:40.432 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:40.432 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:40.432 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:40.432 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:40.432 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:40.432 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:40.432 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:40.432 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:40.432 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:40.432 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.152 ms 00:18:40.432 00:18:40.432 --- 10.0.0.2 ping statistics --- 00:18:40.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:40.432 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:18:40.432 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:40.432 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:40.432 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:18:40.432 00:18:40.432 --- 10.0.0.1 ping statistics --- 00:18:40.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:40.432 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:18:40.432 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:40.432 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:18:40.432 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:40.432 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:40.432 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:40.432 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:40.432 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:40.433 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:40.433 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:40.691 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:18:40.691 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:40.691 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:40.691 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:40.691 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3429749 00:18:40.691 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3429749 00:18:40.691 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:18:40.691 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3429749 ']' 00:18:40.691 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:40.691 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:40.691 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:40.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:40.691 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:40.691 18:13:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:40.691 [2024-07-24 18:13:33.586615] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 
00:18:40.691 [2024-07-24 18:13:33.586659] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:40.691 EAL: No free 2048 kB hugepages reported on node 1 00:18:40.691 [2024-07-24 18:13:33.644436] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:40.691 [2024-07-24 18:13:33.715984] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:40.691 [2024-07-24 18:13:33.716024] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:40.691 [2024-07-24 18:13:33.716031] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:40.691 [2024-07-24 18:13:33.716036] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:40.691 [2024-07-24 18:13:33.716041] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:40.691 [2024-07-24 18:13:33.716060] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:41.624 18:13:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:41.624 18:13:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:41.624 18:13:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:41.625 18:13:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:41.625 18:13:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:41.625 18:13:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:41.625 18:13:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:18:41.625 18:13:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:18:41.625 true 00:18:41.625 18:13:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:41.625 18:13:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:18:41.882 18:13:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # version=0 00:18:41.882 18:13:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:18:41.882 18:13:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:41.882 18:13:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:41.882 18:13:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:18:42.140 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # version=13 00:18:42.140 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:18:42.141 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 
7 00:18:42.400 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:42.400 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:18:42.400 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # version=7 00:18:42.400 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:18:42.400 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:42.400 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:18:42.659 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:18:42.659 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:18:42.659 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:18:42.917 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:42.917 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:18:42.917 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:18:42.917 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:18:42.917 18:13:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:18:43.175 18:13:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:43.175 18:13:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:18:43.175 18:13:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:18:43.175 18:13:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:18:43.175 18:13:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:18:43.175 18:13:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:18:43.175 18:13:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:18:43.175 18:13:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:18:43.175 18:13:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:18:43.175 18:13:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:18:43.175 18:13:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:18:43.434 18:13:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:43.434 18:13:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:18:43.434 18:13:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 
1 00:18:43.434 18:13:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:18:43.434 18:13:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:18:43.434 18:13:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:18:43.434 18:13:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:18:43.434 18:13:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:18:43.434 18:13:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:43.434 18:13:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:18:43.434 18:13:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.38E7TpkiFh 00:18:43.434 18:13:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:18:43.434 18:13:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.5XRBNSiM7K 00:18:43.434 18:13:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:43.434 18:13:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:43.434 18:13:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.38E7TpkiFh 00:18:43.434 18:13:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.5XRBNSiM7K 00:18:43.434 18:13:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:43.434 18:13:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:18:43.691 18:13:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.38E7TpkiFh 00:18:43.691 18:13:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.38E7TpkiFh 00:18:43.691 18:13:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:43.949 [2024-07-24 18:13:36.888581] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:43.949 18:13:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:44.207 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:44.207 [2024-07-24 18:13:37.209402] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:44.207 [2024-07-24 18:13:37.209609] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:44.207 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:44.465 malloc0 00:18:44.465 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:44.723 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.38E7TpkiFh 00:18:44.723 [2024-07-24 18:13:37.698811] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:44.723 18:13:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.38E7TpkiFh 00:18:44.723 EAL: No free 2048 kB hugepages reported on node 1 00:18:56.932 Initializing NVMe Controllers 00:18:56.932 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:56.932 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:56.932 Initialization complete. Launching workers. 00:18:56.932 ======================================================== 00:18:56.932 Latency(us) 00:18:56.932 Device Information : IOPS MiB/s Average min max 00:18:56.932 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16951.96 66.22 3775.75 697.49 5760.62 00:18:56.932 ======================================================== 00:18:56.932 Total : 16951.96 66.22 3775.75 697.49 5760.62 00:18:56.932 00:18:56.932 18:13:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.38E7TpkiFh 00:18:56.932 18:13:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:56.932 18:13:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:56.932 18:13:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:56.932 18:13:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.38E7TpkiFh' 00:18:56.932 18:13:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:56.932 18:13:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3432113 00:18:56.932 18:13:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:56.932 18:13:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:56.932 18:13:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3432113 /var/tmp/bdevperf.sock 00:18:56.932 18:13:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3432113 ']' 00:18:56.932 18:13:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:56.932 18:13:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:56.932 18:13:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:56.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:56.932 18:13:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:56.932 18:13:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:56.932 [2024-07-24 18:13:47.858771] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 00:18:56.932 [2024-07-24 18:13:47.858821] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3432113 ] 00:18:56.932 EAL: No free 2048 kB hugepages reported on node 1 00:18:56.932 [2024-07-24 18:13:47.907691] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:56.932 [2024-07-24 18:13:47.986501] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:56.932 18:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:56.932 18:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:56.932 18:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.38E7TpkiFh 00:18:56.932 [2024-07-24 18:13:48.819757] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:56.932 [2024-07-24 18:13:48.819823] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:56.932 TLSTESTn1 00:18:56.932 18:13:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:56.932 Running I/O for 10 seconds... 
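bdevperf is driven in its RPC mode here: -z starts the app idle, waiting for configuration over /var/tmp/bdevperf.sock, the TLS controller is then attached by RPC, and perform_tests kicks off the 10-second verify workload whose results follow. Condensed from the commands traced in this log (repository paths shortened for readability):

    ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.38E7TpkiFh
    ./examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests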
00:19:06.962 00:19:06.962 Latency(us) 00:19:06.962 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:06.962 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:06.962 Verification LBA range: start 0x0 length 0x2000 00:19:06.962 TLSTESTn1 : 10.01 5732.29 22.39 0.00 0.00 22295.09 4837.18 23468.13 00:19:06.962 =================================================================================================================== 00:19:06.962 Total : 5732.29 22.39 0.00 0.00 22295.09 4837.18 23468.13 00:19:06.962 0 00:19:06.962 18:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:06.962 18:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 3432113 00:19:06.962 18:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3432113 ']' 00:19:06.962 18:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3432113 00:19:06.962 18:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:06.962 18:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:06.962 18:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3432113 00:19:06.962 18:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:06.962 18:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:06.962 18:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3432113' 00:19:06.962 killing process with pid 3432113 00:19:06.962 18:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3432113 00:19:06.962 Received shutdown signal, test time was about 10.000000 seconds 00:19:06.962 00:19:06.962 Latency(us) 00:19:06.962 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:06.962 =================================================================================================================== 00:19:06.962 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:06.962 [2024-07-24 18:13:59.083192] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:06.962 18:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3432113 00:19:06.962 18:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.5XRBNSiM7K 00:19:06.962 18:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:06.962 18:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.5XRBNSiM7K 00:19:06.962 18:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:06.962 18:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:06.962 18:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:06.962 18:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
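The run that just completed authenticated with /tmp/tmp.38E7TpkiFh; the NOT block being traced here repeats the flow with /tmp/tmp.5XRBNSiM7K and passes only if the attach fails. Both files hold interchange keys minted by format_interchange_psk earlier in this log; a sketch of that derivation, assuming the standard interchange layout of base64 over the configured key bytes followed by their CRC-32 (the little-endian CRC and the fixed :01: digest tag are my reading of the helper, not confirmed from its source):

    python - <<'EOF'
    import base64, struct, zlib
    key = b"00112233445566778899aabbccddeeff"   # first configured key, as ASCII bytes
    crc = zlib.crc32(key) & 0xffffffff          # integrity check appended to the key
    print("NVMeTLSkey-1:01:%s:"
          % base64.b64encode(key + struct.pack("<I", crc)).decode())
    EOF

Under those assumptions this reproduces the NVMeTLSkey-1:01:MDAx... value printed earlier in the log.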
00:19:06.962 18:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.5XRBNSiM7K 00:19:06.962 18:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:06.962 18:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:06.962 18:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:06.962 18:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.5XRBNSiM7K' 00:19:06.962 18:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:06.962 18:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3433954 00:19:06.962 18:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:06.962 18:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:06.962 18:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3433954 /var/tmp/bdevperf.sock 00:19:06.962 18:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3433954 ']' 00:19:06.962 18:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:06.962 18:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:06.962 18:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:06.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:06.963 18:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:06.963 18:13:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:06.963 [2024-07-24 18:13:59.310471] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 
00:19:06.963 [2024-07-24 18:13:59.310529] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3433954 ] 00:19:06.963 EAL: No free 2048 kB hugepages reported on node 1 00:19:06.963 [2024-07-24 18:13:59.359454] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:06.963 [2024-07-24 18:13:59.438482] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:07.221 18:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:07.221 18:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:07.221 18:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.5XRBNSiM7K 00:19:07.221 [2024-07-24 18:14:00.287879] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:07.221 [2024-07-24 18:14:00.287945] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:07.221 [2024-07-24 18:14:00.295760] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:07.221 [2024-07-24 18:14:00.296088] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8a570 (107): Transport endpoint is not connected 00:19:07.221 [2024-07-24 18:14:00.297082] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd8a570 (9): Bad file descriptor 00:19:07.221 [2024-07-24 18:14:00.298083] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:07.221 [2024-07-24 18:14:00.298093] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:07.221 [2024-07-24 18:14:00.298101] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
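Only the first key was ever registered on the target for this host, so the TLS handshake for the attach traced below cannot complete; the initiator just sees the connection drop, which surfaces as the "Transport endpoint is not connected" and "Input/output error" lines that follow, and NOT turns that expected failure into a pass. The mismatch, with the registration command repeated from earlier in this log:

    # what the target knows for host1:
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.38E7TpkiFh
    # the bdev_nvme_attach_controller call below instead presents /tmp/tmp.5XRBNSiM7K,
    # so the handshake is rejected and the RPC returns an error.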
00:19:07.221 request: 00:19:07.221 { 00:19:07.221 "name": "TLSTEST", 00:19:07.221 "trtype": "tcp", 00:19:07.221 "traddr": "10.0.0.2", 00:19:07.221 "adrfam": "ipv4", 00:19:07.221 "trsvcid": "4420", 00:19:07.221 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:07.221 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:07.221 "prchk_reftag": false, 00:19:07.221 "prchk_guard": false, 00:19:07.221 "hdgst": false, 00:19:07.221 "ddgst": false, 00:19:07.221 "psk": "/tmp/tmp.5XRBNSiM7K", 00:19:07.221 "method": "bdev_nvme_attach_controller", 00:19:07.221 "req_id": 1 00:19:07.221 } 00:19:07.221 Got JSON-RPC error response 00:19:07.221 response: 00:19:07.221 { 00:19:07.221 "code": -5, 00:19:07.221 "message": "Input/output error" 00:19:07.221 } 00:19:07.480 18:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 3433954 00:19:07.480 18:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3433954 ']' 00:19:07.480 18:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3433954 00:19:07.480 18:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:07.480 18:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:07.480 18:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3433954 00:19:07.480 18:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:07.480 18:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:07.480 18:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3433954' 00:19:07.480 killing process with pid 3433954 00:19:07.480 18:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3433954 00:19:07.480 Received shutdown signal, test time was about 10.000000 seconds 00:19:07.480 00:19:07.480 Latency(us) 00:19:07.480 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:07.480 =================================================================================================================== 00:19:07.480 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:07.480 [2024-07-24 18:14:00.359044] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:07.480 18:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3433954 00:19:07.480 18:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:07.480 18:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:07.480 18:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:07.480 18:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:07.480 18:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:07.480 18:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.38E7TpkiFh 00:19:07.480 18:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:07.480 18:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg 
run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.38E7TpkiFh 00:19:07.480 18:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:07.480 18:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:07.480 18:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:07.480 18:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:07.480 18:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.38E7TpkiFh 00:19:07.480 18:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:07.480 18:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:07.480 18:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:19:07.480 18:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.38E7TpkiFh' 00:19:07.480 18:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:07.480 18:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3434195 00:19:07.480 18:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:07.480 18:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:07.480 18:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3434195 /var/tmp/bdevperf.sock 00:19:07.480 18:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3434195 ']' 00:19:07.480 18:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:07.480 18:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:07.480 18:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:07.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:07.480 18:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:07.480 18:14:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:07.739 [2024-07-24 18:14:00.579865] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 
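
rpc.py is a thin JSON-RPC client over the same Unix socket, and the request/response pair dumped above is the wire payload. A hand-rolled sketch that would replay the failing attach, with params copied from that dump (the read-until-it-parses framing is a simplification, not SPDK's actual parser):

    import json, socket

    req = {
        "jsonrpc": "2.0", "id": 1,
        "method": "bdev_nvme_attach_controller",
        "params": {                  # verbatim from the request dump above
            "name": "TLSTEST", "trtype": "tcp", "traddr": "10.0.0.2",
            "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "psk": "/tmp/tmp.5XRBNSiM7K",
        },
    }
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.connect("/var/tmp/bdevperf.sock")
    s.sendall(json.dumps(req).encode())
    buf = b""
    while True:
        chunk = s.recv(4096)
        if not chunk:
            break                    # peer closed before a full reply
        buf += chunk
        try:
            print(json.loads(buf))   # the error member carries code -5 here
            break
        except ValueError:
            continue                 # reply object not complete yet
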
00:19:07.739 [2024-07-24 18:14:00.579913] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3434195 ] 00:19:07.739 EAL: No free 2048 kB hugepages reported on node 1 00:19:07.739 [2024-07-24 18:14:00.628938] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:07.739 [2024-07-24 18:14:00.707414] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:08.308 18:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:08.308 18:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:08.308 18:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.38E7TpkiFh 00:19:08.566 [2024-07-24 18:14:01.544770] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:08.566 [2024-07-24 18:14:01.544836] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:08.566 [2024-07-24 18:14:01.549223] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:08.566 [2024-07-24 18:14:01.549247] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:08.566 [2024-07-24 18:14:01.549273] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:08.566 [2024-07-24 18:14:01.549907] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf5f570 (107): Transport endpoint is not connected 00:19:08.566 [2024-07-24 18:14:01.550898] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf5f570 (9): Bad file descriptor 00:19:08.566 [2024-07-24 18:14:01.551899] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:08.566 [2024-07-24 18:14:01.551912] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:08.566 [2024-07-24 18:14:01.551920] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
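
The string the target could not match, "NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1", is the NVMe/TCP TLS PSK identity; as we read the spec it is "NVMe", an identity version digit, "R" for a retained PSK, a two-digit hash indicator, then the host and subsystem NQNs. No key is registered for host2 against cnode1, so the lookup fails in both the tcp and posix layers. A sketch under that reading of the fields:

    def psk_identity(hostnqn: str, subnqn: str, hash_id: str = "01") -> str:
        # "NVMe" + version '0' + 'R' (retained PSK) + hash indicator
        # ("01" = SHA-256, "02" = SHA-384), then the NQNs, space-separated.
        return f"NVMe0R{hash_id} {hostnqn} {subnqn}"

    # Rebuilds the identity rejected above:
    print(psk_identity("nqn.2016-06.io.spdk:host2", "nqn.2016-06.io.spdk:cnode1"))
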
00:19:08.566 request: 00:19:08.566 { 00:19:08.566 "name": "TLSTEST", 00:19:08.566 "trtype": "tcp", 00:19:08.566 "traddr": "10.0.0.2", 00:19:08.566 "adrfam": "ipv4", 00:19:08.566 "trsvcid": "4420", 00:19:08.566 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:08.566 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:08.566 "prchk_reftag": false, 00:19:08.566 "prchk_guard": false, 00:19:08.566 "hdgst": false, 00:19:08.566 "ddgst": false, 00:19:08.566 "psk": "/tmp/tmp.38E7TpkiFh", 00:19:08.566 "method": "bdev_nvme_attach_controller", 00:19:08.566 "req_id": 1 00:19:08.566 } 00:19:08.566 Got JSON-RPC error response 00:19:08.566 response: 00:19:08.566 { 00:19:08.566 "code": -5, 00:19:08.566 "message": "Input/output error" 00:19:08.566 } 00:19:08.566 18:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 3434195 00:19:08.566 18:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3434195 ']' 00:19:08.566 18:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3434195 00:19:08.566 18:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:08.566 18:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:08.566 18:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3434195 00:19:08.566 18:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:08.566 18:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:08.567 18:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3434195' 00:19:08.567 killing process with pid 3434195 00:19:08.567 18:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3434195 00:19:08.567 Received shutdown signal, test time was about 10.000000 seconds 00:19:08.567 00:19:08.567 Latency(us) 00:19:08.567 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:08.567 =================================================================================================================== 00:19:08.567 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:08.567 [2024-07-24 18:14:01.625596] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:08.567 18:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3434195 00:19:08.826 18:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:08.826 18:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:08.826 18:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:08.826 18:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:08.826 18:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:08.826 18:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.38E7TpkiFh 00:19:08.826 18:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:08.826 18:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg 
run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.38E7TpkiFh 00:19:08.826 18:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:08.826 18:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:08.826 18:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:08.826 18:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:08.826 18:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.38E7TpkiFh 00:19:08.826 18:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:08.826 18:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:19:08.826 18:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:08.826 18:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.38E7TpkiFh' 00:19:08.826 18:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:08.826 18:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3434415 00:19:08.826 18:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:08.826 18:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:08.826 18:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3434415 /var/tmp/bdevperf.sock 00:19:08.826 18:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3434415 ']' 00:19:08.826 18:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:08.826 18:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:08.826 18:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:08.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:08.826 18:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:08.826 18:14:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:08.826 [2024-07-24 18:14:01.847892] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 
00:19:08.826 [2024-07-24 18:14:01.847939] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3434415 ] 00:19:08.826 EAL: No free 2048 kB hugepages reported on node 1 00:19:08.826 [2024-07-24 18:14:01.897392] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:09.085 [2024-07-24 18:14:01.978076] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:09.653 18:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:09.653 18:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:09.653 18:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.38E7TpkiFh 00:19:09.913 [2024-07-24 18:14:02.788535] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:09.913 [2024-07-24 18:14:02.788603] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:09.913 [2024-07-24 18:14:02.793086] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:09.913 [2024-07-24 18:14:02.793109] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:09.913 [2024-07-24 18:14:02.793134] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:09.913 [2024-07-24 18:14:02.793793] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9f4570 (107): Transport endpoint is not connected 00:19:09.913 [2024-07-24 18:14:02.794784] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9f4570 (9): Bad file descriptor 00:19:09.913 [2024-07-24 18:14:02.795785] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:19:09.913 [2024-07-24 18:14:02.795795] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:09.913 [2024-07-24 18:14:02.795803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:19:09.913 request: 00:19:09.913 { 00:19:09.913 "name": "TLSTEST", 00:19:09.913 "trtype": "tcp", 00:19:09.913 "traddr": "10.0.0.2", 00:19:09.913 "adrfam": "ipv4", 00:19:09.913 "trsvcid": "4420", 00:19:09.913 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:09.913 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:09.913 "prchk_reftag": false, 00:19:09.913 "prchk_guard": false, 00:19:09.913 "hdgst": false, 00:19:09.913 "ddgst": false, 00:19:09.913 "psk": "/tmp/tmp.38E7TpkiFh", 00:19:09.913 "method": "bdev_nvme_attach_controller", 00:19:09.913 "req_id": 1 00:19:09.913 } 00:19:09.913 Got JSON-RPC error response 00:19:09.913 response: 00:19:09.913 { 00:19:09.913 "code": -5, 00:19:09.913 "message": "Input/output error" 00:19:09.913 } 00:19:09.913 18:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 3434415 00:19:09.913 18:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3434415 ']' 00:19:09.913 18:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3434415 00:19:09.913 18:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:09.913 18:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:09.913 18:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3434415 00:19:09.913 18:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:09.913 18:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:09.913 18:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3434415' 00:19:09.913 killing process with pid 3434415 00:19:09.913 18:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3434415 00:19:09.913 Received shutdown signal, test time was about 10.000000 seconds 00:19:09.913 00:19:09.913 Latency(us) 00:19:09.913 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:09.913 =================================================================================================================== 00:19:09.913 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:09.913 [2024-07-24 18:14:02.854509] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:09.913 18:14:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3434415 00:19:10.172 18:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:10.172 18:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:10.172 18:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:10.172 18:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:10.172 18:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:10.172 18:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:10.172 18:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:10.172 18:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf 
nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:10.172 18:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:10.172 18:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:10.172 18:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:10.172 18:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:10.172 18:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:10.172 18:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:10.172 18:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:10.172 18:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:10.172 18:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:19:10.172 18:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:10.172 18:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3434549 00:19:10.172 18:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:10.172 18:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:10.172 18:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3434549 /var/tmp/bdevperf.sock 00:19:10.172 18:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3434549 ']' 00:19:10.172 18:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:10.172 18:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:10.172 18:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:10.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:10.172 18:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:10.172 18:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:10.172 [2024-07-24 18:14:03.075237] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 
00:19:10.172 [2024-07-24 18:14:03.075286] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3434549 ] 00:19:10.172 EAL: No free 2048 kB hugepages reported on node 1 00:19:10.173 [2024-07-24 18:14:03.124508] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:10.173 [2024-07-24 18:14:03.205208] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:11.117 18:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:11.117 18:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:11.117 18:14:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:11.117 [2024-07-24 18:14:04.035198] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:11.117 [2024-07-24 18:14:04.037465] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23faaf0 (9): Bad file descriptor 00:19:11.117 [2024-07-24 18:14:04.038464] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:11.117 [2024-07-24 18:14:04.038473] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:11.117 [2024-07-24 18:14:04.038481] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
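
All four negative attaches in this run send the same bdev_nvme_attach_controller params apart from the two NQNs and the PSK; this last one drops the psk field entirely (see the dump just below), so the initiator attempts a cleartext connect against the TLS-only listener and dies the same way. A purely illustrative helper showing the variation:

    def attach_params(subnqn, hostnqn, psk=None):
        # Everything except the NQNs and the optional PSK path is fixed
        # across the four request dumps in this log.
        p = {"name": "TLSTEST", "trtype": "tcp", "traddr": "10.0.0.2",
             "adrfam": "ipv4", "trsvcid": "4420",
             "subnqn": subnqn, "hostnqn": hostnqn,
             "prchk_reftag": False, "prchk_guard": False,
             "hdgst": False, "ddgst": False}
        if psk is not None:
            p["psk"] = psk           # omitted entirely in the no-key dump below
        return p
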
00:19:11.117 request: 00:19:11.117 { 00:19:11.117 "name": "TLSTEST", 00:19:11.117 "trtype": "tcp", 00:19:11.117 "traddr": "10.0.0.2", 00:19:11.117 "adrfam": "ipv4", 00:19:11.117 "trsvcid": "4420", 00:19:11.117 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:11.117 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:11.117 "prchk_reftag": false, 00:19:11.117 "prchk_guard": false, 00:19:11.117 "hdgst": false, 00:19:11.117 "ddgst": false, 00:19:11.117 "method": "bdev_nvme_attach_controller", 00:19:11.117 "req_id": 1 00:19:11.117 } 00:19:11.117 Got JSON-RPC error response 00:19:11.117 response: 00:19:11.117 { 00:19:11.117 "code": -5, 00:19:11.117 "message": "Input/output error" 00:19:11.117 } 00:19:11.117 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 3434549 00:19:11.117 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3434549 ']' 00:19:11.117 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3434549 00:19:11.117 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:11.117 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:11.117 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3434549 00:19:11.117 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:11.117 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:11.117 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3434549' 00:19:11.117 killing process with pid 3434549 00:19:11.117 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3434549 00:19:11.117 Received shutdown signal, test time was about 10.000000 seconds 00:19:11.117 00:19:11.117 Latency(us) 00:19:11.117 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:11.117 =================================================================================================================== 00:19:11.117 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:11.117 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3434549 00:19:11.376 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:11.376 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:11.376 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:11.376 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:11.376 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:11.376 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@158 -- # killprocess 3429749 00:19:11.376 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3429749 ']' 00:19:11.376 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3429749 00:19:11.376 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:11.376 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:11.376 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3429749 00:19:11.376 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:11.376 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:11.376 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3429749' 00:19:11.376 killing process with pid 3429749 00:19:11.376 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3429749 00:19:11.376 [2024-07-24 18:14:04.328986] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:11.376 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3429749 00:19:11.635 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:19:11.635 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:19:11.635 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:19:11.635 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:19:11.636 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:19:11.636 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:19:11.636 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:19:11.636 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:11.636 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:19:11.636 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.YA65rcPQ6V 00:19:11.636 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:11.636 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.YA65rcPQ6V 00:19:11.636 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:19:11.636 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:11.636 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:11.636 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:11.636 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3434877 00:19:11.636 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3434877 00:19:11.636 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:11.636 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3434877 ']' 00:19:11.636 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:11.636 18:14:04 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:11.636 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:11.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:11.636 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:11.636 18:14:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:11.636 [2024-07-24 18:14:04.624010] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 00:19:11.636 [2024-07-24 18:14:04.624055] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:11.636 EAL: No free 2048 kB hugepages reported on node 1 00:19:11.636 [2024-07-24 18:14:04.680544] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:11.895 [2024-07-24 18:14:04.757572] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:11.895 [2024-07-24 18:14:04.757608] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:11.895 [2024-07-24 18:14:04.757616] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:11.895 [2024-07-24 18:14:04.757622] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:11.895 [2024-07-24 18:14:04.757627] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
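
format_interchange_psk above (via format_key's inline python) wraps the configured key in the TLS PSK interchange format: the NVMeTLSkey-1 prefix, a two-digit hash field (02 = SHA-384), and base64 of the key bytes with a four-byte CRC-32 appended; decoding the base64 in key_long gives back the literal ASCII key plus the checksum bytes. A stdlib sketch; the little-endian CRC packing is our assumption:

    import base64, struct, zlib

    def format_interchange_psk(key: str, digest: int) -> str:
        # base64(key bytes + CRC-32 of the key), in the interchange envelope.
        raw = key.encode("ascii")
        crc = struct.pack("<I", zlib.crc32(raw))   # byte order assumed
        return f"NVMeTLSkey-1:{digest:02}:{base64.b64encode(raw + crc).decode()}:"

    print(format_interchange_psk(
        "00112233445566778899aabbccddeeff0011223344556677", 2))
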
00:19:11.895 [2024-07-24 18:14:04.757645] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:12.462 18:14:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:12.462 18:14:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:12.462 18:14:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:12.462 18:14:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:12.462 18:14:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:12.462 18:14:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:12.462 18:14:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.YA65rcPQ6V 00:19:12.462 18:14:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.YA65rcPQ6V 00:19:12.462 18:14:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:12.721 [2024-07-24 18:14:05.607039] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:12.721 18:14:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:12.721 18:14:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:12.980 [2024-07-24 18:14:05.947930] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:12.980 [2024-07-24 18:14:05.948112] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:12.980 18:14:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:13.239 malloc0 00:19:13.239 18:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:13.239 18:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.YA65rcPQ6V 00:19:13.498 [2024-07-24 18:14:06.457122] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:13.498 18:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.YA65rcPQ6V 00:19:13.498 18:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:13.498 18:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:13.498 18:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:13.498 18:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.YA65rcPQ6V' 00:19:13.498 18:14:06 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:13.498 18:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:13.498 18:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3435178 00:19:13.498 18:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:13.498 18:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3435178 /var/tmp/bdevperf.sock 00:19:13.498 18:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3435178 ']' 00:19:13.498 18:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:13.498 18:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:13.498 18:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:13.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:13.498 18:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:13.498 18:14:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:13.498 [2024-07-24 18:14:06.509508] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 00:19:13.498 [2024-07-24 18:14:06.509554] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3435178 ] 00:19:13.498 EAL: No free 2048 kB hugepages reported on node 1 00:19:13.498 [2024-07-24 18:14:06.558574] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:13.757 [2024-07-24 18:14:06.633750] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:14.325 18:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:14.325 18:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:14.325 18:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.YA65rcPQ6V 00:19:14.584 [2024-07-24 18:14:07.483351] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:14.584 [2024-07-24 18:14:07.483423] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:14.584 TLSTESTn1 00:19:14.584 18:14:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:14.584 Running I/O for 10 seconds... 
00:19:26.789 00:19:26.789 Latency(us) 00:19:26.789 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:26.789 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:26.789 Verification LBA range: start 0x0 length 0x2000 00:19:26.789 TLSTESTn1 : 10.03 5277.47 20.62 0.00 0.00 24215.64 6553.60 54675.75 00:19:26.789 =================================================================================================================== 00:19:26.789 Total : 5277.47 20.62 0.00 0.00 24215.64 6553.60 54675.75 00:19:26.789 0 00:19:26.789 18:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:26.789 18:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 3435178 00:19:26.789 18:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3435178 ']' 00:19:26.789 18:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3435178 00:19:26.789 18:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:26.789 18:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:26.789 18:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3435178 00:19:26.789 18:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:26.789 18:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:26.789 18:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3435178' 00:19:26.789 killing process with pid 3435178 00:19:26.789 18:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3435178 00:19:26.789 Received shutdown signal, test time was about 10.000000 seconds 00:19:26.789 00:19:26.789 Latency(us) 00:19:26.789 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:26.789 =================================================================================================================== 00:19:26.789 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:26.789 [2024-07-24 18:14:17.773771] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:26.789 18:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3435178 00:19:26.789 18:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.YA65rcPQ6V 00:19:26.789 18:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.YA65rcPQ6V 00:19:26.789 18:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:26.789 18:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.YA65rcPQ6V 00:19:26.789 18:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:26.789 18:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:26.789 18:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:26.789 
18:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:26.789 18:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.YA65rcPQ6V 00:19:26.789 18:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:26.790 18:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:26.790 18:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:26.790 18:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.YA65rcPQ6V' 00:19:26.790 18:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:26.790 18:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3437023 00:19:26.790 18:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:26.790 18:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:26.790 18:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3437023 /var/tmp/bdevperf.sock 00:19:26.790 18:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3437023 ']' 00:19:26.790 18:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:26.790 18:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:26.790 18:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:26.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:26.790 18:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:26.790 18:14:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:26.790 [2024-07-24 18:14:18.008537] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 
00:19:26.790 [2024-07-24 18:14:18.008582] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3437023 ] 00:19:26.790 EAL: No free 2048 kB hugepages reported on node 1 00:19:26.790 [2024-07-24 18:14:18.056691] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:26.790 [2024-07-24 18:14:18.124496] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:26.790 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:26.790 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:26.790 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.YA65rcPQ6V 00:19:26.790 [2024-07-24 18:14:18.954755] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:26.790 [2024-07-24 18:14:18.954805] bdev_nvme.c:6153:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:19:26.790 [2024-07-24 18:14:18.954812] bdev_nvme.c:6258:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.YA65rcPQ6V 00:19:26.790 request: 00:19:26.790 { 00:19:26.790 "name": "TLSTEST", 00:19:26.790 "trtype": "tcp", 00:19:26.790 "traddr": "10.0.0.2", 00:19:26.790 "adrfam": "ipv4", 00:19:26.790 "trsvcid": "4420", 00:19:26.790 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:26.790 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:26.790 "prchk_reftag": false, 00:19:26.790 "prchk_guard": false, 00:19:26.790 "hdgst": false, 00:19:26.790 "ddgst": false, 00:19:26.790 "psk": "/tmp/tmp.YA65rcPQ6V", 00:19:26.790 "method": "bdev_nvme_attach_controller", 00:19:26.790 "req_id": 1 00:19:26.790 } 00:19:26.790 Got JSON-RPC error response 00:19:26.790 response: 00:19:26.790 { 00:19:26.790 "code": -1, 00:19:26.790 "message": "Operation not permitted" 00:19:26.790 } 00:19:26.790 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 3437023 00:19:26.790 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3437023 ']' 00:19:26.790 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3437023 00:19:26.790 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:26.790 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:26.790 18:14:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3437023 00:19:26.790 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:26.790 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:26.790 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3437023' 00:19:26.790 killing process with pid 3437023 00:19:26.790 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3437023 00:19:26.790 Received shutdown signal, test time was about 10.000000 seconds 00:19:26.790 
00:19:26.790 Latency(us) 00:19:26.790 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:26.790 =================================================================================================================== 00:19:26.790 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:26.790 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3437023 00:19:26.790 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:26.790 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:26.790 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:26.790 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:26.790 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:26.790 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@174 -- # killprocess 3434877 00:19:26.790 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3434877 ']' 00:19:26.790 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3434877 00:19:26.790 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:26.790 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:26.790 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3434877 00:19:26.790 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:26.790 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:26.790 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3434877' 00:19:26.790 killing process with pid 3434877 00:19:26.790 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3434877 00:19:26.790 [2024-07-24 18:14:19.236566] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:26.790 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3434877 00:19:26.790 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:19:26.790 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:26.790 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:26.790 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:26.790 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3437263 00:19:26.790 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:26.790 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3437263 00:19:26.790 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3437263 ']' 00:19:26.790 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:26.790 18:14:19 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:26.790 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:26.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:26.790 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:26.790 18:14:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:26.790 [2024-07-24 18:14:19.486305] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 00:19:26.790 [2024-07-24 18:14:19.486350] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:26.790 EAL: No free 2048 kB hugepages reported on node 1 00:19:26.790 [2024-07-24 18:14:19.541785] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:26.790 [2024-07-24 18:14:19.618242] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:26.790 [2024-07-24 18:14:19.618280] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:26.790 [2024-07-24 18:14:19.618287] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:26.790 [2024-07-24 18:14:19.618292] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:26.790 [2024-07-24 18:14:19.618297] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
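
Both ends enforce key-file permissions. The chmod 0666 earlier made the initiator fail with "Incorrect permissions for PSK file" (-1, Operation not permitted), and the nvmf_subsystem_add_host below trips the same check on the target, surfacing as -32603 Internal error; the file is chmod'ed back to 0600 before the target restarts. A check equivalent in spirit (0600 passes, 0666 does not):

    import os, stat

    def psk_file_ok(path: str) -> bool:
        # Reject keys that group or others can read or write.
        st = os.stat(path)
        return stat.S_ISREG(st.st_mode) and (st.st_mode & 0o077) == 0
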
00:19:26.790 [2024-07-24 18:14:19.618315] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:27.358 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:27.358 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:27.358 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:27.358 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:27.358 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:27.358 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:27.358 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.YA65rcPQ6V 00:19:27.358 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:27.358 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.YA65rcPQ6V 00:19:27.358 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:19:27.358 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:27.358 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:19:27.358 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:27.358 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.YA65rcPQ6V 00:19:27.358 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.YA65rcPQ6V 00:19:27.358 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:27.617 [2024-07-24 18:14:20.468999] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:27.617 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:27.617 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:27.876 [2024-07-24 18:14:20.797833] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:27.876 [2024-07-24 18:14:20.798014] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:27.876 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:28.134 malloc0 00:19:28.134 18:14:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:28.134 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 
nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.YA65rcPQ6V 00:19:28.393 [2024-07-24 18:14:21.283124] tcp.c:3635:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:19:28.393 [2024-07-24 18:14:21.283148] tcp.c:3721:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:19:28.393 [2024-07-24 18:14:21.283168] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:19:28.393 request: 00:19:28.393 { 00:19:28.393 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:28.393 "host": "nqn.2016-06.io.spdk:host1", 00:19:28.393 "psk": "/tmp/tmp.YA65rcPQ6V", 00:19:28.393 "method": "nvmf_subsystem_add_host", 00:19:28.393 "req_id": 1 00:19:28.393 } 00:19:28.393 Got JSON-RPC error response 00:19:28.393 response: 00:19:28.393 { 00:19:28.393 "code": -32603, 00:19:28.393 "message": "Internal error" 00:19:28.393 } 00:19:28.393 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:28.394 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:28.394 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:28.394 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:28.394 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@180 -- # killprocess 3437263 00:19:28.394 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3437263 ']' 00:19:28.394 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3437263 00:19:28.394 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:28.394 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:28.394 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3437263 00:19:28.394 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:28.394 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:28.394 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3437263' 00:19:28.394 killing process with pid 3437263 00:19:28.394 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3437263 00:19:28.394 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3437263 00:19:28.653 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.YA65rcPQ6V 00:19:28.653 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:19:28.653 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:28.653 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:28.653 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:28.653 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3437702 00:19:28.653 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3437702 00:19:28.653 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 
0xFFFF -m 0x2 00:19:28.653 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3437702 ']' 00:19:28.653 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:28.653 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:28.653 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:28.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:28.653 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:28.653 18:14:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:28.653 [2024-07-24 18:14:21.603400] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 00:19:28.653 [2024-07-24 18:14:21.603448] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:28.653 EAL: No free 2048 kB hugepages reported on node 1 00:19:28.653 [2024-07-24 18:14:21.661284] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:28.912 [2024-07-24 18:14:21.739382] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:28.912 [2024-07-24 18:14:21.739415] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:28.912 [2024-07-24 18:14:21.739422] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:28.912 [2024-07-24 18:14:21.739428] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:28.912 [2024-07-24 18:14:21.739432] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
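The JSON-RPC failure a few lines up is deliberate: the PSK file was left with loose permissions, tcp.c:3635 rejects it with "Incorrect permissions for PSK file", and nvmf_subsystem_add_host surfaces that as error -32603. target/tls.sh@181 then applies chmod 0600 and restarts the target (pid 3437702) for the positive case. Condensed from the trace, with rpc.py paths shortened to the SPDK tree root, the working setup sequence is:

  PSK=/tmp/tmp.YA65rcPQ6V
  chmod 0600 "$PSK"   # tcp.c refuses a group- or world-readable PSK file
  scripts/rpc.py nvmf_create_transport -t tcp -o
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$PSK"

The -k flag on the listener opts in to the experimental TLS support, and --psk on nvmf_subsystem_add_host is the deprecated PSK-path form that produces the v24.09 removal warnings logged throughout this run.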
00:19:28.912 [2024-07-24 18:14:21.739447] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:29.479 18:14:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:29.479 18:14:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:29.479 18:14:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:29.479 18:14:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:29.479 18:14:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:29.479 18:14:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:29.479 18:14:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.YA65rcPQ6V 00:19:29.479 18:14:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.YA65rcPQ6V 00:19:29.479 18:14:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:29.738 [2024-07-24 18:14:22.590115] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:29.738 18:14:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:29.738 18:14:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:29.997 [2024-07-24 18:14:22.910943] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:29.997 [2024-07-24 18:14:22.911137] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:29.997 18:14:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:30.256 malloc0 00:19:30.256 18:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:30.256 18:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.YA65rcPQ6V 00:19:30.515 [2024-07-24 18:14:23.412269] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:30.515 18:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=3438007 00:19:30.515 18:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:30.515 18:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:30.515 18:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 3438007 /var/tmp/bdevperf.sock 00:19:30.515 18:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- 
# '[' -z 3438007 ']' 00:19:30.515 18:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:30.515 18:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:30.515 18:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:30.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:30.515 18:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:30.515 18:14:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:30.515 [2024-07-24 18:14:23.463038] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 00:19:30.515 [2024-07-24 18:14:23.463081] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3438007 ] 00:19:30.515 EAL: No free 2048 kB hugepages reported on node 1 00:19:30.515 [2024-07-24 18:14:23.511196] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:30.515 [2024-07-24 18:14:23.582945] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:31.451 18:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:31.451 18:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:31.451 18:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.YA65rcPQ6V 00:19:31.451 [2024-07-24 18:14:24.417151] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:31.451 [2024-07-24 18:14:24.417237] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:31.451 TLSTESTn1 00:19:31.451 18:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:19:31.710 18:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:19:31.710 "subsystems": [ 00:19:31.710 { 00:19:31.710 "subsystem": "keyring", 00:19:31.710 "config": [] 00:19:31.710 }, 00:19:31.710 { 00:19:31.710 "subsystem": "iobuf", 00:19:31.710 "config": [ 00:19:31.711 { 00:19:31.711 "method": "iobuf_set_options", 00:19:31.711 "params": { 00:19:31.711 "small_pool_count": 8192, 00:19:31.711 "large_pool_count": 1024, 00:19:31.711 "small_bufsize": 8192, 00:19:31.711 "large_bufsize": 135168 00:19:31.711 } 00:19:31.711 } 00:19:31.711 ] 00:19:31.711 }, 00:19:31.711 { 00:19:31.711 "subsystem": "sock", 00:19:31.711 "config": [ 00:19:31.711 { 00:19:31.711 "method": "sock_set_default_impl", 00:19:31.711 "params": { 00:19:31.711 "impl_name": "posix" 00:19:31.711 } 00:19:31.711 }, 00:19:31.711 { 00:19:31.711 "method": "sock_impl_set_options", 00:19:31.711 "params": { 00:19:31.711 "impl_name": "ssl", 00:19:31.711 "recv_buf_size": 4096, 00:19:31.711 "send_buf_size": 4096, 
00:19:31.711 "enable_recv_pipe": true, 00:19:31.711 "enable_quickack": false, 00:19:31.711 "enable_placement_id": 0, 00:19:31.711 "enable_zerocopy_send_server": true, 00:19:31.711 "enable_zerocopy_send_client": false, 00:19:31.711 "zerocopy_threshold": 0, 00:19:31.711 "tls_version": 0, 00:19:31.711 "enable_ktls": false 00:19:31.711 } 00:19:31.711 }, 00:19:31.711 { 00:19:31.711 "method": "sock_impl_set_options", 00:19:31.711 "params": { 00:19:31.711 "impl_name": "posix", 00:19:31.711 "recv_buf_size": 2097152, 00:19:31.711 "send_buf_size": 2097152, 00:19:31.711 "enable_recv_pipe": true, 00:19:31.711 "enable_quickack": false, 00:19:31.711 "enable_placement_id": 0, 00:19:31.711 "enable_zerocopy_send_server": true, 00:19:31.711 "enable_zerocopy_send_client": false, 00:19:31.711 "zerocopy_threshold": 0, 00:19:31.711 "tls_version": 0, 00:19:31.711 "enable_ktls": false 00:19:31.711 } 00:19:31.711 } 00:19:31.711 ] 00:19:31.711 }, 00:19:31.711 { 00:19:31.711 "subsystem": "vmd", 00:19:31.711 "config": [] 00:19:31.711 }, 00:19:31.711 { 00:19:31.711 "subsystem": "accel", 00:19:31.711 "config": [ 00:19:31.711 { 00:19:31.711 "method": "accel_set_options", 00:19:31.711 "params": { 00:19:31.711 "small_cache_size": 128, 00:19:31.711 "large_cache_size": 16, 00:19:31.711 "task_count": 2048, 00:19:31.711 "sequence_count": 2048, 00:19:31.711 "buf_count": 2048 00:19:31.711 } 00:19:31.711 } 00:19:31.711 ] 00:19:31.711 }, 00:19:31.711 { 00:19:31.711 "subsystem": "bdev", 00:19:31.711 "config": [ 00:19:31.711 { 00:19:31.711 "method": "bdev_set_options", 00:19:31.711 "params": { 00:19:31.711 "bdev_io_pool_size": 65535, 00:19:31.711 "bdev_io_cache_size": 256, 00:19:31.711 "bdev_auto_examine": true, 00:19:31.711 "iobuf_small_cache_size": 128, 00:19:31.711 "iobuf_large_cache_size": 16 00:19:31.711 } 00:19:31.711 }, 00:19:31.711 { 00:19:31.711 "method": "bdev_raid_set_options", 00:19:31.711 "params": { 00:19:31.711 "process_window_size_kb": 1024, 00:19:31.711 "process_max_bandwidth_mb_sec": 0 00:19:31.711 } 00:19:31.711 }, 00:19:31.711 { 00:19:31.711 "method": "bdev_iscsi_set_options", 00:19:31.711 "params": { 00:19:31.711 "timeout_sec": 30 00:19:31.711 } 00:19:31.711 }, 00:19:31.711 { 00:19:31.711 "method": "bdev_nvme_set_options", 00:19:31.711 "params": { 00:19:31.711 "action_on_timeout": "none", 00:19:31.711 "timeout_us": 0, 00:19:31.711 "timeout_admin_us": 0, 00:19:31.711 "keep_alive_timeout_ms": 10000, 00:19:31.711 "arbitration_burst": 0, 00:19:31.711 "low_priority_weight": 0, 00:19:31.711 "medium_priority_weight": 0, 00:19:31.711 "high_priority_weight": 0, 00:19:31.711 "nvme_adminq_poll_period_us": 10000, 00:19:31.711 "nvme_ioq_poll_period_us": 0, 00:19:31.711 "io_queue_requests": 0, 00:19:31.711 "delay_cmd_submit": true, 00:19:31.711 "transport_retry_count": 4, 00:19:31.711 "bdev_retry_count": 3, 00:19:31.711 "transport_ack_timeout": 0, 00:19:31.711 "ctrlr_loss_timeout_sec": 0, 00:19:31.711 "reconnect_delay_sec": 0, 00:19:31.711 "fast_io_fail_timeout_sec": 0, 00:19:31.711 "disable_auto_failback": false, 00:19:31.711 "generate_uuids": false, 00:19:31.711 "transport_tos": 0, 00:19:31.711 "nvme_error_stat": false, 00:19:31.711 "rdma_srq_size": 0, 00:19:31.711 "io_path_stat": false, 00:19:31.711 "allow_accel_sequence": false, 00:19:31.711 "rdma_max_cq_size": 0, 00:19:31.711 "rdma_cm_event_timeout_ms": 0, 00:19:31.711 "dhchap_digests": [ 00:19:31.711 "sha256", 00:19:31.711 "sha384", 00:19:31.711 "sha512" 00:19:31.711 ], 00:19:31.711 "dhchap_dhgroups": [ 00:19:31.711 "null", 00:19:31.711 "ffdhe2048", 00:19:31.711 
"ffdhe3072", 00:19:31.711 "ffdhe4096", 00:19:31.711 "ffdhe6144", 00:19:31.711 "ffdhe8192" 00:19:31.711 ] 00:19:31.711 } 00:19:31.711 }, 00:19:31.711 { 00:19:31.711 "method": "bdev_nvme_set_hotplug", 00:19:31.711 "params": { 00:19:31.711 "period_us": 100000, 00:19:31.711 "enable": false 00:19:31.711 } 00:19:31.711 }, 00:19:31.711 { 00:19:31.711 "method": "bdev_malloc_create", 00:19:31.711 "params": { 00:19:31.711 "name": "malloc0", 00:19:31.711 "num_blocks": 8192, 00:19:31.711 "block_size": 4096, 00:19:31.711 "physical_block_size": 4096, 00:19:31.711 "uuid": "e8352dda-2d2a-4a0e-aeb3-4d4cb33efb2f", 00:19:31.711 "optimal_io_boundary": 0, 00:19:31.711 "md_size": 0, 00:19:31.711 "dif_type": 0, 00:19:31.711 "dif_is_head_of_md": false, 00:19:31.711 "dif_pi_format": 0 00:19:31.711 } 00:19:31.711 }, 00:19:31.711 { 00:19:31.711 "method": "bdev_wait_for_examine" 00:19:31.711 } 00:19:31.711 ] 00:19:31.711 }, 00:19:31.711 { 00:19:31.711 "subsystem": "nbd", 00:19:31.711 "config": [] 00:19:31.711 }, 00:19:31.711 { 00:19:31.711 "subsystem": "scheduler", 00:19:31.711 "config": [ 00:19:31.711 { 00:19:31.711 "method": "framework_set_scheduler", 00:19:31.711 "params": { 00:19:31.711 "name": "static" 00:19:31.711 } 00:19:31.711 } 00:19:31.711 ] 00:19:31.711 }, 00:19:31.711 { 00:19:31.711 "subsystem": "nvmf", 00:19:31.711 "config": [ 00:19:31.711 { 00:19:31.711 "method": "nvmf_set_config", 00:19:31.711 "params": { 00:19:31.711 "discovery_filter": "match_any", 00:19:31.711 "admin_cmd_passthru": { 00:19:31.711 "identify_ctrlr": false 00:19:31.711 } 00:19:31.711 } 00:19:31.711 }, 00:19:31.711 { 00:19:31.711 "method": "nvmf_set_max_subsystems", 00:19:31.711 "params": { 00:19:31.711 "max_subsystems": 1024 00:19:31.711 } 00:19:31.711 }, 00:19:31.711 { 00:19:31.711 "method": "nvmf_set_crdt", 00:19:31.711 "params": { 00:19:31.711 "crdt1": 0, 00:19:31.711 "crdt2": 0, 00:19:31.711 "crdt3": 0 00:19:31.711 } 00:19:31.711 }, 00:19:31.711 { 00:19:31.711 "method": "nvmf_create_transport", 00:19:31.711 "params": { 00:19:31.711 "trtype": "TCP", 00:19:31.711 "max_queue_depth": 128, 00:19:31.711 "max_io_qpairs_per_ctrlr": 127, 00:19:31.711 "in_capsule_data_size": 4096, 00:19:31.711 "max_io_size": 131072, 00:19:31.711 "io_unit_size": 131072, 00:19:31.711 "max_aq_depth": 128, 00:19:31.711 "num_shared_buffers": 511, 00:19:31.711 "buf_cache_size": 4294967295, 00:19:31.711 "dif_insert_or_strip": false, 00:19:31.711 "zcopy": false, 00:19:31.711 "c2h_success": false, 00:19:31.711 "sock_priority": 0, 00:19:31.711 "abort_timeout_sec": 1, 00:19:31.711 "ack_timeout": 0, 00:19:31.711 "data_wr_pool_size": 0 00:19:31.711 } 00:19:31.711 }, 00:19:31.711 { 00:19:31.711 "method": "nvmf_create_subsystem", 00:19:31.711 "params": { 00:19:31.711 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:31.711 "allow_any_host": false, 00:19:31.711 "serial_number": "SPDK00000000000001", 00:19:31.712 "model_number": "SPDK bdev Controller", 00:19:31.712 "max_namespaces": 10, 00:19:31.712 "min_cntlid": 1, 00:19:31.712 "max_cntlid": 65519, 00:19:31.712 "ana_reporting": false 00:19:31.712 } 00:19:31.712 }, 00:19:31.712 { 00:19:31.712 "method": "nvmf_subsystem_add_host", 00:19:31.712 "params": { 00:19:31.712 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:31.712 "host": "nqn.2016-06.io.spdk:host1", 00:19:31.712 "psk": "/tmp/tmp.YA65rcPQ6V" 00:19:31.712 } 00:19:31.712 }, 00:19:31.712 { 00:19:31.712 "method": "nvmf_subsystem_add_ns", 00:19:31.712 "params": { 00:19:31.712 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:31.712 "namespace": { 00:19:31.712 "nsid": 1, 00:19:31.712 
"bdev_name": "malloc0", 00:19:31.712 "nguid": "E8352DDA2D2A4A0EAEB34D4CB33EFB2F", 00:19:31.712 "uuid": "e8352dda-2d2a-4a0e-aeb3-4d4cb33efb2f", 00:19:31.712 "no_auto_visible": false 00:19:31.712 } 00:19:31.712 } 00:19:31.712 }, 00:19:31.712 { 00:19:31.712 "method": "nvmf_subsystem_add_listener", 00:19:31.712 "params": { 00:19:31.712 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:31.712 "listen_address": { 00:19:31.712 "trtype": "TCP", 00:19:31.712 "adrfam": "IPv4", 00:19:31.712 "traddr": "10.0.0.2", 00:19:31.712 "trsvcid": "4420" 00:19:31.712 }, 00:19:31.712 "secure_channel": true 00:19:31.712 } 00:19:31.712 } 00:19:31.712 ] 00:19:31.712 } 00:19:31.712 ] 00:19:31.712 }' 00:19:31.712 18:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:31.970 18:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:19:31.970 "subsystems": [ 00:19:31.970 { 00:19:31.970 "subsystem": "keyring", 00:19:31.970 "config": [] 00:19:31.970 }, 00:19:31.970 { 00:19:31.970 "subsystem": "iobuf", 00:19:31.970 "config": [ 00:19:31.970 { 00:19:31.970 "method": "iobuf_set_options", 00:19:31.970 "params": { 00:19:31.970 "small_pool_count": 8192, 00:19:31.970 "large_pool_count": 1024, 00:19:31.970 "small_bufsize": 8192, 00:19:31.970 "large_bufsize": 135168 00:19:31.970 } 00:19:31.970 } 00:19:31.970 ] 00:19:31.970 }, 00:19:31.970 { 00:19:31.970 "subsystem": "sock", 00:19:31.970 "config": [ 00:19:31.970 { 00:19:31.970 "method": "sock_set_default_impl", 00:19:31.970 "params": { 00:19:31.970 "impl_name": "posix" 00:19:31.970 } 00:19:31.970 }, 00:19:31.970 { 00:19:31.970 "method": "sock_impl_set_options", 00:19:31.970 "params": { 00:19:31.970 "impl_name": "ssl", 00:19:31.970 "recv_buf_size": 4096, 00:19:31.970 "send_buf_size": 4096, 00:19:31.970 "enable_recv_pipe": true, 00:19:31.970 "enable_quickack": false, 00:19:31.970 "enable_placement_id": 0, 00:19:31.970 "enable_zerocopy_send_server": true, 00:19:31.971 "enable_zerocopy_send_client": false, 00:19:31.971 "zerocopy_threshold": 0, 00:19:31.971 "tls_version": 0, 00:19:31.971 "enable_ktls": false 00:19:31.971 } 00:19:31.971 }, 00:19:31.971 { 00:19:31.971 "method": "sock_impl_set_options", 00:19:31.971 "params": { 00:19:31.971 "impl_name": "posix", 00:19:31.971 "recv_buf_size": 2097152, 00:19:31.971 "send_buf_size": 2097152, 00:19:31.971 "enable_recv_pipe": true, 00:19:31.971 "enable_quickack": false, 00:19:31.971 "enable_placement_id": 0, 00:19:31.971 "enable_zerocopy_send_server": true, 00:19:31.971 "enable_zerocopy_send_client": false, 00:19:31.971 "zerocopy_threshold": 0, 00:19:31.971 "tls_version": 0, 00:19:31.971 "enable_ktls": false 00:19:31.971 } 00:19:31.971 } 00:19:31.971 ] 00:19:31.971 }, 00:19:31.971 { 00:19:31.971 "subsystem": "vmd", 00:19:31.971 "config": [] 00:19:31.971 }, 00:19:31.971 { 00:19:31.971 "subsystem": "accel", 00:19:31.971 "config": [ 00:19:31.971 { 00:19:31.971 "method": "accel_set_options", 00:19:31.971 "params": { 00:19:31.971 "small_cache_size": 128, 00:19:31.971 "large_cache_size": 16, 00:19:31.971 "task_count": 2048, 00:19:31.971 "sequence_count": 2048, 00:19:31.971 "buf_count": 2048 00:19:31.971 } 00:19:31.971 } 00:19:31.971 ] 00:19:31.971 }, 00:19:31.971 { 00:19:31.971 "subsystem": "bdev", 00:19:31.971 "config": [ 00:19:31.971 { 00:19:31.971 "method": "bdev_set_options", 00:19:31.971 "params": { 00:19:31.971 "bdev_io_pool_size": 65535, 00:19:31.971 "bdev_io_cache_size": 256, 00:19:31.971 
"bdev_auto_examine": true, 00:19:31.971 "iobuf_small_cache_size": 128, 00:19:31.971 "iobuf_large_cache_size": 16 00:19:31.971 } 00:19:31.971 }, 00:19:31.971 { 00:19:31.971 "method": "bdev_raid_set_options", 00:19:31.971 "params": { 00:19:31.971 "process_window_size_kb": 1024, 00:19:31.971 "process_max_bandwidth_mb_sec": 0 00:19:31.971 } 00:19:31.971 }, 00:19:31.971 { 00:19:31.971 "method": "bdev_iscsi_set_options", 00:19:31.971 "params": { 00:19:31.971 "timeout_sec": 30 00:19:31.971 } 00:19:31.971 }, 00:19:31.971 { 00:19:31.971 "method": "bdev_nvme_set_options", 00:19:31.971 "params": { 00:19:31.971 "action_on_timeout": "none", 00:19:31.971 "timeout_us": 0, 00:19:31.971 "timeout_admin_us": 0, 00:19:31.971 "keep_alive_timeout_ms": 10000, 00:19:31.971 "arbitration_burst": 0, 00:19:31.971 "low_priority_weight": 0, 00:19:31.971 "medium_priority_weight": 0, 00:19:31.971 "high_priority_weight": 0, 00:19:31.971 "nvme_adminq_poll_period_us": 10000, 00:19:31.971 "nvme_ioq_poll_period_us": 0, 00:19:31.971 "io_queue_requests": 512, 00:19:31.971 "delay_cmd_submit": true, 00:19:31.971 "transport_retry_count": 4, 00:19:31.971 "bdev_retry_count": 3, 00:19:31.971 "transport_ack_timeout": 0, 00:19:31.971 "ctrlr_loss_timeout_sec": 0, 00:19:31.971 "reconnect_delay_sec": 0, 00:19:31.971 "fast_io_fail_timeout_sec": 0, 00:19:31.971 "disable_auto_failback": false, 00:19:31.971 "generate_uuids": false, 00:19:31.971 "transport_tos": 0, 00:19:31.971 "nvme_error_stat": false, 00:19:31.971 "rdma_srq_size": 0, 00:19:31.971 "io_path_stat": false, 00:19:31.971 "allow_accel_sequence": false, 00:19:31.971 "rdma_max_cq_size": 0, 00:19:31.971 "rdma_cm_event_timeout_ms": 0, 00:19:31.971 "dhchap_digests": [ 00:19:31.971 "sha256", 00:19:31.971 "sha384", 00:19:31.971 "sha512" 00:19:31.971 ], 00:19:31.971 "dhchap_dhgroups": [ 00:19:31.971 "null", 00:19:31.971 "ffdhe2048", 00:19:31.971 "ffdhe3072", 00:19:31.971 "ffdhe4096", 00:19:31.971 "ffdhe6144", 00:19:31.971 "ffdhe8192" 00:19:31.971 ] 00:19:31.971 } 00:19:31.971 }, 00:19:31.971 { 00:19:31.971 "method": "bdev_nvme_attach_controller", 00:19:31.971 "params": { 00:19:31.971 "name": "TLSTEST", 00:19:31.971 "trtype": "TCP", 00:19:31.971 "adrfam": "IPv4", 00:19:31.971 "traddr": "10.0.0.2", 00:19:31.971 "trsvcid": "4420", 00:19:31.971 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:31.971 "prchk_reftag": false, 00:19:31.971 "prchk_guard": false, 00:19:31.971 "ctrlr_loss_timeout_sec": 0, 00:19:31.971 "reconnect_delay_sec": 0, 00:19:31.971 "fast_io_fail_timeout_sec": 0, 00:19:31.971 "psk": "/tmp/tmp.YA65rcPQ6V", 00:19:31.971 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:31.971 "hdgst": false, 00:19:31.971 "ddgst": false 00:19:31.971 } 00:19:31.971 }, 00:19:31.971 { 00:19:31.971 "method": "bdev_nvme_set_hotplug", 00:19:31.971 "params": { 00:19:31.971 "period_us": 100000, 00:19:31.971 "enable": false 00:19:31.971 } 00:19:31.971 }, 00:19:31.971 { 00:19:31.971 "method": "bdev_wait_for_examine" 00:19:31.971 } 00:19:31.971 ] 00:19:31.971 }, 00:19:31.971 { 00:19:31.971 "subsystem": "nbd", 00:19:31.971 "config": [] 00:19:31.971 } 00:19:31.971 ] 00:19:31.971 }' 00:19:31.971 18:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # killprocess 3438007 00:19:31.971 18:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3438007 ']' 00:19:31.971 18:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3438007 00:19:31.971 18:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 
00:19:31.971 18:14:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:31.971 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3438007 00:19:31.971 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:31.971 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:31.971 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3438007' 00:19:31.971 killing process with pid 3438007 00:19:31.971 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3438007 00:19:31.971 Received shutdown signal, test time was about 10.000000 seconds 00:19:31.971 00:19:31.971 Latency(us) 00:19:31.971 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:31.971 =================================================================================================================== 00:19:31.971 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:31.971 [2024-07-24 18:14:25.039162] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:31.971 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3438007 00:19:32.229 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@200 -- # killprocess 3437702 00:19:32.229 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3437702 ']' 00:19:32.229 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3437702 00:19:32.229 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:32.229 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:32.229 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3437702 00:19:32.229 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:32.229 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:32.229 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3437702' 00:19:32.229 killing process with pid 3437702 00:19:32.229 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3437702 00:19:32.229 [2024-07-24 18:14:25.262400] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:32.229 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3437702 00:19:32.487 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:19:32.487 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:32.487 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:32.487 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:19:32.487 "subsystems": [ 00:19:32.487 { 00:19:32.487 "subsystem": "keyring", 00:19:32.487 "config": [] 00:19:32.487 }, 00:19:32.487 { 00:19:32.487 
"subsystem": "iobuf", 00:19:32.487 "config": [ 00:19:32.487 { 00:19:32.487 "method": "iobuf_set_options", 00:19:32.487 "params": { 00:19:32.487 "small_pool_count": 8192, 00:19:32.487 "large_pool_count": 1024, 00:19:32.487 "small_bufsize": 8192, 00:19:32.487 "large_bufsize": 135168 00:19:32.487 } 00:19:32.487 } 00:19:32.487 ] 00:19:32.487 }, 00:19:32.487 { 00:19:32.487 "subsystem": "sock", 00:19:32.487 "config": [ 00:19:32.487 { 00:19:32.487 "method": "sock_set_default_impl", 00:19:32.487 "params": { 00:19:32.487 "impl_name": "posix" 00:19:32.487 } 00:19:32.487 }, 00:19:32.487 { 00:19:32.487 "method": "sock_impl_set_options", 00:19:32.487 "params": { 00:19:32.487 "impl_name": "ssl", 00:19:32.487 "recv_buf_size": 4096, 00:19:32.487 "send_buf_size": 4096, 00:19:32.487 "enable_recv_pipe": true, 00:19:32.487 "enable_quickack": false, 00:19:32.487 "enable_placement_id": 0, 00:19:32.487 "enable_zerocopy_send_server": true, 00:19:32.487 "enable_zerocopy_send_client": false, 00:19:32.487 "zerocopy_threshold": 0, 00:19:32.487 "tls_version": 0, 00:19:32.487 "enable_ktls": false 00:19:32.487 } 00:19:32.487 }, 00:19:32.487 { 00:19:32.487 "method": "sock_impl_set_options", 00:19:32.487 "params": { 00:19:32.487 "impl_name": "posix", 00:19:32.487 "recv_buf_size": 2097152, 00:19:32.487 "send_buf_size": 2097152, 00:19:32.487 "enable_recv_pipe": true, 00:19:32.487 "enable_quickack": false, 00:19:32.487 "enable_placement_id": 0, 00:19:32.487 "enable_zerocopy_send_server": true, 00:19:32.487 "enable_zerocopy_send_client": false, 00:19:32.487 "zerocopy_threshold": 0, 00:19:32.487 "tls_version": 0, 00:19:32.487 "enable_ktls": false 00:19:32.487 } 00:19:32.487 } 00:19:32.487 ] 00:19:32.487 }, 00:19:32.487 { 00:19:32.487 "subsystem": "vmd", 00:19:32.487 "config": [] 00:19:32.487 }, 00:19:32.487 { 00:19:32.487 "subsystem": "accel", 00:19:32.487 "config": [ 00:19:32.487 { 00:19:32.487 "method": "accel_set_options", 00:19:32.487 "params": { 00:19:32.487 "small_cache_size": 128, 00:19:32.487 "large_cache_size": 16, 00:19:32.487 "task_count": 2048, 00:19:32.487 "sequence_count": 2048, 00:19:32.487 "buf_count": 2048 00:19:32.487 } 00:19:32.487 } 00:19:32.487 ] 00:19:32.487 }, 00:19:32.487 { 00:19:32.487 "subsystem": "bdev", 00:19:32.487 "config": [ 00:19:32.487 { 00:19:32.487 "method": "bdev_set_options", 00:19:32.487 "params": { 00:19:32.487 "bdev_io_pool_size": 65535, 00:19:32.487 "bdev_io_cache_size": 256, 00:19:32.487 "bdev_auto_examine": true, 00:19:32.487 "iobuf_small_cache_size": 128, 00:19:32.487 "iobuf_large_cache_size": 16 00:19:32.488 } 00:19:32.488 }, 00:19:32.488 { 00:19:32.488 "method": "bdev_raid_set_options", 00:19:32.488 "params": { 00:19:32.488 "process_window_size_kb": 1024, 00:19:32.488 "process_max_bandwidth_mb_sec": 0 00:19:32.488 } 00:19:32.488 }, 00:19:32.488 { 00:19:32.488 "method": "bdev_iscsi_set_options", 00:19:32.488 "params": { 00:19:32.488 "timeout_sec": 30 00:19:32.488 } 00:19:32.488 }, 00:19:32.488 { 00:19:32.488 "method": "bdev_nvme_set_options", 00:19:32.488 "params": { 00:19:32.488 "action_on_timeout": "none", 00:19:32.488 "timeout_us": 0, 00:19:32.488 "timeout_admin_us": 0, 00:19:32.488 "keep_alive_timeout_ms": 10000, 00:19:32.488 "arbitration_burst": 0, 00:19:32.488 "low_priority_weight": 0, 00:19:32.488 "medium_priority_weight": 0, 00:19:32.488 "high_priority_weight": 0, 00:19:32.488 "nvme_adminq_poll_period_us": 10000, 00:19:32.488 "nvme_ioq_poll_period_us": 0, 00:19:32.488 "io_queue_requests": 0, 00:19:32.488 "delay_cmd_submit": true, 00:19:32.488 "transport_retry_count": 4, 
00:19:32.488 "bdev_retry_count": 3, 00:19:32.488 "transport_ack_timeout": 0, 00:19:32.488 "ctrlr_loss_timeout_sec": 0, 00:19:32.488 "reconnect_delay_sec": 0, 00:19:32.488 "fast_io_fail_timeout_sec": 0, 00:19:32.488 "disable_auto_failback": false, 00:19:32.488 "generate_uuids": false, 00:19:32.488 "transport_tos": 0, 00:19:32.488 "nvme_error_stat": false, 00:19:32.488 "rdma_srq_size": 0, 00:19:32.488 "io_path_stat": false, 00:19:32.488 "allow_accel_sequence": false, 00:19:32.488 "rdma_max_cq_size": 0, 00:19:32.488 "rdma_cm_event_timeout_ms": 0, 00:19:32.488 "dhchap_digests": [ 00:19:32.488 "sha256", 00:19:32.488 "sha384", 00:19:32.488 "sha512" 00:19:32.488 ], 00:19:32.488 "dhchap_dhgroups": [ 00:19:32.488 "null", 00:19:32.488 "ffdhe2048", 00:19:32.488 "ffdhe3072", 00:19:32.488 "ffdhe4096", 00:19:32.488 "ffdhe6144", 00:19:32.488 "ffdhe8192" 00:19:32.488 ] 00:19:32.488 } 00:19:32.488 }, 00:19:32.488 { 00:19:32.488 "method": "bdev_nvme_set_hotplug", 00:19:32.488 "params": { 00:19:32.488 "period_us": 100000, 00:19:32.488 "enable": false 00:19:32.488 } 00:19:32.488 }, 00:19:32.488 { 00:19:32.488 "method": "bdev_malloc_create", 00:19:32.488 "params": { 00:19:32.488 "name": "malloc0", 00:19:32.488 "num_blocks": 8192, 00:19:32.488 "block_size": 4096, 00:19:32.488 "physical_block_size": 4096, 00:19:32.488 "uuid": "e8352dda-2d2a-4a0e-aeb3-4d4cb33efb2f", 00:19:32.488 "optimal_io_boundary": 0, 00:19:32.488 "md_size": 0, 00:19:32.488 "dif_type": 0, 00:19:32.488 "dif_is_head_of_md": false, 00:19:32.488 "dif_pi_format": 0 00:19:32.488 } 00:19:32.488 }, 00:19:32.488 { 00:19:32.488 "method": "bdev_wait_for_examine" 00:19:32.488 } 00:19:32.488 ] 00:19:32.488 }, 00:19:32.488 { 00:19:32.488 "subsystem": "nbd", 00:19:32.488 "config": [] 00:19:32.488 }, 00:19:32.488 { 00:19:32.488 "subsystem": "scheduler", 00:19:32.488 "config": [ 00:19:32.488 { 00:19:32.488 "method": "framework_set_scheduler", 00:19:32.488 "params": { 00:19:32.488 "name": "static" 00:19:32.488 } 00:19:32.488 } 00:19:32.488 ] 00:19:32.488 }, 00:19:32.488 { 00:19:32.488 "subsystem": "nvmf", 00:19:32.488 "config": [ 00:19:32.488 { 00:19:32.488 "method": "nvmf_set_config", 00:19:32.488 "params": { 00:19:32.488 "discovery_filter": "match_any", 00:19:32.488 "admin_cmd_passthru": { 00:19:32.488 "identify_ctrlr": false 00:19:32.488 } 00:19:32.488 } 00:19:32.488 }, 00:19:32.488 { 00:19:32.488 "method": "nvmf_set_max_subsystems", 00:19:32.488 "params": { 00:19:32.488 "max_subsystems": 1024 00:19:32.488 } 00:19:32.488 }, 00:19:32.488 { 00:19:32.488 "method": "nvmf_set_crdt", 00:19:32.488 "params": { 00:19:32.488 "crdt1": 0, 00:19:32.488 "crdt2": 0, 00:19:32.488 "crdt3": 0 00:19:32.488 } 00:19:32.488 }, 00:19:32.488 { 00:19:32.488 "method": "nvmf_create_transport", 00:19:32.488 "params": { 00:19:32.488 "trtype": "TCP", 00:19:32.488 "max_queue_depth": 128, 00:19:32.488 "max_io_qpairs_per_ctrlr": 127, 00:19:32.488 "in_capsule_data_size": 4096, 00:19:32.488 "max_io_size": 131072, 00:19:32.488 "io_unit_size": 131072, 00:19:32.488 "max_aq_depth": 128, 00:19:32.488 "num_shared_buffers": 511, 00:19:32.488 "buf_cache_size": 4294967295, 00:19:32.488 "dif_insert_or_strip": false, 00:19:32.488 "zcopy": false, 00:19:32.488 "c2h_success": false, 00:19:32.488 "sock_priority": 0, 00:19:32.488 "abort_timeout_sec": 1, 00:19:32.488 "ack_timeout": 0, 00:19:32.488 "data_wr_pool_size": 0 00:19:32.488 } 00:19:32.488 }, 00:19:32.488 { 00:19:32.488 "method": "nvmf_create_subsystem", 00:19:32.488 "params": { 00:19:32.488 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:32.488 
"allow_any_host": false, 00:19:32.488 "serial_number": "SPDK00000000000001", 00:19:32.488 "model_number": "SPDK bdev Controller", 00:19:32.488 "max_namespaces": 10, 00:19:32.488 "min_cntlid": 1, 00:19:32.488 "max_cntlid": 65519, 00:19:32.488 "ana_reporting": false 00:19:32.488 } 00:19:32.488 }, 00:19:32.488 { 00:19:32.488 "method": "nvmf_subsystem_add_host", 00:19:32.488 "params": { 00:19:32.488 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:32.488 "host": "nqn.2016-06.io.spdk:host1", 00:19:32.488 "psk": "/tmp/tmp.YA65rcPQ6V" 00:19:32.488 } 00:19:32.488 }, 00:19:32.488 { 00:19:32.488 "method": "nvmf_subsystem_add_ns", 00:19:32.488 "params": { 00:19:32.488 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:32.488 "namespace": { 00:19:32.488 "nsid": 1, 00:19:32.488 "bdev_name": "malloc0", 00:19:32.488 "nguid": "E8352DDA2D2A4A0EAEB34D4CB33EFB2F", 00:19:32.488 "uuid": "e8352dda-2d2a-4a0e-aeb3-4d4cb33efb2f", 00:19:32.488 "no_auto_visible": false 00:19:32.488 } 00:19:32.488 } 00:19:32.488 }, 00:19:32.488 { 00:19:32.488 "method": "nvmf_subsystem_add_listener", 00:19:32.488 "params": { 00:19:32.488 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:32.488 "listen_address": { 00:19:32.488 "trtype": "TCP", 00:19:32.488 "adrfam": "IPv4", 00:19:32.488 "traddr": "10.0.0.2", 00:19:32.488 "trsvcid": "4420" 00:19:32.488 }, 00:19:32.488 "secure_channel": true 00:19:32.488 } 00:19:32.488 } 00:19:32.488 ] 00:19:32.488 } 00:19:32.488 ] 00:19:32.488 }' 00:19:32.488 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:32.488 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3438264 00:19:32.488 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:19:32.488 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3438264 00:19:32.488 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3438264 ']' 00:19:32.488 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:32.488 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:32.488 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:32.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:32.488 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:32.488 18:14:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:32.488 [2024-07-24 18:14:25.503982] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 00:19:32.488 [2024-07-24 18:14:25.504030] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:32.488 EAL: No free 2048 kB hugepages reported on node 1 00:19:32.488 [2024-07-24 18:14:25.563803] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:32.747 [2024-07-24 18:14:25.635327] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:19:32.747 [2024-07-24 18:14:25.635367] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:32.747 [2024-07-24 18:14:25.635374] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:32.747 [2024-07-24 18:14:25.635379] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:32.747 [2024-07-24 18:14:25.635384] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:32.747 [2024-07-24 18:14:25.635453] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:33.005 [2024-07-24 18:14:25.838166] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:33.005 [2024-07-24 18:14:25.873803] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:33.005 [2024-07-24 18:14:25.889859] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:33.005 [2024-07-24 18:14:25.890026] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:33.264 18:14:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:33.264 18:14:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:33.264 18:14:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:33.264 18:14:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:33.264 18:14:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:33.265 18:14:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:33.265 18:14:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=3438509 00:19:33.265 18:14:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 3438509 /var/tmp/bdevperf.sock 00:19:33.265 18:14:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3438509 ']' 00:19:33.265 18:14:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:33.265 18:14:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:19:33.265 "subsystems": [ 00:19:33.265 { 00:19:33.265 "subsystem": "keyring", 00:19:33.265 "config": [] 00:19:33.265 }, 00:19:33.265 { 00:19:33.265 "subsystem": "iobuf", 00:19:33.265 "config": [ 00:19:33.265 { 00:19:33.265 "method": "iobuf_set_options", 00:19:33.265 "params": { 00:19:33.265 "small_pool_count": 8192, 00:19:33.265 "large_pool_count": 1024, 00:19:33.265 "small_bufsize": 8192, 00:19:33.265 "large_bufsize": 135168 00:19:33.265 } 00:19:33.265 } 00:19:33.265 ] 00:19:33.265 }, 00:19:33.265 { 00:19:33.265 "subsystem": "sock", 00:19:33.265 "config": [ 00:19:33.265 { 00:19:33.265 "method": "sock_set_default_impl", 00:19:33.265 "params": { 00:19:33.265 "impl_name": "posix" 00:19:33.265 } 00:19:33.265 }, 00:19:33.265 { 00:19:33.265 "method": "sock_impl_set_options", 00:19:33.265 "params": { 00:19:33.265 "impl_name": "ssl", 00:19:33.265 "recv_buf_size": 4096, 00:19:33.265 "send_buf_size": 4096, 00:19:33.265 "enable_recv_pipe": true, 00:19:33.265 "enable_quickack": false, 00:19:33.265 "enable_placement_id": 0, 00:19:33.265 "enable_zerocopy_send_server": true, 
00:19:33.265 "enable_zerocopy_send_client": false, 00:19:33.265 "zerocopy_threshold": 0, 00:19:33.265 "tls_version": 0, 00:19:33.265 "enable_ktls": false 00:19:33.265 } 00:19:33.265 }, 00:19:33.265 { 00:19:33.265 "method": "sock_impl_set_options", 00:19:33.265 "params": { 00:19:33.265 "impl_name": "posix", 00:19:33.265 "recv_buf_size": 2097152, 00:19:33.265 "send_buf_size": 2097152, 00:19:33.265 "enable_recv_pipe": true, 00:19:33.265 "enable_quickack": false, 00:19:33.265 "enable_placement_id": 0, 00:19:33.265 "enable_zerocopy_send_server": true, 00:19:33.265 "enable_zerocopy_send_client": false, 00:19:33.265 "zerocopy_threshold": 0, 00:19:33.265 "tls_version": 0, 00:19:33.265 "enable_ktls": false 00:19:33.265 } 00:19:33.265 } 00:19:33.265 ] 00:19:33.265 }, 00:19:33.265 { 00:19:33.265 "subsystem": "vmd", 00:19:33.265 "config": [] 00:19:33.265 }, 00:19:33.265 { 00:19:33.265 "subsystem": "accel", 00:19:33.265 "config": [ 00:19:33.265 { 00:19:33.265 "method": "accel_set_options", 00:19:33.265 "params": { 00:19:33.265 "small_cache_size": 128, 00:19:33.265 "large_cache_size": 16, 00:19:33.265 "task_count": 2048, 00:19:33.265 "sequence_count": 2048, 00:19:33.265 "buf_count": 2048 00:19:33.265 } 00:19:33.265 } 00:19:33.265 ] 00:19:33.265 }, 00:19:33.265 { 00:19:33.265 "subsystem": "bdev", 00:19:33.265 "config": [ 00:19:33.265 { 00:19:33.265 "method": "bdev_set_options", 00:19:33.265 "params": { 00:19:33.265 "bdev_io_pool_size": 65535, 00:19:33.265 "bdev_io_cache_size": 256, 00:19:33.265 "bdev_auto_examine": true, 00:19:33.265 "iobuf_small_cache_size": 128, 00:19:33.265 "iobuf_large_cache_size": 16 00:19:33.265 } 00:19:33.265 }, 00:19:33.265 { 00:19:33.265 "method": "bdev_raid_set_options", 00:19:33.265 "params": { 00:19:33.265 "process_window_size_kb": 1024, 00:19:33.265 "process_max_bandwidth_mb_sec": 0 00:19:33.265 } 00:19:33.265 }, 00:19:33.265 { 00:19:33.265 "method": "bdev_iscsi_set_options", 00:19:33.265 "params": { 00:19:33.265 "timeout_sec": 30 00:19:33.265 } 00:19:33.265 }, 00:19:33.265 { 00:19:33.265 "method": "bdev_nvme_set_options", 00:19:33.265 "params": { 00:19:33.265 "action_on_timeout": "none", 00:19:33.265 "timeout_us": 0, 00:19:33.265 "timeout_admin_us": 0, 00:19:33.265 "keep_alive_timeout_ms": 10000, 00:19:33.265 "arbitration_burst": 0, 00:19:33.265 "low_priority_weight": 0, 00:19:33.265 "medium_priority_weight": 0, 00:19:33.265 "high_priority_weight": 0, 00:19:33.265 "nvme_adminq_poll_period_us": 10000, 00:19:33.265 "nvme_ioq_poll_period_us": 0, 00:19:33.265 "io_queue_requests": 512, 00:19:33.265 "delay_cmd_submit": true, 00:19:33.265 "transport_retry_count": 4, 00:19:33.265 "bdev_retry_count": 3, 00:19:33.265 "transport_ack_timeout": 0, 00:19:33.265 "ctrlr_loss_timeout_sec": 0, 00:19:33.265 "reconnect_delay_sec": 0, 00:19:33.265 "fast_io_fail_timeout_sec": 0, 00:19:33.265 "disable_auto_failback": false, 00:19:33.265 "generate_uuids": false, 00:19:33.265 "transport_tos": 0, 00:19:33.265 "nvme_error_stat": false, 00:19:33.265 "rdma_srq_size": 0, 00:19:33.265 "io_path_stat": false, 00:19:33.265 "allow_accel_sequence": false, 00:19:33.265 "rdma_max_cq_size": 0, 00:19:33.265 "rdma_cm_event_timeout_ms": 0, 00:19:33.265 "dhchap_digests": [ 00:19:33.265 "sha256", 00:19:33.265 "sha384", 00:19:33.265 "sha512" 00:19:33.265 ], 00:19:33.265 "dhchap_dhgroups": [ 00:19:33.265 "null", 00:19:33.265 "ffdhe2048", 00:19:33.265 "ffdhe3072", 00:19:33.265 "ffdhe4096", 00:19:33.265 "ffdhe6144", 00:19:33.265 "ffdhe8192" 00:19:33.265 ] 00:19:33.265 } 00:19:33.265 }, 00:19:33.265 { 00:19:33.265 
"method": "bdev_nvme_attach_controller", 00:19:33.265 "params": { 00:19:33.265 "name": "TLSTEST", 00:19:33.265 "trtype": "TCP", 00:19:33.265 "adrfam": "IPv4", 00:19:33.265 "traddr": "10.0.0.2", 00:19:33.265 "trsvcid": "4420", 00:19:33.265 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:33.265 "prchk_reftag": false, 00:19:33.265 "prchk_guard": false, 00:19:33.265 "ctrlr_loss_timeout_sec": 0, 00:19:33.265 "reconnect_delay_sec": 0, 00:19:33.265 "fast_io_fail_timeout_sec": 0, 00:19:33.265 "psk": "/tmp/tmp.YA65rcPQ6V", 00:19:33.265 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:33.265 "hdgst": false, 00:19:33.265 "ddgst": false 00:19:33.265 } 00:19:33.265 }, 00:19:33.265 { 00:19:33.265 "method": "bdev_nvme_set_hotplug", 00:19:33.265 "params": { 00:19:33.265 "period_us": 100000, 00:19:33.265 "enable": false 00:19:33.265 } 00:19:33.265 }, 00:19:33.265 { 00:19:33.265 "method": "bdev_wait_for_examine" 00:19:33.265 } 00:19:33.265 ] 00:19:33.265 }, 00:19:33.265 { 00:19:33.265 "subsystem": "nbd", 00:19:33.265 "config": [] 00:19:33.265 } 00:19:33.265 ] 00:19:33.265 }' 00:19:33.265 18:14:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:19:33.265 18:14:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:33.266 18:14:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:33.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:33.266 18:14:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:33.266 18:14:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:33.525 [2024-07-24 18:14:26.385503] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 00:19:33.525 [2024-07-24 18:14:26.385551] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3438509 ] 00:19:33.525 EAL: No free 2048 kB hugepages reported on node 1 00:19:33.525 [2024-07-24 18:14:26.434014] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:33.525 [2024-07-24 18:14:26.506826] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:33.784 [2024-07-24 18:14:26.647758] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:33.784 [2024-07-24 18:14:26.647834] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:34.351 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:34.351 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:34.351 18:14:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:34.351 Running I/O for 10 seconds... 
00:19:44.396 00:19:44.396 Latency(us) 00:19:44.396 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:44.396 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:44.396 Verification LBA range: start 0x0 length 0x2000 00:19:44.396 TLSTESTn1 : 10.02 5639.85 22.03 0.00 0.00 22657.25 6678.43 28461.35 00:19:44.396 =================================================================================================================== 00:19:44.396 Total : 5639.85 22.03 0.00 0.00 22657.25 6678.43 28461.35 00:19:44.396 0 00:19:44.396 18:14:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:44.396 18:14:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@214 -- # killprocess 3438509 00:19:44.396 18:14:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3438509 ']' 00:19:44.396 18:14:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3438509 00:19:44.396 18:14:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:44.396 18:14:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:44.396 18:14:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3438509 00:19:44.396 18:14:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:19:44.396 18:14:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:19:44.396 18:14:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3438509' 00:19:44.396 killing process with pid 3438509 00:19:44.396 18:14:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3438509 00:19:44.396 Received shutdown signal, test time was about 10.000000 seconds 00:19:44.396 00:19:44.396 Latency(us) 00:19:44.396 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:44.396 =================================================================================================================== 00:19:44.396 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:44.396 [2024-07-24 18:14:37.383371] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:44.396 18:14:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3438509 00:19:44.655 18:14:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # killprocess 3438264 00:19:44.655 18:14:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3438264 ']' 00:19:44.655 18:14:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3438264 00:19:44.655 18:14:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:44.655 18:14:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:44.655 18:14:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3438264 00:19:44.655 18:14:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:44.655 18:14:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:44.655 18:14:37 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3438264' 00:19:44.655 killing process with pid 3438264 00:19:44.655 18:14:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3438264 00:19:44.655 [2024-07-24 18:14:37.605713] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:44.655 18:14:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3438264 00:19:44.915 18:14:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:19:44.915 18:14:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:44.915 18:14:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:44.915 18:14:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:44.915 18:14:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3440352 00:19:44.915 18:14:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3440352 00:19:44.915 18:14:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:44.915 18:14:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3440352 ']' 00:19:44.915 18:14:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:44.915 18:14:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:44.915 18:14:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:44.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:44.915 18:14:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:44.915 18:14:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:44.915 [2024-07-24 18:14:37.854896] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 00:19:44.915 [2024-07-24 18:14:37.854943] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:44.915 EAL: No free 2048 kB hugepages reported on node 1 00:19:44.915 [2024-07-24 18:14:37.909005] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:44.915 [2024-07-24 18:14:37.986480] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:44.915 [2024-07-24 18:14:37.986521] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:44.915 [2024-07-24 18:14:37.986528] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:44.915 [2024-07-24 18:14:37.986534] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:44.915 [2024-07-24 18:14:37.986538] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
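nvmfappstart above brings the target up inside the cvl_0_0_ns_spdk network namespace, keeping the e810 test ports off the host network stack, and then polls until the RPC socket answers before any configuration is sent. A simplified sketch of that start-up sequence, with the namespace and paths taken from this log and the waitforlisten retry loop reduced to its essence:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # start the target in the test namespace and remember its pid
  sudo ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF &
  nvmfpid=$!
  # simplified waitforlisten: poll the RPC socket until the app responds
  until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
      sleep 0.5
  done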
00:19:44.915 [2024-07-24 18:14:37.986562] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:45.851 18:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:45.851 18:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:45.851 18:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:45.851 18:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:45.851 18:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:45.851 18:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:45.851 18:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.YA65rcPQ6V 00:19:45.851 18:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.YA65rcPQ6V 00:19:45.851 18:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:45.851 [2024-07-24 18:14:38.850129] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:45.852 18:14:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:46.110 18:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:46.110 [2024-07-24 18:14:39.187003] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:46.110 [2024-07-24 18:14:39.187221] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:46.369 18:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:46.369 malloc0 00:19:46.369 18:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:46.628 18:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.YA65rcPQ6V 00:19:46.628 [2024-07-24 18:14:39.684279] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:46.628 18:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:46.628 18:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=3440616 00:19:46.628 18:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:46.628 18:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 3440616 /var/tmp/bdevperf.sock 00:19:46.628 18:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' 
-z 3440616 ']' 00:19:46.628 18:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:46.628 18:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:46.628 18:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:46.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:46.628 18:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:46.628 18:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:46.887 [2024-07-24 18:14:39.734394] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 00:19:46.887 [2024-07-24 18:14:39.734442] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3440616 ] 00:19:46.887 EAL: No free 2048 kB hugepages reported on node 1 00:19:46.887 [2024-07-24 18:14:39.791499] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:46.887 [2024-07-24 18:14:39.866152] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:46.887 18:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:46.887 18:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:46.887 18:14:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.YA65rcPQ6V 00:19:47.145 18:14:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:47.404 [2024-07-24 18:14:40.275878] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:47.404 nvme0n1 00:19:47.404 18:14:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:47.404 Running I/O for 1 seconds... 
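This second run replaces the deprecated file-path PSK of the first with the keyring interface: the key file is registered once under the name key0 and the attach call then references it by name. Condensed from the RPCs traced in the log (target-side calls from the setup_nvmf_tgt sequence above, same key file):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # target: TLS-capable listener (-k) plus a namespace and a host entry bound to the PSK
  $RPC nvmf_create_transport -t tcp -o
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  $RPC bdev_malloc_create 32 4096 -b malloc0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.YA65rcPQ6V
  # initiator (bdevperf): name the key, then attach by reference
  $RPC -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.YA65rcPQ6V
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 \
      -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1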
00:19:48.781 00:19:48.781 Latency(us) 00:19:48.781 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:48.781 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:48.781 Verification LBA range: start 0x0 length 0x2000 00:19:48.781 nvme0n1 : 1.02 4674.42 18.26 0.00 0.00 27163.69 5118.05 31207.62 00:19:48.781 =================================================================================================================== 00:19:48.781 Total : 4674.42 18.26 0.00 0.00 27163.69 5118.05 31207.62 00:19:48.781 0 00:19:48.781 18:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # killprocess 3440616 00:19:48.781 18:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3440616 ']' 00:19:48.781 18:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3440616 00:19:48.781 18:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:48.781 18:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:48.781 18:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3440616 00:19:48.781 18:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:48.781 18:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:48.781 18:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3440616' 00:19:48.781 killing process with pid 3440616 00:19:48.781 18:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3440616 00:19:48.781 Received shutdown signal, test time was about 1.000000 seconds 00:19:48.781 00:19:48.781 Latency(us) 00:19:48.781 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:48.781 =================================================================================================================== 00:19:48.781 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:48.781 18:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3440616 00:19:48.781 18:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@235 -- # killprocess 3440352 00:19:48.782 18:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3440352 ']' 00:19:48.782 18:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3440352 00:19:48.782 18:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:48.782 18:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:48.782 18:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3440352 00:19:48.782 18:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:48.782 18:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:48.782 18:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3440352' 00:19:48.782 killing process with pid 3440352 00:19:48.782 18:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3440352 00:19:48.782 [2024-07-24 18:14:41.760199] app.c:1024:log_deprecation_hits: 
*WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:48.782 18:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3440352 00:19:49.041 18:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:19:49.041 18:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:49.041 18:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:49.041 18:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:49.041 18:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3441079 00:19:49.041 18:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3441079 00:19:49.041 18:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:49.041 18:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3441079 ']' 00:19:49.041 18:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:49.041 18:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:49.041 18:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:49.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:49.041 18:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:49.041 18:14:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:49.041 [2024-07-24 18:14:42.006733] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 00:19:49.041 [2024-07-24 18:14:42.006781] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:49.041 EAL: No free 2048 kB hugepages reported on node 1 00:19:49.041 [2024-07-24 18:14:42.062764] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:49.300 [2024-07-24 18:14:42.130750] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:49.300 [2024-07-24 18:14:42.130785] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:49.300 [2024-07-24 18:14:42.130792] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:49.300 [2024-07-24 18:14:42.130797] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:49.300 [2024-07-24 18:14:42.130802] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
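Every teardown in this section runs through the same killprocess helper, whose xtrace is visible above: check that the pid is still alive with kill -0, resolve the command name so the "killing process" message is meaningful, refuse to signal a bare sudo wrapper, then kill and reap. A simplified reading of that helper, inferred from the traced lines (the real autotest_common.sh version carries additional retry and sudo handling):

  killprocess() {
      local pid=$1
      [ -z "$pid" ] && return 1
      kill -0 "$pid" || return 1                 # still running?
      local process_name
      process_name=$(ps --no-headers -o comm= "$pid")
      [ "$process_name" = sudo ] && return 1     # never signal the sudo wrapper itself
      echo "killing process with pid $pid"
      kill "$pid" && wait "$pid"                 # terminate, then reap
  }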
00:19:49.300 [2024-07-24 18:14:42.130820] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:49.868 18:14:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:49.868 18:14:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:49.868 18:14:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:49.868 18:14:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:49.868 18:14:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:49.868 18:14:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:49.868 18:14:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:19:49.868 18:14:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.868 18:14:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:49.868 [2024-07-24 18:14:42.853891] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:49.868 malloc0 00:19:49.868 [2024-07-24 18:14:42.882063] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:49.868 [2024-07-24 18:14:42.892763] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:49.868 18:14:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.868 18:14:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=3441309 00:19:49.868 18:14:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 3441309 /var/tmp/bdevperf.sock 00:19:49.868 18:14:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@252 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:49.868 18:14:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3441309 ']' 00:19:49.868 18:14:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:49.868 18:14:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:49.868 18:14:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:49.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:49.868 18:14:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:49.868 18:14:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:50.127 [2024-07-24 18:14:42.963457] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 
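The bdevperf instance being launched here (pid 3441309) again uses -z, which keeps the application idle after initialization instead of starting a job immediately; that is what leaves room to inject the keyring entry and the TLS controller over /var/tmp/bdevperf.sock first. The pattern, sketched with the flags and paths from this log:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # -z: come up idle and wait to be configured over the RPC socket
  "$SPDK/build/examples/bdevperf" -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 &
  # ...add key0 and attach the controller via rpc.py -s /var/tmp/bdevperf.sock, then run:
  "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests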
00:19:50.127 [2024-07-24 18:14:42.963500] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3441309 ] 00:19:50.127 EAL: No free 2048 kB hugepages reported on node 1 00:19:50.127 [2024-07-24 18:14:43.018902] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:50.127 [2024-07-24 18:14:43.097940] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:50.692 18:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:50.692 18:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:50.692 18:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@257 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.YA65rcPQ6V 00:19:50.950 18:14:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:51.208 [2024-07-24 18:14:44.108165] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:51.208 nvme0n1 00:19:51.208 18:14:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@262 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:51.466 Running I/O for 1 seconds... 00:19:52.402 00:19:52.402 Latency(us) 00:19:52.402 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:52.402 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:52.402 Verification LBA range: start 0x0 length 0x2000 00:19:52.402 nvme0n1 : 1.01 5509.23 21.52 0.00 0.00 23055.98 6491.18 27213.04 00:19:52.402 =================================================================================================================== 00:19:52.402 Total : 5509.23 21.52 0.00 0.00 23055.98 6491.18 27213.04 00:19:52.402 0 00:19:52.402 18:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:19:52.402 18:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.402 18:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:52.402 18:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.402 18:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:19:52.402 "subsystems": [ 00:19:52.402 { 00:19:52.402 "subsystem": "keyring", 00:19:52.402 "config": [ 00:19:52.402 { 00:19:52.402 "method": "keyring_file_add_key", 00:19:52.402 "params": { 00:19:52.402 "name": "key0", 00:19:52.402 "path": "/tmp/tmp.YA65rcPQ6V" 00:19:52.402 } 00:19:52.402 } 00:19:52.402 ] 00:19:52.402 }, 00:19:52.402 { 00:19:52.402 "subsystem": "iobuf", 00:19:52.402 "config": [ 00:19:52.402 { 00:19:52.402 "method": "iobuf_set_options", 00:19:52.402 "params": { 00:19:52.402 "small_pool_count": 8192, 00:19:52.402 "large_pool_count": 1024, 00:19:52.402 "small_bufsize": 8192, 00:19:52.402 "large_bufsize": 135168 00:19:52.402 } 00:19:52.402 } 00:19:52.402 ] 00:19:52.402 }, 00:19:52.402 { 00:19:52.402 
"subsystem": "sock", 00:19:52.402 "config": [ 00:19:52.402 { 00:19:52.402 "method": "sock_set_default_impl", 00:19:52.402 "params": { 00:19:52.402 "impl_name": "posix" 00:19:52.402 } 00:19:52.402 }, 00:19:52.402 { 00:19:52.402 "method": "sock_impl_set_options", 00:19:52.402 "params": { 00:19:52.402 "impl_name": "ssl", 00:19:52.402 "recv_buf_size": 4096, 00:19:52.402 "send_buf_size": 4096, 00:19:52.402 "enable_recv_pipe": true, 00:19:52.402 "enable_quickack": false, 00:19:52.402 "enable_placement_id": 0, 00:19:52.402 "enable_zerocopy_send_server": true, 00:19:52.402 "enable_zerocopy_send_client": false, 00:19:52.402 "zerocopy_threshold": 0, 00:19:52.402 "tls_version": 0, 00:19:52.402 "enable_ktls": false 00:19:52.402 } 00:19:52.402 }, 00:19:52.402 { 00:19:52.402 "method": "sock_impl_set_options", 00:19:52.402 "params": { 00:19:52.402 "impl_name": "posix", 00:19:52.402 "recv_buf_size": 2097152, 00:19:52.402 "send_buf_size": 2097152, 00:19:52.402 "enable_recv_pipe": true, 00:19:52.402 "enable_quickack": false, 00:19:52.402 "enable_placement_id": 0, 00:19:52.402 "enable_zerocopy_send_server": true, 00:19:52.402 "enable_zerocopy_send_client": false, 00:19:52.402 "zerocopy_threshold": 0, 00:19:52.402 "tls_version": 0, 00:19:52.402 "enable_ktls": false 00:19:52.402 } 00:19:52.402 } 00:19:52.402 ] 00:19:52.402 }, 00:19:52.402 { 00:19:52.402 "subsystem": "vmd", 00:19:52.402 "config": [] 00:19:52.402 }, 00:19:52.402 { 00:19:52.402 "subsystem": "accel", 00:19:52.402 "config": [ 00:19:52.402 { 00:19:52.402 "method": "accel_set_options", 00:19:52.402 "params": { 00:19:52.402 "small_cache_size": 128, 00:19:52.402 "large_cache_size": 16, 00:19:52.402 "task_count": 2048, 00:19:52.402 "sequence_count": 2048, 00:19:52.402 "buf_count": 2048 00:19:52.402 } 00:19:52.402 } 00:19:52.402 ] 00:19:52.402 }, 00:19:52.402 { 00:19:52.402 "subsystem": "bdev", 00:19:52.402 "config": [ 00:19:52.402 { 00:19:52.402 "method": "bdev_set_options", 00:19:52.402 "params": { 00:19:52.402 "bdev_io_pool_size": 65535, 00:19:52.402 "bdev_io_cache_size": 256, 00:19:52.402 "bdev_auto_examine": true, 00:19:52.402 "iobuf_small_cache_size": 128, 00:19:52.402 "iobuf_large_cache_size": 16 00:19:52.402 } 00:19:52.402 }, 00:19:52.402 { 00:19:52.402 "method": "bdev_raid_set_options", 00:19:52.402 "params": { 00:19:52.402 "process_window_size_kb": 1024, 00:19:52.402 "process_max_bandwidth_mb_sec": 0 00:19:52.402 } 00:19:52.402 }, 00:19:52.402 { 00:19:52.402 "method": "bdev_iscsi_set_options", 00:19:52.402 "params": { 00:19:52.402 "timeout_sec": 30 00:19:52.402 } 00:19:52.402 }, 00:19:52.402 { 00:19:52.402 "method": "bdev_nvme_set_options", 00:19:52.402 "params": { 00:19:52.402 "action_on_timeout": "none", 00:19:52.402 "timeout_us": 0, 00:19:52.402 "timeout_admin_us": 0, 00:19:52.402 "keep_alive_timeout_ms": 10000, 00:19:52.402 "arbitration_burst": 0, 00:19:52.402 "low_priority_weight": 0, 00:19:52.402 "medium_priority_weight": 0, 00:19:52.402 "high_priority_weight": 0, 00:19:52.402 "nvme_adminq_poll_period_us": 10000, 00:19:52.402 "nvme_ioq_poll_period_us": 0, 00:19:52.402 "io_queue_requests": 0, 00:19:52.402 "delay_cmd_submit": true, 00:19:52.402 "transport_retry_count": 4, 00:19:52.402 "bdev_retry_count": 3, 00:19:52.402 "transport_ack_timeout": 0, 00:19:52.402 "ctrlr_loss_timeout_sec": 0, 00:19:52.402 "reconnect_delay_sec": 0, 00:19:52.402 "fast_io_fail_timeout_sec": 0, 00:19:52.402 "disable_auto_failback": false, 00:19:52.402 "generate_uuids": false, 00:19:52.402 "transport_tos": 0, 00:19:52.402 "nvme_error_stat": false, 00:19:52.402 
"rdma_srq_size": 0, 00:19:52.402 "io_path_stat": false, 00:19:52.402 "allow_accel_sequence": false, 00:19:52.402 "rdma_max_cq_size": 0, 00:19:52.402 "rdma_cm_event_timeout_ms": 0, 00:19:52.402 "dhchap_digests": [ 00:19:52.402 "sha256", 00:19:52.402 "sha384", 00:19:52.402 "sha512" 00:19:52.402 ], 00:19:52.402 "dhchap_dhgroups": [ 00:19:52.402 "null", 00:19:52.402 "ffdhe2048", 00:19:52.402 "ffdhe3072", 00:19:52.402 "ffdhe4096", 00:19:52.402 "ffdhe6144", 00:19:52.402 "ffdhe8192" 00:19:52.402 ] 00:19:52.402 } 00:19:52.402 }, 00:19:52.402 { 00:19:52.402 "method": "bdev_nvme_set_hotplug", 00:19:52.402 "params": { 00:19:52.402 "period_us": 100000, 00:19:52.402 "enable": false 00:19:52.402 } 00:19:52.402 }, 00:19:52.402 { 00:19:52.402 "method": "bdev_malloc_create", 00:19:52.402 "params": { 00:19:52.402 "name": "malloc0", 00:19:52.402 "num_blocks": 8192, 00:19:52.402 "block_size": 4096, 00:19:52.402 "physical_block_size": 4096, 00:19:52.402 "uuid": "fb041ed8-fda3-411c-b53e-801674ce5646", 00:19:52.402 "optimal_io_boundary": 0, 00:19:52.402 "md_size": 0, 00:19:52.402 "dif_type": 0, 00:19:52.402 "dif_is_head_of_md": false, 00:19:52.402 "dif_pi_format": 0 00:19:52.402 } 00:19:52.402 }, 00:19:52.402 { 00:19:52.402 "method": "bdev_wait_for_examine" 00:19:52.402 } 00:19:52.402 ] 00:19:52.402 }, 00:19:52.402 { 00:19:52.402 "subsystem": "nbd", 00:19:52.402 "config": [] 00:19:52.402 }, 00:19:52.402 { 00:19:52.402 "subsystem": "scheduler", 00:19:52.402 "config": [ 00:19:52.402 { 00:19:52.402 "method": "framework_set_scheduler", 00:19:52.402 "params": { 00:19:52.402 "name": "static" 00:19:52.402 } 00:19:52.402 } 00:19:52.402 ] 00:19:52.402 }, 00:19:52.402 { 00:19:52.402 "subsystem": "nvmf", 00:19:52.402 "config": [ 00:19:52.402 { 00:19:52.402 "method": "nvmf_set_config", 00:19:52.402 "params": { 00:19:52.402 "discovery_filter": "match_any", 00:19:52.402 "admin_cmd_passthru": { 00:19:52.402 "identify_ctrlr": false 00:19:52.402 } 00:19:52.402 } 00:19:52.402 }, 00:19:52.402 { 00:19:52.402 "method": "nvmf_set_max_subsystems", 00:19:52.402 "params": { 00:19:52.402 "max_subsystems": 1024 00:19:52.402 } 00:19:52.402 }, 00:19:52.402 { 00:19:52.402 "method": "nvmf_set_crdt", 00:19:52.402 "params": { 00:19:52.402 "crdt1": 0, 00:19:52.402 "crdt2": 0, 00:19:52.402 "crdt3": 0 00:19:52.402 } 00:19:52.402 }, 00:19:52.402 { 00:19:52.402 "method": "nvmf_create_transport", 00:19:52.402 "params": { 00:19:52.402 "trtype": "TCP", 00:19:52.402 "max_queue_depth": 128, 00:19:52.403 "max_io_qpairs_per_ctrlr": 127, 00:19:52.403 "in_capsule_data_size": 4096, 00:19:52.403 "max_io_size": 131072, 00:19:52.403 "io_unit_size": 131072, 00:19:52.403 "max_aq_depth": 128, 00:19:52.403 "num_shared_buffers": 511, 00:19:52.403 "buf_cache_size": 4294967295, 00:19:52.403 "dif_insert_or_strip": false, 00:19:52.403 "zcopy": false, 00:19:52.403 "c2h_success": false, 00:19:52.403 "sock_priority": 0, 00:19:52.403 "abort_timeout_sec": 1, 00:19:52.403 "ack_timeout": 0, 00:19:52.403 "data_wr_pool_size": 0 00:19:52.403 } 00:19:52.403 }, 00:19:52.403 { 00:19:52.403 "method": "nvmf_create_subsystem", 00:19:52.403 "params": { 00:19:52.403 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:52.403 "allow_any_host": false, 00:19:52.403 "serial_number": "00000000000000000000", 00:19:52.403 "model_number": "SPDK bdev Controller", 00:19:52.403 "max_namespaces": 32, 00:19:52.403 "min_cntlid": 1, 00:19:52.403 "max_cntlid": 65519, 00:19:52.403 "ana_reporting": false 00:19:52.403 } 00:19:52.403 }, 00:19:52.403 { 00:19:52.403 "method": "nvmf_subsystem_add_host", 00:19:52.403 
"params": { 00:19:52.403 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:52.403 "host": "nqn.2016-06.io.spdk:host1", 00:19:52.403 "psk": "key0" 00:19:52.403 } 00:19:52.403 }, 00:19:52.403 { 00:19:52.403 "method": "nvmf_subsystem_add_ns", 00:19:52.403 "params": { 00:19:52.403 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:52.403 "namespace": { 00:19:52.403 "nsid": 1, 00:19:52.403 "bdev_name": "malloc0", 00:19:52.403 "nguid": "FB041ED8FDA3411CB53E801674CE5646", 00:19:52.403 "uuid": "fb041ed8-fda3-411c-b53e-801674ce5646", 00:19:52.403 "no_auto_visible": false 00:19:52.403 } 00:19:52.403 } 00:19:52.403 }, 00:19:52.403 { 00:19:52.403 "method": "nvmf_subsystem_add_listener", 00:19:52.403 "params": { 00:19:52.403 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:52.403 "listen_address": { 00:19:52.403 "trtype": "TCP", 00:19:52.403 "adrfam": "IPv4", 00:19:52.403 "traddr": "10.0.0.2", 00:19:52.403 "trsvcid": "4420" 00:19:52.403 }, 00:19:52.403 "secure_channel": false, 00:19:52.403 "sock_impl": "ssl" 00:19:52.403 } 00:19:52.403 } 00:19:52.403 ] 00:19:52.403 } 00:19:52.403 ] 00:19:52.403 }' 00:19:52.403 18:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:52.661 18:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:19:52.661 "subsystems": [ 00:19:52.661 { 00:19:52.661 "subsystem": "keyring", 00:19:52.661 "config": [ 00:19:52.661 { 00:19:52.661 "method": "keyring_file_add_key", 00:19:52.661 "params": { 00:19:52.661 "name": "key0", 00:19:52.661 "path": "/tmp/tmp.YA65rcPQ6V" 00:19:52.661 } 00:19:52.661 } 00:19:52.661 ] 00:19:52.661 }, 00:19:52.661 { 00:19:52.661 "subsystem": "iobuf", 00:19:52.661 "config": [ 00:19:52.661 { 00:19:52.661 "method": "iobuf_set_options", 00:19:52.661 "params": { 00:19:52.661 "small_pool_count": 8192, 00:19:52.661 "large_pool_count": 1024, 00:19:52.661 "small_bufsize": 8192, 00:19:52.661 "large_bufsize": 135168 00:19:52.661 } 00:19:52.661 } 00:19:52.661 ] 00:19:52.661 }, 00:19:52.661 { 00:19:52.661 "subsystem": "sock", 00:19:52.661 "config": [ 00:19:52.661 { 00:19:52.661 "method": "sock_set_default_impl", 00:19:52.661 "params": { 00:19:52.661 "impl_name": "posix" 00:19:52.661 } 00:19:52.661 }, 00:19:52.661 { 00:19:52.661 "method": "sock_impl_set_options", 00:19:52.661 "params": { 00:19:52.661 "impl_name": "ssl", 00:19:52.661 "recv_buf_size": 4096, 00:19:52.661 "send_buf_size": 4096, 00:19:52.661 "enable_recv_pipe": true, 00:19:52.661 "enable_quickack": false, 00:19:52.661 "enable_placement_id": 0, 00:19:52.661 "enable_zerocopy_send_server": true, 00:19:52.661 "enable_zerocopy_send_client": false, 00:19:52.661 "zerocopy_threshold": 0, 00:19:52.661 "tls_version": 0, 00:19:52.661 "enable_ktls": false 00:19:52.661 } 00:19:52.661 }, 00:19:52.661 { 00:19:52.661 "method": "sock_impl_set_options", 00:19:52.661 "params": { 00:19:52.661 "impl_name": "posix", 00:19:52.661 "recv_buf_size": 2097152, 00:19:52.661 "send_buf_size": 2097152, 00:19:52.661 "enable_recv_pipe": true, 00:19:52.661 "enable_quickack": false, 00:19:52.661 "enable_placement_id": 0, 00:19:52.661 "enable_zerocopy_send_server": true, 00:19:52.661 "enable_zerocopy_send_client": false, 00:19:52.661 "zerocopy_threshold": 0, 00:19:52.661 "tls_version": 0, 00:19:52.661 "enable_ktls": false 00:19:52.661 } 00:19:52.661 } 00:19:52.661 ] 00:19:52.661 }, 00:19:52.661 { 00:19:52.661 "subsystem": "vmd", 00:19:52.661 "config": [] 00:19:52.661 }, 00:19:52.662 { 00:19:52.662 "subsystem": 
"accel", 00:19:52.662 "config": [ 00:19:52.662 { 00:19:52.662 "method": "accel_set_options", 00:19:52.662 "params": { 00:19:52.662 "small_cache_size": 128, 00:19:52.662 "large_cache_size": 16, 00:19:52.662 "task_count": 2048, 00:19:52.662 "sequence_count": 2048, 00:19:52.662 "buf_count": 2048 00:19:52.662 } 00:19:52.662 } 00:19:52.662 ] 00:19:52.662 }, 00:19:52.662 { 00:19:52.662 "subsystem": "bdev", 00:19:52.662 "config": [ 00:19:52.662 { 00:19:52.662 "method": "bdev_set_options", 00:19:52.662 "params": { 00:19:52.662 "bdev_io_pool_size": 65535, 00:19:52.662 "bdev_io_cache_size": 256, 00:19:52.662 "bdev_auto_examine": true, 00:19:52.662 "iobuf_small_cache_size": 128, 00:19:52.662 "iobuf_large_cache_size": 16 00:19:52.662 } 00:19:52.662 }, 00:19:52.662 { 00:19:52.662 "method": "bdev_raid_set_options", 00:19:52.662 "params": { 00:19:52.662 "process_window_size_kb": 1024, 00:19:52.662 "process_max_bandwidth_mb_sec": 0 00:19:52.662 } 00:19:52.662 }, 00:19:52.662 { 00:19:52.662 "method": "bdev_iscsi_set_options", 00:19:52.662 "params": { 00:19:52.662 "timeout_sec": 30 00:19:52.662 } 00:19:52.662 }, 00:19:52.662 { 00:19:52.662 "method": "bdev_nvme_set_options", 00:19:52.662 "params": { 00:19:52.662 "action_on_timeout": "none", 00:19:52.662 "timeout_us": 0, 00:19:52.662 "timeout_admin_us": 0, 00:19:52.662 "keep_alive_timeout_ms": 10000, 00:19:52.662 "arbitration_burst": 0, 00:19:52.662 "low_priority_weight": 0, 00:19:52.662 "medium_priority_weight": 0, 00:19:52.662 "high_priority_weight": 0, 00:19:52.662 "nvme_adminq_poll_period_us": 10000, 00:19:52.662 "nvme_ioq_poll_period_us": 0, 00:19:52.662 "io_queue_requests": 512, 00:19:52.662 "delay_cmd_submit": true, 00:19:52.662 "transport_retry_count": 4, 00:19:52.662 "bdev_retry_count": 3, 00:19:52.662 "transport_ack_timeout": 0, 00:19:52.662 "ctrlr_loss_timeout_sec": 0, 00:19:52.662 "reconnect_delay_sec": 0, 00:19:52.662 "fast_io_fail_timeout_sec": 0, 00:19:52.662 "disable_auto_failback": false, 00:19:52.662 "generate_uuids": false, 00:19:52.662 "transport_tos": 0, 00:19:52.662 "nvme_error_stat": false, 00:19:52.662 "rdma_srq_size": 0, 00:19:52.662 "io_path_stat": false, 00:19:52.662 "allow_accel_sequence": false, 00:19:52.662 "rdma_max_cq_size": 0, 00:19:52.662 "rdma_cm_event_timeout_ms": 0, 00:19:52.662 "dhchap_digests": [ 00:19:52.662 "sha256", 00:19:52.662 "sha384", 00:19:52.662 "sha512" 00:19:52.662 ], 00:19:52.662 "dhchap_dhgroups": [ 00:19:52.662 "null", 00:19:52.662 "ffdhe2048", 00:19:52.662 "ffdhe3072", 00:19:52.662 "ffdhe4096", 00:19:52.662 "ffdhe6144", 00:19:52.662 "ffdhe8192" 00:19:52.662 ] 00:19:52.662 } 00:19:52.662 }, 00:19:52.662 { 00:19:52.662 "method": "bdev_nvme_attach_controller", 00:19:52.662 "params": { 00:19:52.662 "name": "nvme0", 00:19:52.662 "trtype": "TCP", 00:19:52.662 "adrfam": "IPv4", 00:19:52.662 "traddr": "10.0.0.2", 00:19:52.662 "trsvcid": "4420", 00:19:52.662 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:52.662 "prchk_reftag": false, 00:19:52.662 "prchk_guard": false, 00:19:52.662 "ctrlr_loss_timeout_sec": 0, 00:19:52.662 "reconnect_delay_sec": 0, 00:19:52.662 "fast_io_fail_timeout_sec": 0, 00:19:52.662 "psk": "key0", 00:19:52.662 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:52.662 "hdgst": false, 00:19:52.662 "ddgst": false 00:19:52.662 } 00:19:52.662 }, 00:19:52.662 { 00:19:52.662 "method": "bdev_nvme_set_hotplug", 00:19:52.662 "params": { 00:19:52.662 "period_us": 100000, 00:19:52.662 "enable": false 00:19:52.662 } 00:19:52.662 }, 00:19:52.662 { 00:19:52.662 "method": "bdev_enable_histogram", 00:19:52.662 
"params": { 00:19:52.662 "name": "nvme0n1", 00:19:52.662 "enable": true 00:19:52.662 } 00:19:52.662 }, 00:19:52.662 { 00:19:52.662 "method": "bdev_wait_for_examine" 00:19:52.662 } 00:19:52.662 ] 00:19:52.662 }, 00:19:52.662 { 00:19:52.662 "subsystem": "nbd", 00:19:52.662 "config": [] 00:19:52.662 } 00:19:52.662 ] 00:19:52.662 }' 00:19:52.662 18:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # killprocess 3441309 00:19:52.662 18:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3441309 ']' 00:19:52.662 18:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3441309 00:19:52.662 18:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:52.662 18:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:52.662 18:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3441309 00:19:52.662 18:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:52.662 18:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:52.662 18:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3441309' 00:19:52.662 killing process with pid 3441309 00:19:52.662 18:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3441309 00:19:52.662 Received shutdown signal, test time was about 1.000000 seconds 00:19:52.662 00:19:52.662 Latency(us) 00:19:52.662 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:52.662 =================================================================================================================== 00:19:52.662 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:52.662 18:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3441309 00:19:52.921 18:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@269 -- # killprocess 3441079 00:19:52.921 18:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3441079 ']' 00:19:52.921 18:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3441079 00:19:52.921 18:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:52.921 18:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:52.921 18:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3441079 00:19:52.921 18:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:52.921 18:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:52.921 18:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3441079' 00:19:52.921 killing process with pid 3441079 00:19:52.921 18:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3441079 00:19:52.921 18:14:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3441079 00:19:53.180 18:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:19:53.180 18:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # 
timing_enter start_nvmf_tgt 00:19:53.180 18:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:53.180 18:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:19:53.180 "subsystems": [ 00:19:53.180 { 00:19:53.180 "subsystem": "keyring", 00:19:53.180 "config": [ 00:19:53.180 { 00:19:53.180 "method": "keyring_file_add_key", 00:19:53.180 "params": { 00:19:53.180 "name": "key0", 00:19:53.180 "path": "/tmp/tmp.YA65rcPQ6V" 00:19:53.180 } 00:19:53.180 } 00:19:53.180 ] 00:19:53.180 }, 00:19:53.180 { 00:19:53.180 "subsystem": "iobuf", 00:19:53.180 "config": [ 00:19:53.180 { 00:19:53.180 "method": "iobuf_set_options", 00:19:53.180 "params": { 00:19:53.180 "small_pool_count": 8192, 00:19:53.180 "large_pool_count": 1024, 00:19:53.180 "small_bufsize": 8192, 00:19:53.180 "large_bufsize": 135168 00:19:53.180 } 00:19:53.180 } 00:19:53.180 ] 00:19:53.180 }, 00:19:53.180 { 00:19:53.180 "subsystem": "sock", 00:19:53.180 "config": [ 00:19:53.180 { 00:19:53.180 "method": "sock_set_default_impl", 00:19:53.180 "params": { 00:19:53.180 "impl_name": "posix" 00:19:53.180 } 00:19:53.180 }, 00:19:53.180 { 00:19:53.180 "method": "sock_impl_set_options", 00:19:53.180 "params": { 00:19:53.180 "impl_name": "ssl", 00:19:53.180 "recv_buf_size": 4096, 00:19:53.180 "send_buf_size": 4096, 00:19:53.180 "enable_recv_pipe": true, 00:19:53.180 "enable_quickack": false, 00:19:53.180 "enable_placement_id": 0, 00:19:53.180 "enable_zerocopy_send_server": true, 00:19:53.180 "enable_zerocopy_send_client": false, 00:19:53.180 "zerocopy_threshold": 0, 00:19:53.180 "tls_version": 0, 00:19:53.180 "enable_ktls": false 00:19:53.180 } 00:19:53.180 }, 00:19:53.180 { 00:19:53.180 "method": "sock_impl_set_options", 00:19:53.180 "params": { 00:19:53.180 "impl_name": "posix", 00:19:53.180 "recv_buf_size": 2097152, 00:19:53.180 "send_buf_size": 2097152, 00:19:53.180 "enable_recv_pipe": true, 00:19:53.180 "enable_quickack": false, 00:19:53.180 "enable_placement_id": 0, 00:19:53.180 "enable_zerocopy_send_server": true, 00:19:53.180 "enable_zerocopy_send_client": false, 00:19:53.180 "zerocopy_threshold": 0, 00:19:53.180 "tls_version": 0, 00:19:53.180 "enable_ktls": false 00:19:53.180 } 00:19:53.180 } 00:19:53.180 ] 00:19:53.180 }, 00:19:53.180 { 00:19:53.180 "subsystem": "vmd", 00:19:53.180 "config": [] 00:19:53.180 }, 00:19:53.180 { 00:19:53.180 "subsystem": "accel", 00:19:53.180 "config": [ 00:19:53.180 { 00:19:53.180 "method": "accel_set_options", 00:19:53.180 "params": { 00:19:53.180 "small_cache_size": 128, 00:19:53.180 "large_cache_size": 16, 00:19:53.180 "task_count": 2048, 00:19:53.180 "sequence_count": 2048, 00:19:53.180 "buf_count": 2048 00:19:53.180 } 00:19:53.180 } 00:19:53.180 ] 00:19:53.180 }, 00:19:53.180 { 00:19:53.180 "subsystem": "bdev", 00:19:53.180 "config": [ 00:19:53.180 { 00:19:53.180 "method": "bdev_set_options", 00:19:53.180 "params": { 00:19:53.180 "bdev_io_pool_size": 65535, 00:19:53.180 "bdev_io_cache_size": 256, 00:19:53.180 "bdev_auto_examine": true, 00:19:53.180 "iobuf_small_cache_size": 128, 00:19:53.180 "iobuf_large_cache_size": 16 00:19:53.180 } 00:19:53.180 }, 00:19:53.180 { 00:19:53.180 "method": "bdev_raid_set_options", 00:19:53.180 "params": { 00:19:53.180 "process_window_size_kb": 1024, 00:19:53.180 "process_max_bandwidth_mb_sec": 0 00:19:53.180 } 00:19:53.180 }, 00:19:53.180 { 00:19:53.180 "method": "bdev_iscsi_set_options", 00:19:53.180 "params": { 00:19:53.180 "timeout_sec": 30 00:19:53.180 } 00:19:53.180 }, 00:19:53.180 { 00:19:53.180 
"method": "bdev_nvme_set_options", 00:19:53.180 "params": { 00:19:53.180 "action_on_timeout": "none", 00:19:53.180 "timeout_us": 0, 00:19:53.180 "timeout_admin_us": 0, 00:19:53.180 "keep_alive_timeout_ms": 10000, 00:19:53.180 "arbitration_burst": 0, 00:19:53.180 "low_priority_weight": 0, 00:19:53.180 "medium_priority_weight": 0, 00:19:53.180 "high_priority_weight": 0, 00:19:53.180 "nvme_adminq_poll_period_us": 10000, 00:19:53.180 "nvme_ioq_poll_period_us": 0, 00:19:53.180 "io_queue_requests": 0, 00:19:53.180 "delay_cmd_submit": true, 00:19:53.180 "transport_retry_count": 4, 00:19:53.180 "bdev_retry_count": 3, 00:19:53.180 "transport_ack_timeout": 0, 00:19:53.180 "ctrlr_loss_timeout_sec": 0, 00:19:53.180 "reconnect_delay_sec": 0, 00:19:53.180 "fast_io_fail_timeout_sec": 0, 00:19:53.180 "disable_auto_failback": false, 00:19:53.180 "generate_uuids": false, 00:19:53.180 "transport_tos": 0, 00:19:53.180 "nvme_error_stat": false, 00:19:53.180 "rdma_srq_size": 0, 00:19:53.180 "io_path_stat": false, 00:19:53.180 "allow_accel_sequence": false, 00:19:53.180 "rdma_max_cq_size": 0, 00:19:53.180 "rdma_cm_event_timeout_ms": 0, 00:19:53.180 "dhchap_digests": [ 00:19:53.180 "sha256", 00:19:53.180 "sha384", 00:19:53.180 "sha512" 00:19:53.181 ], 00:19:53.181 "dhchap_dhgroups": [ 00:19:53.181 "null", 00:19:53.181 "ffdhe2048", 00:19:53.181 "ffdhe3072", 00:19:53.181 "ffdhe4096", 00:19:53.181 "ffdhe6144", 00:19:53.181 "ffdhe8192" 00:19:53.181 ] 00:19:53.181 } 00:19:53.181 }, 00:19:53.181 { 00:19:53.181 "method": "bdev_nvme_set_hotplug", 00:19:53.181 "params": { 00:19:53.181 "period_us": 100000, 00:19:53.181 "enable": false 00:19:53.181 } 00:19:53.181 }, 00:19:53.181 { 00:19:53.181 "method": "bdev_malloc_create", 00:19:53.181 "params": { 00:19:53.181 "name": "malloc0", 00:19:53.181 "num_blocks": 8192, 00:19:53.181 "block_size": 4096, 00:19:53.181 "physical_block_size": 4096, 00:19:53.181 "uuid": "fb041ed8-fda3-411c-b53e-801674ce5646", 00:19:53.181 "optimal_io_boundary": 0, 00:19:53.181 "md_size": 0, 00:19:53.181 "dif_type": 0, 00:19:53.181 "dif_is_head_of_md": false, 00:19:53.181 "dif_pi_format": 0 00:19:53.181 } 00:19:53.181 }, 00:19:53.181 { 00:19:53.181 "method": "bdev_wait_for_examine" 00:19:53.181 } 00:19:53.181 ] 00:19:53.181 }, 00:19:53.181 { 00:19:53.181 "subsystem": "nbd", 00:19:53.181 "config": [] 00:19:53.181 }, 00:19:53.181 { 00:19:53.181 "subsystem": "scheduler", 00:19:53.181 "config": [ 00:19:53.181 { 00:19:53.181 "method": "framework_set_scheduler", 00:19:53.181 "params": { 00:19:53.181 "name": "static" 00:19:53.181 } 00:19:53.181 } 00:19:53.181 ] 00:19:53.181 }, 00:19:53.181 { 00:19:53.181 "subsystem": "nvmf", 00:19:53.181 "config": [ 00:19:53.181 { 00:19:53.181 "method": "nvmf_set_config", 00:19:53.181 "params": { 00:19:53.181 "discovery_filter": "match_any", 00:19:53.181 "admin_cmd_passthru": { 00:19:53.181 "identify_ctrlr": false 00:19:53.181 } 00:19:53.181 } 00:19:53.181 }, 00:19:53.181 { 00:19:53.181 "method": "nvmf_set_max_subsystems", 00:19:53.181 "params": { 00:19:53.181 "max_subsystems": 1024 00:19:53.181 } 00:19:53.181 }, 00:19:53.181 { 00:19:53.181 "method": "nvmf_set_crdt", 00:19:53.181 "params": { 00:19:53.181 "crdt1": 0, 00:19:53.181 "crdt2": 0, 00:19:53.181 "crdt3": 0 00:19:53.181 } 00:19:53.181 }, 00:19:53.181 { 00:19:53.181 "method": "nvmf_create_transport", 00:19:53.181 "params": { 00:19:53.181 "trtype": "TCP", 00:19:53.181 "max_queue_depth": 128, 00:19:53.181 "max_io_qpairs_per_ctrlr": 127, 00:19:53.181 "in_capsule_data_size": 4096, 00:19:53.181 "max_io_size": 131072, 
00:19:53.181 "io_unit_size": 131072, 00:19:53.181 "max_aq_depth": 128, 00:19:53.181 "num_shared_buffers": 511, 00:19:53.181 "buf_cache_size": 4294967295, 00:19:53.181 "dif_insert_or_strip": false, 00:19:53.181 "zcopy": false, 00:19:53.181 "c2h_success": false, 00:19:53.181 "sock_priority": 0, 00:19:53.181 "abort_timeout_sec": 1, 00:19:53.181 "ack_timeout": 0, 00:19:53.181 "data_wr_pool_size": 0 00:19:53.181 } 00:19:53.181 }, 00:19:53.181 { 00:19:53.181 "method": "nvmf_create_subsystem", 00:19:53.181 "params": { 00:19:53.181 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:53.181 "allow_any_host": false, 00:19:53.181 "serial_number": "00000000000000000000", 00:19:53.181 "model_number": "SPDK bdev Controller", 00:19:53.181 "max_namespaces": 32, 00:19:53.181 "min_cntlid": 1, 00:19:53.181 "max_cntlid": 65519, 00:19:53.181 "ana_reporting": false 00:19:53.181 } 00:19:53.181 }, 00:19:53.181 { 00:19:53.181 "method": "nvmf_subsystem_add_host", 00:19:53.181 "params": { 00:19:53.181 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:53.181 "host": "nqn.2016-06.io.spdk:host1", 00:19:53.181 "psk": "key0" 00:19:53.181 } 00:19:53.181 }, 00:19:53.181 { 00:19:53.181 "method": "nvmf_subsystem_add_ns", 00:19:53.181 "params": { 00:19:53.181 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:53.181 "namespace": { 00:19:53.181 "nsid": 1, 00:19:53.181 "bdev_name": "malloc0", 00:19:53.181 "nguid": "FB041ED8FDA3411CB53E801674CE5646", 00:19:53.181 "uuid": "fb041ed8-fda3-411c-b53e-801674ce5646", 00:19:53.181 "no_auto_visible": false 00:19:53.181 } 00:19:53.181 } 00:19:53.181 }, 00:19:53.181 { 00:19:53.181 "method": "nvmf_subsystem_add_listener", 00:19:53.181 "params": { 00:19:53.181 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:53.181 "listen_address": { 00:19:53.181 "trtype": "TCP", 00:19:53.181 "adrfam": "IPv4", 00:19:53.181 "traddr": "10.0.0.2", 00:19:53.181 "trsvcid": "4420" 00:19:53.181 }, 00:19:53.181 "secure_channel": false, 00:19:53.181 "sock_impl": "ssl" 00:19:53.181 } 00:19:53.181 } 00:19:53.181 ] 00:19:53.181 } 00:19:53.181 ] 00:19:53.181 }' 00:19:53.181 18:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:53.181 18:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3441805 00:19:53.181 18:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:19:53.181 18:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3441805 00:19:53.181 18:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3441805 ']' 00:19:53.181 18:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:53.181 18:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:53.181 18:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:53.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:53.181 18:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:53.181 18:14:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:53.181 [2024-07-24 18:14:46.204709] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 
00:19:53.181 [2024-07-24 18:14:46.204755] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:53.181 EAL: No free 2048 kB hugepages reported on node 1 00:19:53.181 [2024-07-24 18:14:46.262033] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:53.440 [2024-07-24 18:14:46.340074] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:53.440 [2024-07-24 18:14:46.340110] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:53.440 [2024-07-24 18:14:46.340117] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:53.440 [2024-07-24 18:14:46.340123] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:53.440 [2024-07-24 18:14:46.340128] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:53.440 [2024-07-24 18:14:46.340170] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:53.698 [2024-07-24 18:14:46.549229] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:53.698 [2024-07-24 18:14:46.591594] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:53.698 [2024-07-24 18:14:46.591791] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:53.956 18:14:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:53.956 18:14:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:53.956 18:14:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:53.956 18:14:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:53.956 18:14:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:53.956 18:14:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:54.215 18:14:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=3441966 00:19:54.215 18:14:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 3441966 /var/tmp/bdevperf.sock 00:19:54.215 18:14:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 3441966 ']' 00:19:54.215 18:14:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:54.215 18:14:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:54.215 18:14:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:19:54.215 18:14:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:54.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
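The closing phase replays the two save_config dumps captured earlier: the target JSON (tgtcfg) was echoed into nvmf_tgt through -c /dev/fd/62 above, and the bdevperf JSON (bperfcfg) is echoed into bdevperf through -c /dev/fd/63 below, so both applications come up with the key, the TLS listener and the attached controller already in place instead of being configured RPC by RPC. The shape of that round-trip, hedged to the commands visible in this log:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # capture the live state of both applications as replayable JSON
  tgtcfg=$("$SPDK/scripts/rpc.py" save_config)
  bperfcfg=$("$SPDK/scripts/rpc.py" -s /var/tmp/bdevperf.sock save_config)
  # restart both from the dumps; the <(...) substitutions are the /dev/fd/6x paths above
  sudo ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF \
      -c <(echo "$tgtcfg") &
  "$SPDK/build/examples/bdevperf" -m 2 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4k -w verify -t 1 -c <(echo "$bperfcfg") &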
00:19:54.215 18:14:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:54.215 18:14:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:19:54.215 "subsystems": [ 00:19:54.215 { 00:19:54.215 "subsystem": "keyring", 00:19:54.215 "config": [ 00:19:54.215 { 00:19:54.215 "method": "keyring_file_add_key", 00:19:54.215 "params": { 00:19:54.215 "name": "key0", 00:19:54.215 "path": "/tmp/tmp.YA65rcPQ6V" 00:19:54.215 } 00:19:54.215 } 00:19:54.215 ] 00:19:54.215 }, 00:19:54.215 { 00:19:54.215 "subsystem": "iobuf", 00:19:54.215 "config": [ 00:19:54.215 { 00:19:54.215 "method": "iobuf_set_options", 00:19:54.215 "params": { 00:19:54.215 "small_pool_count": 8192, 00:19:54.215 "large_pool_count": 1024, 00:19:54.215 "small_bufsize": 8192, 00:19:54.215 "large_bufsize": 135168 00:19:54.215 } 00:19:54.215 } 00:19:54.215 ] 00:19:54.215 }, 00:19:54.215 { 00:19:54.215 "subsystem": "sock", 00:19:54.215 "config": [ 00:19:54.215 { 00:19:54.215 "method": "sock_set_default_impl", 00:19:54.215 "params": { 00:19:54.215 "impl_name": "posix" 00:19:54.215 } 00:19:54.215 }, 00:19:54.215 { 00:19:54.215 "method": "sock_impl_set_options", 00:19:54.215 "params": { 00:19:54.215 "impl_name": "ssl", 00:19:54.215 "recv_buf_size": 4096, 00:19:54.215 "send_buf_size": 4096, 00:19:54.215 "enable_recv_pipe": true, 00:19:54.215 "enable_quickack": false, 00:19:54.215 "enable_placement_id": 0, 00:19:54.215 "enable_zerocopy_send_server": true, 00:19:54.215 "enable_zerocopy_send_client": false, 00:19:54.215 "zerocopy_threshold": 0, 00:19:54.215 "tls_version": 0, 00:19:54.215 "enable_ktls": false 00:19:54.215 } 00:19:54.215 }, 00:19:54.215 { 00:19:54.215 "method": "sock_impl_set_options", 00:19:54.215 "params": { 00:19:54.215 "impl_name": "posix", 00:19:54.215 "recv_buf_size": 2097152, 00:19:54.215 "send_buf_size": 2097152, 00:19:54.215 "enable_recv_pipe": true, 00:19:54.215 "enable_quickack": false, 00:19:54.215 "enable_placement_id": 0, 00:19:54.215 "enable_zerocopy_send_server": true, 00:19:54.215 "enable_zerocopy_send_client": false, 00:19:54.215 "zerocopy_threshold": 0, 00:19:54.215 "tls_version": 0, 00:19:54.215 "enable_ktls": false 00:19:54.215 } 00:19:54.215 } 00:19:54.215 ] 00:19:54.216 }, 00:19:54.216 { 00:19:54.216 "subsystem": "vmd", 00:19:54.216 "config": [] 00:19:54.216 }, 00:19:54.216 { 00:19:54.216 "subsystem": "accel", 00:19:54.216 "config": [ 00:19:54.216 { 00:19:54.216 "method": "accel_set_options", 00:19:54.216 "params": { 00:19:54.216 "small_cache_size": 128, 00:19:54.216 "large_cache_size": 16, 00:19:54.216 "task_count": 2048, 00:19:54.216 "sequence_count": 2048, 00:19:54.216 "buf_count": 2048 00:19:54.216 } 00:19:54.216 } 00:19:54.216 ] 00:19:54.216 }, 00:19:54.216 { 00:19:54.216 "subsystem": "bdev", 00:19:54.216 "config": [ 00:19:54.216 { 00:19:54.216 "method": "bdev_set_options", 00:19:54.216 "params": { 00:19:54.216 "bdev_io_pool_size": 65535, 00:19:54.216 "bdev_io_cache_size": 256, 00:19:54.216 "bdev_auto_examine": true, 00:19:54.216 "iobuf_small_cache_size": 128, 00:19:54.216 "iobuf_large_cache_size": 16 00:19:54.216 } 00:19:54.216 }, 00:19:54.216 { 00:19:54.216 "method": "bdev_raid_set_options", 00:19:54.216 "params": { 00:19:54.216 "process_window_size_kb": 1024, 00:19:54.216 "process_max_bandwidth_mb_sec": 0 00:19:54.216 } 00:19:54.216 }, 00:19:54.216 { 00:19:54.216 "method": "bdev_iscsi_set_options", 00:19:54.216 "params": { 00:19:54.216 "timeout_sec": 30 00:19:54.216 } 00:19:54.216 }, 00:19:54.216 { 00:19:54.216 "method": 
"bdev_nvme_set_options", 00:19:54.216 "params": { 00:19:54.216 "action_on_timeout": "none", 00:19:54.216 "timeout_us": 0, 00:19:54.216 "timeout_admin_us": 0, 00:19:54.216 "keep_alive_timeout_ms": 10000, 00:19:54.216 "arbitration_burst": 0, 00:19:54.216 "low_priority_weight": 0, 00:19:54.216 "medium_priority_weight": 0, 00:19:54.216 "high_priority_weight": 0, 00:19:54.216 "nvme_adminq_poll_period_us": 10000, 00:19:54.216 "nvme_ioq_poll_period_us": 0, 00:19:54.216 "io_queue_requests": 512, 00:19:54.216 "delay_cmd_submit": true, 00:19:54.216 "transport_retry_count": 4, 00:19:54.216 "bdev_retry_count": 3, 00:19:54.216 "transport_ack_timeout": 0, 00:19:54.216 "ctrlr_loss_timeout_sec": 0, 00:19:54.216 "reconnect_delay_sec": 0, 00:19:54.216 "fast_io_fail_timeout_sec": 0, 00:19:54.216 "disable_auto_failback": false, 00:19:54.216 "generate_uuids": false, 00:19:54.216 "transport_tos": 0, 00:19:54.216 "nvme_error_stat": false, 00:19:54.216 "rdma_srq_size": 0, 00:19:54.216 "io_path_stat": false, 00:19:54.216 "allow_accel_sequence": false, 00:19:54.216 "rdma_max_cq_size": 0, 00:19:54.216 "rdma_cm_event_timeout_ms": 0, 00:19:54.216 "dhchap_digests": [ 00:19:54.216 "sha256", 00:19:54.216 "sha384", 00:19:54.216 "sha512" 00:19:54.216 ], 00:19:54.216 "dhchap_dhgroups": [ 00:19:54.216 "null", 00:19:54.216 "ffdhe2048", 00:19:54.216 "ffdhe3072", 00:19:54.216 "ffdhe4096", 00:19:54.216 "ffdhe6144", 00:19:54.216 "ffdhe8192" 00:19:54.216 ] 00:19:54.216 } 00:19:54.216 }, 00:19:54.216 { 00:19:54.216 "method": "bdev_nvme_attach_controller", 00:19:54.216 "params": { 00:19:54.216 "name": "nvme0", 00:19:54.216 "trtype": "TCP", 00:19:54.216 "adrfam": "IPv4", 00:19:54.216 "traddr": "10.0.0.2", 00:19:54.216 "trsvcid": "4420", 00:19:54.216 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:54.216 "prchk_reftag": false, 00:19:54.216 "prchk_guard": false, 00:19:54.216 "ctrlr_loss_timeout_sec": 0, 00:19:54.216 "reconnect_delay_sec": 0, 00:19:54.216 "fast_io_fail_timeout_sec": 0, 00:19:54.216 "psk": "key0", 00:19:54.216 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:54.216 "hdgst": false, 00:19:54.216 "ddgst": false 00:19:54.216 } 00:19:54.216 }, 00:19:54.216 { 00:19:54.216 "method": "bdev_nvme_set_hotplug", 00:19:54.216 "params": { 00:19:54.216 "period_us": 100000, 00:19:54.216 "enable": false 00:19:54.216 } 00:19:54.216 }, 00:19:54.216 { 00:19:54.216 "method": "bdev_enable_histogram", 00:19:54.216 "params": { 00:19:54.216 "name": "nvme0n1", 00:19:54.216 "enable": true 00:19:54.216 } 00:19:54.216 }, 00:19:54.216 { 00:19:54.216 "method": "bdev_wait_for_examine" 00:19:54.216 } 00:19:54.216 ] 00:19:54.216 }, 00:19:54.216 { 00:19:54.216 "subsystem": "nbd", 00:19:54.216 "config": [] 00:19:54.216 } 00:19:54.216 ] 00:19:54.216 }' 00:19:54.216 18:14:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:54.216 [2024-07-24 18:14:47.082135] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 
00:19:54.216 [2024-07-24 18:14:47.082182] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3441966 ] 00:19:54.216 EAL: No free 2048 kB hugepages reported on node 1 00:19:54.216 [2024-07-24 18:14:47.137599] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:54.216 [2024-07-24 18:14:47.217810] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:54.475 [2024-07-24 18:14:47.368829] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:55.042 18:14:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:55.042 18:14:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:19:55.042 18:14:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:19:55.042 18:14:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:55.042 18:14:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.042 18:14:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@278 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:55.301 Running I/O for 1 seconds... 00:19:56.238 00:19:56.238 Latency(us) 00:19:56.238 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:56.238 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:56.238 Verification LBA range: start 0x0 length 0x2000 00:19:56.238 nvme0n1 : 1.01 5469.50 21.37 0.00 0.00 23234.22 5679.79 42192.70 00:19:56.238 =================================================================================================================== 00:19:56.238 Total : 5469.50 21.37 0.00 0.00 23234.22 5679.79 42192.70 00:19:56.238 0 00:19:56.238 18:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:19:56.238 18:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:19:56.238 18:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:19:56.238 18:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:19:56.238 18:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:19:56.238 18:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:19:56.238 18:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:56.238 18:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:19:56.238 18:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:19:56.238 18:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:19:56.238 18:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:56.238 nvmf_trace.0 00:19:56.238 18:14:49 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:19:56.238 18:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 3441966 00:19:56.238 18:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3441966 ']' 00:19:56.238 18:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3441966 00:19:56.238 18:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:56.238 18:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:56.238 18:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3441966 00:19:56.238 18:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:56.238 18:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:56.238 18:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3441966' 00:19:56.238 killing process with pid 3441966 00:19:56.238 18:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3441966 00:19:56.238 Received shutdown signal, test time was about 1.000000 seconds 00:19:56.238 00:19:56.238 Latency(us) 00:19:56.238 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:56.238 =================================================================================================================== 00:19:56.238 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:56.238 18:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3441966 00:19:56.497 18:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:19:56.497 18:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:56.497 18:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:19:56.497 18:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:56.497 18:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:19:56.497 18:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:56.497 18:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:56.497 rmmod nvme_tcp 00:19:56.497 rmmod nvme_fabrics 00:19:56.497 rmmod nvme_keyring 00:19:56.497 18:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:56.497 18:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:19:56.497 18:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:19:56.497 18:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 3441805 ']' 00:19:56.497 18:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 3441805 00:19:56.497 18:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 3441805 ']' 00:19:56.497 18:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 3441805 00:19:56.497 18:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:19:56.497 18:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:56.497 18:14:49 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3441805 00:19:56.756 18:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:56.756 18:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:56.756 18:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3441805' 00:19:56.756 killing process with pid 3441805 00:19:56.756 18:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 3441805 00:19:56.756 18:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 3441805 00:19:56.756 18:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:56.756 18:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:56.756 18:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:56.756 18:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:56.756 18:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:56.756 18:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:56.756 18:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:56.756 18:14:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:59.291 18:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:59.291 18:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.38E7TpkiFh /tmp/tmp.5XRBNSiM7K /tmp/tmp.YA65rcPQ6V 00:19:59.291 00:19:59.291 real 1m23.948s 00:19:59.291 user 2m8.972s 00:19:59.291 sys 0m28.950s 00:19:59.291 18:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:59.291 18:14:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:59.291 ************************************ 00:19:59.291 END TEST nvmf_tls 00:19:59.291 ************************************ 00:19:59.291 18:14:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:59.291 18:14:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:59.291 18:14:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:59.291 18:14:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:59.291 ************************************ 00:19:59.291 START TEST nvmf_fips 00:19:59.291 ************************************ 00:19:59.291 18:14:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:59.291 * Looking for test storage... 
00:19:59.291 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:19:59.291 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:59.291 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:19:59.291 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:59.291 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:59.291 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:59.291 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:59.291 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:59.291 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:59.291 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:59.291 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:59.291 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:59.291 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:59.291 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:19:59.291 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:19:59.291 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:59.291 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:59.291 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:59.291 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:59.291 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:59.291 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:59.291 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:59.291 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:59.291 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.291 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.291 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.291 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:19:59.291 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.291 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:19:59.291 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:59.291 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:59.291 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:59.291 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:59.291 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:59.291 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:59.291 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:59.291 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:59.291 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:59.291 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:19:59.291 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:19:59.291 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:19:59.291 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- 
# awk '{print $2}' 00:19:59.291 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:19:59.291 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:19:59.291 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:19:59.291 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:19:59.291 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:19:59.291 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:19:59.291 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:19:59.291 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:19:59.291 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:19:59.291 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:19:59.291 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:19:59.291 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:19:59.291 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:19:59.291 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:19:59.291 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:19:59.291 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:59.291 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:19:59.291 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:19:59.291 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:59.291 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:19:59.291 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:19:59.291 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:19:59.291 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:19:59.291 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:59.291 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:19:59.291 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:19:59.291 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:19:59.291 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:19:59.291 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:19:59.291 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:59.291 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:19:59.291 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:19:59.291 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:59.291 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:19:59.291 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:19:59.292 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:19:59.292 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:19:59.292 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:59.292 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:19:59.292 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:19:59.292 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:19:59.292 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:19:59.292 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:19:59.292 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:59.292 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:19:59.292 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:19:59.292 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:19:59.292 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:19:59.292 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:19:59.292 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:19:59.292 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:19:59.292 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:59.292 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:19:59.292 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:19:59.292 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:19:59.292 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:19:59.292 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:19:59.292 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:19:59.292 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:19:59.292 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:19:59.292 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:19:59.292 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:19:59.292 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:19:59.292 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:19:59.292 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@37 -- # cat 00:19:59.292 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:19:59.292 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:19:59.292 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:19:59.292 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:19:59.292 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:19:59.292 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:19:59.292 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:19:59.292 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:19:59.292 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:19:59.292 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:19:59.292 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # : 00:19:59.292 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:19:59.292 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:19:59.292 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:19:59.292 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:19:59.292 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:59.292 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:19:59.292 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:59.292 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:19:59.292 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:59.292 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:19:59.292 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:19:59.292 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:19:59.292 Error setting digest 00:19:59.292 00F261310D7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:19:59.292 00F261310D7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:19:59.292 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:19:59.292 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:59.292 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:59.292 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:59.292 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:19:59.292 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:59.292 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:59.292 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:59.292 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:59.292 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:59.292 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:59.292 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:59.292 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:59.292 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:59.292 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:59.292 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:19:59.292 18:14:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:04.563 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:04.563 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:20:04.563 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:04.563 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:04.563 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:04.563 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:04.563 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:04.563 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:20:04.563 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:04.563 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:20:04.563 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # 
local -ga e810 00:20:04.563 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:20:04.563 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:20:04.563 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:20:04.563 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:20:04.563 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:04.563 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:04.563 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:04.563 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:04.563 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:04.563 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:04.563 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:04.563 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:04.563 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:04.563 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:04.563 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:04.563 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:04.563 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:04.563 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:04.563 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:04.563 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:04.563 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:04.563 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:04.563 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:04.563 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:04.563 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:04.563 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:04.563 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:04.563 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:04.563 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:04.563 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:04.563 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 
00:20:04.563 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:04.563 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:04.563 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:04.563 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:04.564 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:04.564 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:04.564 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:04.564 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:04.564 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:04.564 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:04.564 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:04.564 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:04.564 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:04.564 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:04.564 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:04.564 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:04.564 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:04.564 Found net devices under 0000:86:00.0: cvl_0_0 00:20:04.564 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:04.564 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:04.564 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:04.564 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:04.564 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:04.564 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:04.564 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:04.564 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:04.564 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:04.564 Found net devices under 0000:86:00.1: cvl_0_1 00:20:04.564 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:04.564 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:04.564 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:20:04.564 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:04.564 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:04.564 
18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:04.564 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:04.564 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:04.564 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:04.564 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:04.564 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:04.564 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:04.564 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:04.564 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:04.564 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:04.564 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:04.564 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:04.564 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:04.564 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:04.564 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:04.564 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:04.564 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:04.564 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:04.564 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:04.564 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:04.564 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:04.564 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:04.564 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.169 ms 00:20:04.564 00:20:04.564 --- 10.0.0.2 ping statistics --- 00:20:04.564 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:04.564 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:20:04.564 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:04.564 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:04.564 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:20:04.564 00:20:04.564 --- 10.0.0.1 ping statistics --- 00:20:04.564 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:04.564 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:20:04.564 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:04.564 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:20:04.564 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:04.564 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:04.564 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:04.564 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:04.564 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:04.564 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:04.564 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:04.564 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:20:04.564 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:04.564 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:04.564 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:04.564 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=3445845 00:20:04.564 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:04.564 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 3445845 00:20:04.564 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 3445845 ']' 00:20:04.564 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:04.564 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:04.564 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:04.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:04.564 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:04.564 18:14:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:04.564 [2024-07-24 18:14:57.393482] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 
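The block above is the standard phy-mode TCP plumbing: the target-side port cvl_0_0 is moved into a fresh namespace cvl_0_0_ns_spdk and given 10.0.0.2/24, the initiator keeps cvl_0_1 at 10.0.0.1/24, port 4420 is opened in iptables, and both directions are ping-verified before nvmf_tgt is launched inside the namespace. Condensed from the commands in the trace (interface names are specific to this machine's e810 NICs):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator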
00:20:04.564 [2024-07-24 18:14:57.393529] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:04.564 EAL: No free 2048 kB hugepages reported on node 1 00:20:04.564 [2024-07-24 18:14:57.442486] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:04.564 [2024-07-24 18:14:57.518166] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:04.564 [2024-07-24 18:14:57.518199] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:04.564 [2024-07-24 18:14:57.518206] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:04.564 [2024-07-24 18:14:57.518212] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:04.564 [2024-07-24 18:14:57.518217] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:04.564 [2024-07-24 18:14:57.518233] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:05.131 18:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:05.131 18:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:20:05.132 18:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:05.132 18:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:05.132 18:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:05.132 18:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:05.390 18:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:20:05.390 18:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:05.390 18:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:05.390 18:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:05.390 18:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:05.391 18:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:05.391 18:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:05.391 18:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:05.391 [2024-07-24 18:14:58.368130] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:05.391 [2024-07-24 18:14:58.384134] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:05.391 [2024-07-24 18:14:58.384306] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:05.391 
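The key configured next is a TLS PSK in the NVMe/TCP interchange format, NVMeTLSkey-1:01:<base64 secret>:, which fips.sh writes to key.txt with 0600 permissions and hands to the target as a file path rather than a keyring name; that is why the warning immediately below flags the PSK-path mechanism as deprecated, scheduled for removal in v24.09. The provisioning steps, copied from the trace:

    key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
    echo -n "$key" > key.txt    # echo -n: the key file must not carry a trailing newline
    chmod 0600 key.txt          # restrict permissions on the secret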
[2024-07-24 18:14:58.412376] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:05.391 malloc0 00:20:05.391 18:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:05.391 18:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=3445986 00:20:05.391 18:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 3445986 /var/tmp/bdevperf.sock 00:20:05.391 18:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:05.391 18:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 3445986 ']' 00:20:05.391 18:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:05.391 18:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:05.391 18:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:05.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:05.391 18:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:05.391 18:14:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:05.649 [2024-07-24 18:14:58.494266] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 00:20:05.650 [2024-07-24 18:14:58.494322] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3445986 ] 00:20:05.650 EAL: No free 2048 kB hugepages reported on node 1 00:20:05.650 [2024-07-24 18:14:58.544528] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:05.650 [2024-07-24 18:14:58.618196] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:06.217 18:14:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:06.217 18:14:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:20:06.217 18:14:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:06.476 [2024-07-24 18:14:59.424343] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:06.476 [2024-07-24 18:14:59.424419] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:06.476 TLSTESTn1 00:20:06.476 18:14:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:06.734 Running I/O for 10 seconds... 
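On the initiator side the same key file is consumed directly by bdev_nvme_attach_controller via --psk, producing the TLSTESTn1 namespace bdev that the verify workload then drives over the RPC socket. The two RPCs, condensed from the trace with the long workspace prefixes shortened to repo-relative paths:

    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk test/nvmf/fips/key.txt
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests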
00:20:16.736 00:20:16.736 Latency(us) 00:20:16.736 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:16.736 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:16.736 Verification LBA range: start 0x0 length 0x2000 00:20:16.736 TLSTESTn1 : 10.02 5628.91 21.99 0.00 0.00 22704.71 6616.02 29459.99 00:20:16.736 =================================================================================================================== 00:20:16.736 Total : 5628.91 21.99 0.00 0.00 22704.71 6616.02 29459.99 00:20:16.736 0 00:20:16.736 18:15:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:20:16.736 18:15:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:20:16.736 18:15:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:20:16.736 18:15:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:20:16.736 18:15:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:20:16.736 18:15:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:16.736 18:15:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:20:16.736 18:15:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:20:16.736 18:15:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:20:16.736 18:15:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:16.736 nvmf_trace.0 00:20:16.736 18:15:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:20:16.736 18:15:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 3445986 00:20:16.736 18:15:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 3445986 ']' 00:20:16.736 18:15:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 3445986 00:20:16.736 18:15:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:20:16.736 18:15:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:16.736 18:15:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3445986 00:20:16.736 18:15:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:16.736 18:15:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:16.736 18:15:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3445986' 00:20:16.736 killing process with pid 3445986 00:20:16.736 18:15:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 3445986 00:20:16.736 Received shutdown signal, test time was about 10.000000 seconds 00:20:16.736 00:20:16.736 Latency(us) 00:20:16.736 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:16.736 =================================================================================================================== 00:20:16.736 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:16.736 
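As a sanity check on the result table, the MiB/s column is just IOPS times the 4 KiB I/O size: 5628.91 IOPS x 4096 B ~= 23.06 MB/s = 21.99 MiB/s here, and likewise 5469.50 IOPS x 4096 B ~= 21.37 MiB/s for the nvmf_tls run earlier, so both TLS data paths sustained roughly 5.5k verify IOPS at queue depth 128. The second, all-zero Latency block printed alongside the shutdown message below appears to be bdevperf's post-stop summary, not a failed measurement.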
[2024-07-24 18:15:09.794333] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:16.736 18:15:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 3445986 00:20:16.996 18:15:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:20:16.996 18:15:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:16.996 18:15:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:20:16.996 18:15:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:16.996 18:15:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:20:16.996 18:15:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:16.996 18:15:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:16.996 rmmod nvme_tcp 00:20:16.996 rmmod nvme_fabrics 00:20:16.996 rmmod nvme_keyring 00:20:16.996 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:16.996 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:20:16.996 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:20:16.996 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 3445845 ']' 00:20:16.996 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 3445845 00:20:16.996 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 3445845 ']' 00:20:16.996 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 3445845 00:20:16.996 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:20:16.996 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:16.996 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3445845 00:20:17.255 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:17.255 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:17.255 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3445845' 00:20:17.255 killing process with pid 3445845 00:20:17.255 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 3445845 00:20:17.255 [2024-07-24 18:15:10.087012] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:17.255 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 3445845 00:20:17.255 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:17.255 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:17.255 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:17.255 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:17.255 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:17.255 18:15:10 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:17.255 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:17.255 18:15:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:19.790 18:15:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:19.790 18:15:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:19.790 00:20:19.790 real 0m20.417s 00:20:19.790 user 0m22.413s 00:20:19.790 sys 0m8.717s 00:20:19.790 18:15:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:19.790 18:15:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:19.790 ************************************ 00:20:19.790 END TEST nvmf_fips 00:20:19.790 ************************************ 00:20:19.790 18:15:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@45 -- # '[' 0 -eq 1 ']' 00:20:19.790 18:15:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@51 -- # [[ phy == phy ]] 00:20:19.790 18:15:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@52 -- # '[' tcp = tcp ']' 00:20:19.790 18:15:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # gather_supported_nvmf_pci_devs 00:20:19.790 18:15:12 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@285 -- # xtrace_disable 00:20:19.790 18:15:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:25.059 18:15:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:25.059 18:15:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # pci_devs=() 00:20:25.059 18:15:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:25.059 18:15:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:25.059 18:15:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:25.059 18:15:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:25.059 18:15:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:25.059 18:15:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # net_devs=() 00:20:25.059 18:15:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:25.059 18:15:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # e810=() 00:20:25.059 18:15:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # local -ga e810 00:20:25.059 18:15:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # x722=() 00:20:25.059 18:15:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # local -ga x722 00:20:25.059 18:15:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # mlx=() 00:20:25.059 18:15:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # local -ga mlx 00:20:25.059 18:15:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:25.059 18:15:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:25.059 18:15:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:25.059 18:15:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:25.059 
18:15:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:25.059 18:15:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:25.059 18:15:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:25.059 18:15:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:25.059 18:15:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:25.059 18:15:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:25.059 18:15:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:25.059 18:15:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:25.059 18:15:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:25.059 18:15:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:25.059 18:15:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:25.059 18:15:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:25.059 18:15:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:25.059 18:15:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:25.059 18:15:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:25.059 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:25.059 18:15:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:25.059 18:15:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:25.059 18:15:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:25.059 18:15:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:25.059 18:15:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:25.059 18:15:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:25.059 18:15:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:25.059 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:25.059 18:15:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:25.059 18:15:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:25.059 18:15:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:25.059 18:15:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:25.059 18:15:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:25.059 18:15:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:25.059 18:15:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:25.059 18:15:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:25.059 18:15:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:25.059 18:15:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:25.059 18:15:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 
00:20:25.059 18:15:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:25.059 18:15:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:25.059 18:15:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:25.059 18:15:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:25.059 18:15:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:25.059 Found net devices under 0000:86:00.0: cvl_0_0 00:20:25.059 18:15:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:25.059 18:15:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:25.059 18:15:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:25.059 18:15:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:25.059 18:15:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:25.059 18:15:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:25.059 18:15:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:25.059 18:15:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:25.059 18:15:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:25.059 Found net devices under 0000:86:00.1: cvl_0_1 00:20:25.059 18:15:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:25.059 18:15:17 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:25.059 18:15:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:25.059 18:15:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # (( 2 > 0 )) 00:20:25.059 18:15:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:25.060 18:15:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:25.060 18:15:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:25.060 18:15:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:25.060 ************************************ 00:20:25.060 START TEST nvmf_perf_adq 00:20:25.060 ************************************ 00:20:25.060 18:15:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:25.060 * Looking for test storage... 
00:20:25.060 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:25.060 18:15:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:25.060 18:15:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:20:25.060 18:15:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:25.060 18:15:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:25.060 18:15:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:25.060 18:15:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:25.060 18:15:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:25.060 18:15:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:25.060 18:15:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:25.060 18:15:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:25.060 18:15:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:25.060 18:15:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:25.060 18:15:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:20:25.060 18:15:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:20:25.060 18:15:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:25.060 18:15:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:25.060 18:15:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:25.060 18:15:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:25.060 18:15:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:25.060 18:15:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:25.060 18:15:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:25.060 18:15:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:25.060 18:15:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:25.060 18:15:17 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:25.060 18:15:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:25.060 18:15:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:20:25.060 18:15:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:25.060 18:15:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:20:25.060 18:15:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:25.060 18:15:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:25.060 18:15:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:25.060 18:15:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:25.060 18:15:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:25.060 18:15:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:25.060 18:15:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:25.060 18:15:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:25.060 18:15:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:20:25.060 18:15:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:20:25.060 18:15:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:30.331 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # 
local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:30.331 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:20:30.331 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:30.331 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:30.331 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:30.331 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:30.331 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:30.331 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:20:30.331 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:30.331 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:20:30.331 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:20:30.331 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:20:30.331 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:20:30.331 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:20:30.331 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:20:30.331 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:30.331 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:30.331 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:30.331 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:30.331 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:30.331 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:30.331 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:30.331 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:30.331 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:30.331 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:30.331 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:30.331 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:30.331 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:30.331 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:30.331 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:30.331 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:30.331 18:15:22 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:30.331 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:30.331 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:30.331 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:30.331 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:30.331 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:30.331 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:30.331 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:30.331 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:30.331 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:30.331 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:30.331 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:30.331 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:30.331 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:30.331 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:30.331 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:30.331 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:30.331 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:30.331 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:30.331 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:30.331 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:30.331 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:30.331 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:30.331 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:30.331 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:30.331 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:30.331 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:30.331 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:30.331 Found net devices under 0000:86:00.0: cvl_0_0 00:20:30.331 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:30.331 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:30.331 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:20:30.331 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:30.331 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:30.331 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:30.331 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:30.331 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:30.331 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:30.331 Found net devices under 0000:86:00.1: cvl_0_1 00:20:30.331 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:30.331 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:30.331 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:30.331 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:20:30.331 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:30.331 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:20:30.331 18:15:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:20:30.589 18:15:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:20:33.122 18:15:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:20:38.399 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:20:38.399 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:38.399 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:38.399 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:38.399 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:38.399 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:38.399 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:38.399 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:38.399 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:38.399 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:38.399 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:38.399 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:20:38.399 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:38.399 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:38.399 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 
00:20:38.399 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:38.399 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:38.399 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:38.399 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:38.399 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:38.399 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:20:38.399 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:38.399 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:20:38.399 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:20:38.399 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:20:38.399 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:20:38.399 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:20:38.399 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:20:38.400 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:38.400 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:38.400 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:38.400 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:38.400 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:38.400 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:38.400 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:38.400 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:38.400 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:38.400 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:38.400 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:38.400 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:38.400 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:38.400 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:38.400 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:38.400 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:38.400 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:38.400 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:38.400 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:38.400 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:38.400 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:38.400 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:38.400 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:38.400 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:38.400 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:38.400 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:38.400 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:38.400 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:38.400 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:38.400 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:38.400 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:38.400 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:38.400 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:38.400 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:38.400 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:38.400 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:38.400 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:38.400 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:38.400 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:38.400 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:38.400 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:38.400 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:38.400 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:38.400 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:38.400 Found net devices under 0000:86:00.0: cvl_0_0 00:20:38.400 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:38.400 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:38.400 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:38.400 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:38.400 18:15:30 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:38.400 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:38.400 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:38.400 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:38.400 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:38.400 Found net devices under 0000:86:00.1: cvl_0_1 00:20:38.400 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:38.400 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:38.400 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:20:38.400 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:38.400 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:38.400 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:38.400 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:38.400 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:38.400 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:38.400 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:38.400 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:38.400 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:38.400 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:38.400 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:38.400 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:38.400 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:38.400 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:38.400 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:38.400 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:38.400 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:38.400 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:38.400 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:38.400 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:38.400 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 
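The namespace plumbing in this stretch of the trace is easy to lose among the xtrace lines. In outline, nvmf_tcp_init puts the target-side port (cvl_0_0, 10.0.0.2/24) into a private network namespace and leaves the initiator-side port (cvl_0_1, 10.0.0.1/24) in the root namespace; a sketch condensed from the commands just logged:

    # target side: cvl_0_0 moves into its own namespace and gets the target address
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # initiator side: cvl_0_1 stays in the root namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip link set cvl_0_1 up

Because NET_TYPE=phy, the two cvl ports are physical E810 functions (presumably cabled back to back on this rig), so initiator/target traffic crosses the real NIC rather than a veth pair; the two pings that follow verify the path in both directions.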
00:20:38.400 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:20:38.400 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:20:38.400 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:20:38.400 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.277 ms
00:20:38.400 
00:20:38.400 --- 10.0.0.2 ping statistics ---
00:20:38.400 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:38.400 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms
00:20:38.400 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:20:38.400 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:20:38.400 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms
00:20:38.400 
00:20:38.400 --- 10.0.0.1 ping statistics ---
00:20:38.400 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:38.400 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms
00:20:38.400 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:20:38.400 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0
00:20:38.400 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:20:38.400 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:20:38.400 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:20:38.400 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:20:38.400 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:20:38.400 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:20:38.400 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:20:38.400 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc
00:20:38.400 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:20:38.400 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable
00:20:38.400 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:20:38.400 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=3456099
00:20:38.400 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 3456099
00:20:38.400 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 3456099 ']'
00:20:38.400 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:38.400 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100
00:20:38.400 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
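From here the target is launched and configured entirely over its UNIX-domain RPC socket. A sketch condensed from the commands recorded in the lines that follow; $SPDK as before, and rpc.py talks to the default /var/tmp/spdk.sock (the script's rpc_cmd wrapper does the same):

    # $SPDK = workspace spdk checkout (shorthand for this sketch only)
    # launch nvmf_tgt inside the target namespace on 4 cores, deferring subsystem init
    ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &

    # socket options first (placement id 0 in this run), then finish framework startup
    $SPDK/scripts/rpc.py sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix
    $SPDK/scripts/rpc.py framework_start_init

    # TCP transport, a 64 MiB malloc bdev, and a subsystem listening on 10.0.0.2:4420
    $SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
    $SPDK/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
    $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420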
00:20:38.401 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:38.401 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:38.401 18:15:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:38.401 [2024-07-24 18:15:31.000853] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 00:20:38.401 [2024-07-24 18:15:31.000898] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:38.401 EAL: No free 2048 kB hugepages reported on node 1 00:20:38.401 [2024-07-24 18:15:31.058637] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:38.401 [2024-07-24 18:15:31.140666] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:38.401 [2024-07-24 18:15:31.140697] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:38.401 [2024-07-24 18:15:31.140706] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:38.401 [2024-07-24 18:15:31.140712] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:38.401 [2024-07-24 18:15:31.140717] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:38.401 [2024-07-24 18:15:31.140761] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:38.401 [2024-07-24 18:15:31.140779] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:38.401 [2024-07-24 18:15:31.140869] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:38.401 [2024-07-24 18:15:31.140870] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:38.970 18:15:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:38.970 18:15:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:20:38.970 18:15:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:38.970 18:15:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:38.970 18:15:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:38.970 18:15:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:38.970 18:15:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:20:38.970 18:15:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:38.970 18:15:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:38.970 18:15:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.970 18:15:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:38.970 18:15:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.970 
18:15:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:38.970 18:15:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:20:38.970 18:15:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.970 18:15:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:38.970 18:15:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.970 18:15:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:38.970 18:15:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.970 18:15:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:38.970 18:15:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.970 18:15:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:20:38.970 18:15:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.970 18:15:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:38.970 [2024-07-24 18:15:31.988334] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:38.970 18:15:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.970 18:15:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:38.970 18:15:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.970 18:15:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:38.970 Malloc1 00:20:38.970 18:15:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.970 18:15:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:38.970 18:15:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.970 18:15:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:38.970 18:15:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.970 18:15:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:38.970 18:15:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.970 18:15:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:38.970 18:15:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.970 18:15:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:38.970 18:15:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.970 18:15:32 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:20:38.970 [2024-07-24 18:15:32.040090] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:20:38.970 18:15:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:38.970 18:15:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=3456348
00:20:38.970 18:15:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2
00:20:38.970 18:15:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:20:39.229 EAL: No free 2048 kB hugepages reported on node 1
00:20:41.138 18:15:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats
00:20:41.138 18:15:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:41.138 18:15:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:20:41.138 18:15:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:41.138 18:15:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{
00:20:41.138 "tick_rate": 2100000000,
00:20:41.138 "poll_groups": [
00:20:41.138 {
00:20:41.138 "name": "nvmf_tgt_poll_group_000",
00:20:41.138 "admin_qpairs": 1,
00:20:41.138 "io_qpairs": 1,
00:20:41.138 "current_admin_qpairs": 1,
00:20:41.138 "current_io_qpairs": 1,
00:20:41.138 "pending_bdev_io": 0,
00:20:41.138 "completed_nvme_io": 20690,
00:20:41.138 "transports": [
00:20:41.138 {
00:20:41.138 "trtype": "TCP"
00:20:41.138 }
00:20:41.138 ]
00:20:41.138 },
00:20:41.138 {
00:20:41.138 "name": "nvmf_tgt_poll_group_001",
00:20:41.138 "admin_qpairs": 0,
00:20:41.138 "io_qpairs": 1,
00:20:41.138 "current_admin_qpairs": 0,
00:20:41.138 "current_io_qpairs": 1,
00:20:41.138 "pending_bdev_io": 0,
00:20:41.138 "completed_nvme_io": 20708,
00:20:41.138 "transports": [
00:20:41.138 {
00:20:41.138 "trtype": "TCP"
00:20:41.138 }
00:20:41.138 ]
00:20:41.138 },
00:20:41.138 {
00:20:41.138 "name": "nvmf_tgt_poll_group_002",
00:20:41.138 "admin_qpairs": 0,
00:20:41.138 "io_qpairs": 1,
00:20:41.138 "current_admin_qpairs": 0,
00:20:41.138 "current_io_qpairs": 1,
00:20:41.138 "pending_bdev_io": 0,
00:20:41.138 "completed_nvme_io": 20745,
00:20:41.138 "transports": [
00:20:41.138 {
00:20:41.138 "trtype": "TCP"
00:20:41.138 }
00:20:41.138 ]
00:20:41.138 },
00:20:41.138 {
00:20:41.138 "name": "nvmf_tgt_poll_group_003",
00:20:41.138 "admin_qpairs": 0,
00:20:41.138 "io_qpairs": 1,
00:20:41.138 "current_admin_qpairs": 0,
00:20:41.138 "current_io_qpairs": 1,
00:20:41.138 "pending_bdev_io": 0,
00:20:41.138 "completed_nvme_io": 20754,
00:20:41.138 "transports": [
00:20:41.138 {
00:20:41.138 "trtype": "TCP"
00:20:41.138 }
00:20:41.138 ]
00:20:41.138 }
00:20:41.138 ]
00:20:41.138 }'
00:20:41.138 18:15:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length'
00:20:41.138 18:15:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l
00:20:41.138 18:15:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4
00:20:41.138 18:15:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]]
00:20:41.138 18:15:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 3456348
00:20:49.257 Initializing NVMe Controllers
00:20:49.257 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:20:49.257 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4
00:20:49.257 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5
00:20:49.257 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6
00:20:49.257 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7
00:20:49.257 Initialization complete. Launching workers.
00:20:49.257 ========================================================
00:20:49.257 Latency(us)
00:20:49.257 Device Information : IOPS MiB/s Average min max
00:20:49.257 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10616.00 41.47 6030.37 1677.80 9269.39
00:20:49.257 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10764.10 42.05 5945.30 1951.98 9858.89
00:20:49.257 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10762.00 42.04 5946.91 2063.07 9880.67
00:20:49.257 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10674.20 41.70 5996.83 2168.95 11967.83
00:20:49.257 ========================================================
00:20:49.257 Total : 42816.29 167.25 5979.64 1677.80 11967.83
00:20:49.257 
00:20:49.257 18:15:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini
00:20:49.257 18:15:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup
00:20:49.257 18:15:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync
00:20:49.257 18:15:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:20:49.257 18:15:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e
00:20:49.257 18:15:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20}
00:20:49.257 18:15:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:20:49.257 rmmod nvme_tcp
00:20:49.257 rmmod nvme_fabrics
00:20:49.257 rmmod nvme_keyring
00:20:49.257 18:15:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:20:49.257 18:15:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e
00:20:49.257 18:15:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0
00:20:49.257 18:15:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 3456099 ']'
00:20:49.257 18:15:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 3456099
00:20:49.257 18:15:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 3456099 ']'
00:20:49.257 18:15:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 3456099
00:20:49.257 18:15:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname
00:20:49.257 18:15:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:20:49.257 18:15:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3456099
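The pass/fail check buried in the trace above deserves a gloss: with core mask 0xF the target runs four poll groups, and the test requires that every group ends up owning exactly one I/O qpair, i.e. that the four perf connections were spread evenly. A sketch of the gist of that rpc_cmd/jq pipeline (the script's actual failure handling differs; $SPDK as before):

    # one output line per poll group that currently owns exactly one I/O qpair
    count=$($SPDK/scripts/rpc.py nvmf_get_stats \
        | jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' \
        | wc -l)
    [[ $count -ne 4 ]] && exit 1   # all four groups must qualify (simplified failure path)

The stats bear it out: each group reports current_io_qpairs of 1 with roughly 20.7k completed I/Os, and the perf table shows the four initiator cores within about 1.4% of one another, totaling 42816 IOPS (about 167 MiB/s) at roughly 6 ms average latency.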
00:20:49.257 18:15:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:49.257 18:15:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:49.257 18:15:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3456099' 00:20:49.257 killing process with pid 3456099 00:20:49.257 18:15:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 3456099 00:20:49.257 18:15:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 3456099 00:20:49.516 18:15:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:49.516 18:15:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:49.516 18:15:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:49.516 18:15:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:49.516 18:15:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:49.516 18:15:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:49.516 18:15:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:49.516 18:15:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:52.069 18:15:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:52.069 18:15:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:20:52.069 18:15:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:20:52.637 18:15:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:20:54.543 18:15:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:20:59.897 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:20:59.897 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:59.897 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:59.897 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:59.897 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:59.897 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:59.897 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:59.897 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:59.897 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:59.897 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:59.897 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:59.897 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:20:59.897 18:15:52 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:59.897 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:59.897 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:20:59.897 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:59.897 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:59.897 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:59.897 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:59.897 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:59.897 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:20:59.897 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:59.897 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:20:59.897 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:20:59.897 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:20:59.897 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:20:59.897 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:20:59.897 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:20:59.897 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:59.897 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:59.897 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:59.897 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:59.897 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:59.897 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:59.897 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:59.897 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:59.897 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:59.897 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:59.897 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:59.897 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:59.897 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:59.897 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:59.897 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:59.897 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:59.897 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:59.897 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:59.897 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:59.897 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:59.897 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:59.897 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:59.897 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:59.897 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:59.897 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:59.897 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:59.897 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:59.897 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:59.897 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:59.898 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:59.898 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:59.898 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:59.898 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:59.898 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:59.898 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:59.898 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:59.898 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:59.898 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:59.898 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:59.898 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:59.898 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:59.898 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:59.898 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:59.898 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:59.898 Found net devices under 0000:86:00.0: cvl_0_0 00:20:59.898 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:59.898 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:59.898 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:59.898 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:59.898 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:59.898 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:59.898 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:59.898 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:59.898 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:59.898 Found net devices under 0000:86:00.1: cvl_0_1 00:20:59.898 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:59.898 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:59.898 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:20:59.898 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:59.898 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:59.898 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:59.898 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:59.898 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:59.898 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:59.898 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:59.898 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:59.898 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:59.898 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:59.898 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:59.898 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:59.898 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:59.898 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:59.898 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:59.898 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:59.898 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:59.898 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:59.898 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:20:59.898 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:20:59.898 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:20:59.898 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:20:59.898 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:20:59.898 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:20:59.898 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms
00:20:59.898
00:20:59.898 --- 10.0.0.2 ping statistics ---
00:20:59.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:59.898 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms
00:20:59.898 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:20:59.898 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:20:59.898 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms
00:20:59.898
00:20:59.898 --- 10.0.0.1 ping statistics ---
00:20:59.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:59.898 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms
00:20:59.898 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:20:59.898 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0
00:20:59.898 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:20:59.898 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:20:59.898 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:20:59.898 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:20:59.898 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:20:59.898 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:20:59.898 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:20:59.898 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver
00:20:59.898 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on
00:20:59.898 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
00:20:59.898 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1
00:20:59.898 net.core.busy_poll = 1
00:20:59.898 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1
00:20:59.898 net.core.busy_read = 1
00:20:59.898 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc
00:20:59.898 18:15:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
00:21:00.157 18:15:53
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:21:00.157 18:15:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:21:00.157 18:15:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:21:00.157 18:15:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:00.157 18:15:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:00.157 18:15:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:00.157 18:15:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:00.157 18:15:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=3460135 00:21:00.157 18:15:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 3460135 00:21:00.157 18:15:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 3460135 ']' 00:21:00.157 18:15:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:00.157 18:15:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:00.157 18:15:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:00.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:00.158 18:15:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:00.158 18:15:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:00.158 18:15:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:00.158 [2024-07-24 18:15:53.189816] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 00:21:00.158 [2024-07-24 18:15:53.189861] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:00.158 EAL: No free 2048 kB hugepages reported on node 1 00:21:00.416 [2024-07-24 18:15:53.249133] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:00.416 [2024-07-24 18:15:53.328161] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:00.416 [2024-07-24 18:15:53.328195] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:00.416 [2024-07-24 18:15:53.328202] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:00.416 [2024-07-24 18:15:53.328207] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:21:00.416 [2024-07-24 18:15:53.328212] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:00.416 [2024-07-24 18:15:53.328485] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:00.416 [2024-07-24 18:15:53.328584] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:00.416 [2024-07-24 18:15:53.328603] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:00.416 [2024-07-24 18:15:53.328605] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:00.984 18:15:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:00.984 18:15:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:21:00.984 18:15:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:00.984 18:15:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:00.984 18:15:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:00.984 18:15:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:00.984 18:15:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:21:00.984 18:15:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:00.984 18:15:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:00.984 18:15:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.984 18:15:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:00.984 18:15:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.244 18:15:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:01.244 18:15:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:21:01.244 18:15:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.244 18:15:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:01.244 18:15:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.244 18:15:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:01.244 18:15:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.244 18:15:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:01.244 18:15:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.244 18:15:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:21:01.244 18:15:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.244 18:15:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:01.244 [2024-07-24 18:15:54.183384] tcp.c: 677:nvmf_tcp_create: *NOTICE*: 
*** TCP Transport Init *** 00:21:01.244 18:15:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.244 18:15:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:01.244 18:15:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.244 18:15:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:01.244 Malloc1 00:21:01.244 18:15:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.244 18:15:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:01.244 18:15:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.244 18:15:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:01.244 18:15:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.244 18:15:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:01.244 18:15:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.244 18:15:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:01.244 18:15:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.244 18:15:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:01.244 18:15:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.244 18:15:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:01.244 [2024-07-24 18:15:54.230703] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:01.244 18:15:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.244 18:15:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=3460388 00:21:01.244 18:15:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:21:01.244 18:15:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:01.244 EAL: No free 2048 kB hugepages reported on node 1 00:21:03.780 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:21:03.780 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.780 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:03.780 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.780 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:21:03.780 "tick_rate": 2100000000, 00:21:03.780 "poll_groups": [ 00:21:03.780 { 00:21:03.780 "name": 
"nvmf_tgt_poll_group_000", 00:21:03.780 "admin_qpairs": 1, 00:21:03.780 "io_qpairs": 1, 00:21:03.780 "current_admin_qpairs": 1, 00:21:03.780 "current_io_qpairs": 1, 00:21:03.780 "pending_bdev_io": 0, 00:21:03.780 "completed_nvme_io": 28043, 00:21:03.780 "transports": [ 00:21:03.780 { 00:21:03.780 "trtype": "TCP" 00:21:03.780 } 00:21:03.780 ] 00:21:03.780 }, 00:21:03.780 { 00:21:03.780 "name": "nvmf_tgt_poll_group_001", 00:21:03.780 "admin_qpairs": 0, 00:21:03.780 "io_qpairs": 3, 00:21:03.780 "current_admin_qpairs": 0, 00:21:03.780 "current_io_qpairs": 3, 00:21:03.780 "pending_bdev_io": 0, 00:21:03.780 "completed_nvme_io": 30695, 00:21:03.780 "transports": [ 00:21:03.780 { 00:21:03.780 "trtype": "TCP" 00:21:03.780 } 00:21:03.780 ] 00:21:03.780 }, 00:21:03.780 { 00:21:03.780 "name": "nvmf_tgt_poll_group_002", 00:21:03.780 "admin_qpairs": 0, 00:21:03.780 "io_qpairs": 0, 00:21:03.780 "current_admin_qpairs": 0, 00:21:03.780 "current_io_qpairs": 0, 00:21:03.780 "pending_bdev_io": 0, 00:21:03.780 "completed_nvme_io": 0, 00:21:03.780 "transports": [ 00:21:03.780 { 00:21:03.780 "trtype": "TCP" 00:21:03.780 } 00:21:03.780 ] 00:21:03.780 }, 00:21:03.780 { 00:21:03.780 "name": "nvmf_tgt_poll_group_003", 00:21:03.780 "admin_qpairs": 0, 00:21:03.780 "io_qpairs": 0, 00:21:03.780 "current_admin_qpairs": 0, 00:21:03.780 "current_io_qpairs": 0, 00:21:03.780 "pending_bdev_io": 0, 00:21:03.780 "completed_nvme_io": 0, 00:21:03.780 "transports": [ 00:21:03.780 { 00:21:03.780 "trtype": "TCP" 00:21:03.780 } 00:21:03.780 ] 00:21:03.780 } 00:21:03.780 ] 00:21:03.780 }' 00:21:03.780 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:21:03.780 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:21:03.780 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:21:03.780 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:21:03.780 18:15:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 3460388 00:21:11.902 Initializing NVMe Controllers 00:21:11.902 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:11.902 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:11.902 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:11.902 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:11.902 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:21:11.902 Initialization complete. Launching workers. 
00:21:11.902 ========================================================
00:21:11.902 Latency(us)
00:21:11.902 Device Information : IOPS MiB/s Average min max
00:21:11.902 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 5597.10 21.86 11436.41 1552.47 59589.43
00:21:11.902 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 14887.50 58.15 4298.84 1319.38 7600.43
00:21:11.902 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 5437.90 21.24 11770.58 1557.93 57120.52
00:21:11.902 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5119.00 20.00 12537.62 1664.15 58980.47
00:21:11.902 ========================================================
00:21:11.902 Total : 31041.49 121.26 8253.37 1319.38 59589.43
00:21:11.902
00:21:11.902 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini
00:21:11.902 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup
00:21:11.902 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync
00:21:11.902 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:21:11.902 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e
00:21:11.902 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20}
00:21:11.902 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:21:11.902 rmmod nvme_tcp
00:21:11.902 rmmod nvme_fabrics
00:21:11.902 rmmod nvme_keyring
00:21:11.902 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:21:11.902 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e
00:21:11.902 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0
00:21:11.902 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 3460135 ']'
00:21:11.902 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 3460135
00:21:11.902 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 3460135 ']'
00:21:11.902 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 3460135
00:21:11.902 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname
00:21:11.902 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:21:11.902 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3460135
00:21:11.902 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:21:11.902 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:21:11.902 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3460135'
00:21:11.902 killing process with pid 3460135
00:21:11.902 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 3460135
00:21:11.902 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 3460135
00:21:11.902 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:21:11.902 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:21:11.902 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:21:11.902 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:21:11.902 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns
00:21:11.902 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:21:11.902 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:21:11.902 18:16:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:21:15.187 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:21:15.187 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT
00:21:15.187
00:21:15.187 real 0m50.279s
00:21:15.187 user 2m49.143s
00:21:15.187 sys 0m8.699s
00:21:15.187 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1126 -- # xtrace_disable
00:21:15.187 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:21:15.187 ************************************
00:21:15.187 END TEST nvmf_perf_adq
00:21:15.187 ************************************
00:21:15.187 18:16:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@63 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp
00:21:15.187 18:16:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:21:15.187 18:16:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable
00:21:15.187 18:16:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:21:15.187 ************************************
00:21:15.187 START TEST nvmf_shutdown
00:21:15.187 ************************************
00:21:15.187 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp
00:21:15.187 * Looking for test storage...
00:21:15.187 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:15.187 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:15.187 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:21:15.187 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:15.187 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:15.187 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:15.187 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:15.187 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:15.187 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:15.187 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:15.187 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:15.187 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:15.187 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:15.187 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:21:15.187 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:21:15.187 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:15.187 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:15.187 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:15.187 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:15.187 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:15.187 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:15.187 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:15.187 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:15.187 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.187 18:16:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.187 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.187 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:21:15.187 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.187 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:21:15.187 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:15.187 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:15.187 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:15.187 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:15.187 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:15.187 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:15.187 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:15.187 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:15.187 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:15.188 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:15.188 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:21:15.188 18:16:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:15.188 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:15.188 18:16:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:15.188 ************************************ 00:21:15.188 START TEST nvmf_shutdown_tc1 00:21:15.188 ************************************ 00:21:15.188 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc1 00:21:15.188 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:21:15.188 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:21:15.188 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:15.188 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:15.188 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:15.188 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:15.188 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:15.188 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:15.188 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:15.188 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:15.188 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:15.188 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:15.188 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:15.188 18:16:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:20.465 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:20.465 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:20.465 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:20.465 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:20.465 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:20.465 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:20.465 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:20.465 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:21:20.465 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@295 -- # local -ga net_devs 00:21:20.465 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:21:20.465 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:21:20.465 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:21:20.465 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:21:20.465 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:21:20.465 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:20.465 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:20.465 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:20.465 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:20.465 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:20.465 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:20.465 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:20.465 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:20.465 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:20.465 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:20.465 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:20.465 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:20.465 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:20.465 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:20.465 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:20.465 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:20.465 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:20.465 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:20.465 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:20.465 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:20.465 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:20.465 18:16:13 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:20.465 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:20.465 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:20.465 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:20.465 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:20.465 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:20.465 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:20.465 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:20.465 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:20.465 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:20.465 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:20.465 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:20.465 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:20.465 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:20.465 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:20.465 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:20.465 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:20.465 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:20.465 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:20.465 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:20.465 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:20.465 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:20.465 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:20.465 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:20.465 Found net devices under 0000:86:00.0: cvl_0_0 00:21:20.465 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:20.465 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:20.465 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:20.465 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:20.465 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:20.465 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:20.465 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:20.465 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:20.465 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:20.465 Found net devices under 0000:86:00.1: cvl_0_1 00:21:20.466 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:20.466 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:20.466 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:21:20.466 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:20.466 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:20.466 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:20.466 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:20.466 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:20.466 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:20.466 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:20.466 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:20.466 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:20.466 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:20.466 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:20.466 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:20.466 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:20.466 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:20.466 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:20.466 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:20.466 18:16:13 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:20.466 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:20.466 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:20.466 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:20.728 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:20.728 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:20.728 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:20.728 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:20.728 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.290 ms 00:21:20.728 00:21:20.728 --- 10.0.0.2 ping statistics --- 00:21:20.728 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:20.728 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:21:20.728 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:20.728 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:20.728 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.240 ms 00:21:20.728 00:21:20.728 --- 10.0.0.1 ping statistics --- 00:21:20.728 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:20.728 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:21:20.728 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:20.728 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:21:20.728 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:20.728 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:20.728 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:20.728 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:20.728 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:20.728 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:20.728 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:20.728 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:21:20.728 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:20.728 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:20.728 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 
-- # set +x 00:21:20.728 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=3465611 00:21:20.728 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 3465611 00:21:20.728 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:20.728 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 3465611 ']' 00:21:20.728 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:20.728 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:20.728 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:20.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:20.728 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:20.728 18:16:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:20.728 [2024-07-24 18:16:13.694100] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 00:21:20.728 [2024-07-24 18:16:13.694142] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:20.728 EAL: No free 2048 kB hugepages reported on node 1 00:21:20.728 [2024-07-24 18:16:13.752178] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:20.983 [2024-07-24 18:16:13.829709] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:20.983 [2024-07-24 18:16:13.829747] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:20.983 [2024-07-24 18:16:13.829755] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:20.983 [2024-07-24 18:16:13.829761] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:20.983 [2024-07-24 18:16:13.829767] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
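The netns plumbing traced above is what lets a single dual-port E810 card act as both target and initiator over a real link: one port (cvl_0_0) is moved into a private namespace for the nvmf target, the other (cvl_0_1) stays in the root namespace for the initiator, and a firewall rule admits NVMe/TCP on port 4420. A condensed sketch of those steps, using only commands, names, and addresses that appear in this trace:

# target port lives in its own namespace so traffic must cross the wire
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
# addressing: initiator 10.0.0.1 (root ns), target 10.0.0.2 (test ns)
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# admit NVMe/TCP (port 4420) arriving on the initiator-side port
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# sanity-check both directions before starting the target
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The target itself is then launched inside the namespace via NVMF_TARGET_NS_CMD (ip netns exec cvl_0_0_ns_spdk nvmf_tgt ..., as at nvmf/common.sh@480 above), which is why the startup notices that follow come from the namespaced process.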
00:21:20.984 [2024-07-24 18:16:13.829801] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:20.984 [2024-07-24 18:16:13.829869] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:20.984 [2024-07-24 18:16:13.829976] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:20.984 [2024-07-24 18:16:13.829976] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:21:21.546 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:21.546 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:21:21.546 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:21.546 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:21.546 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:21.546 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:21.546 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:21.546 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.546 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:21.546 [2024-07-24 18:16:14.538923] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:21.546 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.546 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:21:21.546 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:21:21.546 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:21.546 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:21.546 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:21.546 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:21.546 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:21.546 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:21.546 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:21.546 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:21.546 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:21.546 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in 
"${num_subsystems[@]}" 00:21:21.546 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:21.546 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:21.546 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:21.546 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:21.546 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:21.546 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:21.546 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:21.546 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:21.546 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:21.546 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:21.546 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:21.546 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:21.546 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:21.546 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:21:21.546 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.546 18:16:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:21.546 Malloc1 00:21:21.803 [2024-07-24 18:16:14.638574] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:21.803 Malloc2 00:21:21.803 Malloc3 00:21:21.803 Malloc4 00:21:21.803 Malloc5 00:21:21.803 Malloc6 00:21:21.803 Malloc7 00:21:22.061 Malloc8 00:21:22.061 Malloc9 00:21:22.061 Malloc10 00:21:22.061 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.061 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:21:22.061 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:22.061 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:22.061 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=3465896 00:21:22.061 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 3465896 /var/tmp/bdevperf.sock 00:21:22.061 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 3465896 ']' 00:21:22.061 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:22.061 18:16:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:21:22.061 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:22.061 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:22.061 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:22.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:22.061 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:21:22.061 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:22.061 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:21:22.061 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:22.061 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:22.061 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:22.061 { 00:21:22.061 "params": { 00:21:22.061 "name": "Nvme$subsystem", 00:21:22.061 "trtype": "$TEST_TRANSPORT", 00:21:22.061 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:22.061 "adrfam": "ipv4", 00:21:22.061 "trsvcid": "$NVMF_PORT", 00:21:22.061 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:22.061 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:22.061 "hdgst": ${hdgst:-false}, 00:21:22.061 "ddgst": ${ddgst:-false} 00:21:22.061 }, 00:21:22.061 "method": "bdev_nvme_attach_controller" 00:21:22.061 } 00:21:22.061 EOF 00:21:22.061 )") 00:21:22.061 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:22.061 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:22.061 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:22.061 { 00:21:22.061 "params": { 00:21:22.061 "name": "Nvme$subsystem", 00:21:22.061 "trtype": "$TEST_TRANSPORT", 00:21:22.061 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:22.061 "adrfam": "ipv4", 00:21:22.061 "trsvcid": "$NVMF_PORT", 00:21:22.061 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:22.061 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:22.061 "hdgst": ${hdgst:-false}, 00:21:22.061 "ddgst": ${ddgst:-false} 00:21:22.061 }, 00:21:22.061 "method": "bdev_nvme_attach_controller" 00:21:22.061 } 00:21:22.061 EOF 00:21:22.061 )") 00:21:22.061 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:22.061 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:22.061 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:22.061 { 00:21:22.061 "params": { 00:21:22.061 "name": 
"Nvme$subsystem", 00:21:22.061 "trtype": "$TEST_TRANSPORT", 00:21:22.061 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:22.061 "adrfam": "ipv4", 00:21:22.062 "trsvcid": "$NVMF_PORT", 00:21:22.062 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:22.062 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:22.062 "hdgst": ${hdgst:-false}, 00:21:22.062 "ddgst": ${ddgst:-false} 00:21:22.062 }, 00:21:22.062 "method": "bdev_nvme_attach_controller" 00:21:22.062 } 00:21:22.062 EOF 00:21:22.062 )") 00:21:22.062 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:22.062 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:22.062 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:22.062 { 00:21:22.062 "params": { 00:21:22.062 "name": "Nvme$subsystem", 00:21:22.062 "trtype": "$TEST_TRANSPORT", 00:21:22.062 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:22.062 "adrfam": "ipv4", 00:21:22.062 "trsvcid": "$NVMF_PORT", 00:21:22.062 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:22.062 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:22.062 "hdgst": ${hdgst:-false}, 00:21:22.062 "ddgst": ${ddgst:-false} 00:21:22.062 }, 00:21:22.062 "method": "bdev_nvme_attach_controller" 00:21:22.062 } 00:21:22.062 EOF 00:21:22.062 )") 00:21:22.062 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:22.062 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:22.062 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:22.062 { 00:21:22.062 "params": { 00:21:22.062 "name": "Nvme$subsystem", 00:21:22.062 "trtype": "$TEST_TRANSPORT", 00:21:22.062 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:22.062 "adrfam": "ipv4", 00:21:22.062 "trsvcid": "$NVMF_PORT", 00:21:22.062 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:22.062 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:22.062 "hdgst": ${hdgst:-false}, 00:21:22.062 "ddgst": ${ddgst:-false} 00:21:22.062 }, 00:21:22.062 "method": "bdev_nvme_attach_controller" 00:21:22.062 } 00:21:22.062 EOF 00:21:22.062 )") 00:21:22.062 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:22.062 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:22.062 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:22.062 { 00:21:22.062 "params": { 00:21:22.062 "name": "Nvme$subsystem", 00:21:22.062 "trtype": "$TEST_TRANSPORT", 00:21:22.062 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:22.062 "adrfam": "ipv4", 00:21:22.062 "trsvcid": "$NVMF_PORT", 00:21:22.062 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:22.062 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:22.062 "hdgst": ${hdgst:-false}, 00:21:22.062 "ddgst": ${ddgst:-false} 00:21:22.062 }, 00:21:22.062 "method": "bdev_nvme_attach_controller" 00:21:22.062 } 00:21:22.062 EOF 00:21:22.062 )") 00:21:22.062 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:22.062 [2024-07-24 18:16:15.103929] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 
00:21:22.062 [2024-07-24 18:16:15.103974] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:21:22.062 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:22.062 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:22.062 { 00:21:22.062 "params": { 00:21:22.062 "name": "Nvme$subsystem", 00:21:22.062 "trtype": "$TEST_TRANSPORT", 00:21:22.062 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:22.062 "adrfam": "ipv4", 00:21:22.062 "trsvcid": "$NVMF_PORT", 00:21:22.062 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:22.062 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:22.062 "hdgst": ${hdgst:-false}, 00:21:22.062 "ddgst": ${ddgst:-false} 00:21:22.062 }, 00:21:22.062 "method": "bdev_nvme_attach_controller" 00:21:22.062 } 00:21:22.062 EOF 00:21:22.062 )") 00:21:22.062 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:22.062 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:22.062 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:22.062 { 00:21:22.062 "params": { 00:21:22.062 "name": "Nvme$subsystem", 00:21:22.062 "trtype": "$TEST_TRANSPORT", 00:21:22.062 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:22.062 "adrfam": "ipv4", 00:21:22.062 "trsvcid": "$NVMF_PORT", 00:21:22.062 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:22.062 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:22.062 "hdgst": ${hdgst:-false}, 00:21:22.062 "ddgst": ${ddgst:-false} 00:21:22.062 }, 00:21:22.062 "method": "bdev_nvme_attach_controller" 00:21:22.062 } 00:21:22.062 EOF 00:21:22.062 )") 00:21:22.062 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:22.062 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:22.062 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:22.062 { 00:21:22.062 "params": { 00:21:22.062 "name": "Nvme$subsystem", 00:21:22.062 "trtype": "$TEST_TRANSPORT", 00:21:22.062 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:22.062 "adrfam": "ipv4", 00:21:22.062 "trsvcid": "$NVMF_PORT", 00:21:22.062 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:22.062 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:22.062 "hdgst": ${hdgst:-false}, 00:21:22.062 "ddgst": ${ddgst:-false} 00:21:22.062 }, 00:21:22.062 "method": "bdev_nvme_attach_controller" 00:21:22.062 } 00:21:22.062 EOF 00:21:22.062 )") 00:21:22.062 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:22.062 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:22.062 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:22.062 { 00:21:22.062 "params": { 00:21:22.062 "name": "Nvme$subsystem", 00:21:22.062 "trtype": "$TEST_TRANSPORT", 00:21:22.062 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:22.062 "adrfam": "ipv4", 
00:21:22.062 "trsvcid": "$NVMF_PORT", 00:21:22.062 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:22.062 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:22.062 "hdgst": ${hdgst:-false}, 00:21:22.062 "ddgst": ${ddgst:-false} 00:21:22.062 }, 00:21:22.062 "method": "bdev_nvme_attach_controller" 00:21:22.062 } 00:21:22.062 EOF 00:21:22.062 )") 00:21:22.062 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:22.062 EAL: No free 2048 kB hugepages reported on node 1 00:21:22.062 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:21:22.062 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:21:22.062 18:16:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:22.062 "params": { 00:21:22.062 "name": "Nvme1", 00:21:22.062 "trtype": "tcp", 00:21:22.062 "traddr": "10.0.0.2", 00:21:22.062 "adrfam": "ipv4", 00:21:22.062 "trsvcid": "4420", 00:21:22.062 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:22.062 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:22.062 "hdgst": false, 00:21:22.062 "ddgst": false 00:21:22.062 }, 00:21:22.062 "method": "bdev_nvme_attach_controller" 00:21:22.062 },{ 00:21:22.062 "params": { 00:21:22.062 "name": "Nvme2", 00:21:22.062 "trtype": "tcp", 00:21:22.062 "traddr": "10.0.0.2", 00:21:22.062 "adrfam": "ipv4", 00:21:22.062 "trsvcid": "4420", 00:21:22.062 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:22.062 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:22.062 "hdgst": false, 00:21:22.062 "ddgst": false 00:21:22.062 }, 00:21:22.062 "method": "bdev_nvme_attach_controller" 00:21:22.062 },{ 00:21:22.062 "params": { 00:21:22.062 "name": "Nvme3", 00:21:22.062 "trtype": "tcp", 00:21:22.062 "traddr": "10.0.0.2", 00:21:22.062 "adrfam": "ipv4", 00:21:22.062 "trsvcid": "4420", 00:21:22.062 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:22.062 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:22.062 "hdgst": false, 00:21:22.062 "ddgst": false 00:21:22.062 }, 00:21:22.062 "method": "bdev_nvme_attach_controller" 00:21:22.062 },{ 00:21:22.062 "params": { 00:21:22.062 "name": "Nvme4", 00:21:22.062 "trtype": "tcp", 00:21:22.062 "traddr": "10.0.0.2", 00:21:22.062 "adrfam": "ipv4", 00:21:22.062 "trsvcid": "4420", 00:21:22.062 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:22.062 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:22.062 "hdgst": false, 00:21:22.062 "ddgst": false 00:21:22.062 }, 00:21:22.062 "method": "bdev_nvme_attach_controller" 00:21:22.062 },{ 00:21:22.062 "params": { 00:21:22.062 "name": "Nvme5", 00:21:22.062 "trtype": "tcp", 00:21:22.062 "traddr": "10.0.0.2", 00:21:22.062 "adrfam": "ipv4", 00:21:22.062 "trsvcid": "4420", 00:21:22.062 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:22.062 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:22.063 "hdgst": false, 00:21:22.063 "ddgst": false 00:21:22.063 }, 00:21:22.063 "method": "bdev_nvme_attach_controller" 00:21:22.063 },{ 00:21:22.063 "params": { 00:21:22.063 "name": "Nvme6", 00:21:22.063 "trtype": "tcp", 00:21:22.063 "traddr": "10.0.0.2", 00:21:22.063 "adrfam": "ipv4", 00:21:22.063 "trsvcid": "4420", 00:21:22.063 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:22.063 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:22.063 "hdgst": false, 00:21:22.063 "ddgst": false 00:21:22.063 }, 00:21:22.063 "method": "bdev_nvme_attach_controller" 00:21:22.063 },{ 00:21:22.063 "params": { 00:21:22.063 "name": "Nvme7", 00:21:22.063 "trtype": 
"tcp", 00:21:22.063 "traddr": "10.0.0.2", 00:21:22.063 "adrfam": "ipv4", 00:21:22.063 "trsvcid": "4420", 00:21:22.063 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:22.063 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:22.063 "hdgst": false, 00:21:22.063 "ddgst": false 00:21:22.063 }, 00:21:22.063 "method": "bdev_nvme_attach_controller" 00:21:22.063 },{ 00:21:22.063 "params": { 00:21:22.063 "name": "Nvme8", 00:21:22.063 "trtype": "tcp", 00:21:22.063 "traddr": "10.0.0.2", 00:21:22.063 "adrfam": "ipv4", 00:21:22.063 "trsvcid": "4420", 00:21:22.063 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:22.063 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:22.063 "hdgst": false, 00:21:22.063 "ddgst": false 00:21:22.063 }, 00:21:22.063 "method": "bdev_nvme_attach_controller" 00:21:22.063 },{ 00:21:22.063 "params": { 00:21:22.063 "name": "Nvme9", 00:21:22.063 "trtype": "tcp", 00:21:22.063 "traddr": "10.0.0.2", 00:21:22.063 "adrfam": "ipv4", 00:21:22.063 "trsvcid": "4420", 00:21:22.063 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:22.063 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:22.063 "hdgst": false, 00:21:22.063 "ddgst": false 00:21:22.063 }, 00:21:22.063 "method": "bdev_nvme_attach_controller" 00:21:22.063 },{ 00:21:22.063 "params": { 00:21:22.063 "name": "Nvme10", 00:21:22.063 "trtype": "tcp", 00:21:22.063 "traddr": "10.0.0.2", 00:21:22.063 "adrfam": "ipv4", 00:21:22.063 "trsvcid": "4420", 00:21:22.063 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:22.063 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:22.063 "hdgst": false, 00:21:22.063 "ddgst": false 00:21:22.063 }, 00:21:22.063 "method": "bdev_nvme_attach_controller" 00:21:22.063 }' 00:21:22.320 [2024-07-24 18:16:15.158339] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:22.320 [2024-07-24 18:16:15.236992] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:23.688 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:23.688 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:21:23.688 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:23.688 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.688 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:23.688 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.688 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 3465896 00:21:23.688 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:21:23.688 18:16:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:21:24.618 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 3465896 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:21:24.618 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 3465611 00:21:24.618 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:21:24.618 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:24.618 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:21:24.618 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:21:24.618 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:24.618 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:24.618 { 00:21:24.618 "params": { 00:21:24.618 "name": "Nvme$subsystem", 00:21:24.618 "trtype": "$TEST_TRANSPORT", 00:21:24.618 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:24.618 "adrfam": "ipv4", 00:21:24.618 "trsvcid": "$NVMF_PORT", 00:21:24.618 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:24.618 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:24.618 "hdgst": ${hdgst:-false}, 00:21:24.618 "ddgst": ${ddgst:-false} 00:21:24.618 }, 00:21:24.618 "method": "bdev_nvme_attach_controller" 00:21:24.618 } 00:21:24.618 EOF 00:21:24.618 )") 00:21:24.618 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:24.618 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:24.618 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:24.618 { 00:21:24.618 "params": { 00:21:24.618 "name": "Nvme$subsystem", 00:21:24.618 "trtype": "$TEST_TRANSPORT", 00:21:24.618 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:24.618 "adrfam": "ipv4", 00:21:24.618 "trsvcid": "$NVMF_PORT", 00:21:24.618 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:24.618 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:24.618 "hdgst": ${hdgst:-false}, 00:21:24.618 "ddgst": ${ddgst:-false} 00:21:24.618 }, 00:21:24.618 "method": "bdev_nvme_attach_controller" 00:21:24.618 } 00:21:24.618 EOF 00:21:24.618 )") 00:21:24.618 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:24.618 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:24.618 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:24.618 { 00:21:24.618 "params": { 00:21:24.618 "name": "Nvme$subsystem", 00:21:24.618 "trtype": "$TEST_TRANSPORT", 00:21:24.618 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:24.618 "adrfam": "ipv4", 00:21:24.618 "trsvcid": "$NVMF_PORT", 00:21:24.618 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:24.618 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:24.618 "hdgst": ${hdgst:-false}, 00:21:24.618 "ddgst": ${ddgst:-false} 00:21:24.618 }, 00:21:24.618 "method": "bdev_nvme_attach_controller" 00:21:24.618 } 00:21:24.618 EOF 00:21:24.618 )") 00:21:24.618 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:24.618 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:24.618 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:24.618 { 00:21:24.618 "params": { 00:21:24.619 "name": "Nvme$subsystem", 00:21:24.619 "trtype": "$TEST_TRANSPORT", 00:21:24.619 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:24.619 "adrfam": "ipv4", 00:21:24.619 "trsvcid": "$NVMF_PORT", 00:21:24.619 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:24.619 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:24.619 "hdgst": ${hdgst:-false}, 00:21:24.619 "ddgst": ${ddgst:-false} 00:21:24.619 }, 00:21:24.619 "method": "bdev_nvme_attach_controller" 00:21:24.619 } 00:21:24.619 EOF 00:21:24.619 )") 00:21:24.619 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:24.619 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:24.619 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:24.619 { 00:21:24.619 "params": { 00:21:24.619 "name": "Nvme$subsystem", 00:21:24.619 "trtype": "$TEST_TRANSPORT", 00:21:24.619 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:24.619 "adrfam": "ipv4", 00:21:24.619 "trsvcid": "$NVMF_PORT", 00:21:24.619 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:24.619 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:24.619 "hdgst": ${hdgst:-false}, 00:21:24.619 "ddgst": ${ddgst:-false} 00:21:24.619 }, 00:21:24.619 "method": "bdev_nvme_attach_controller" 00:21:24.619 } 00:21:24.619 EOF 00:21:24.619 )") 00:21:24.619 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:24.619 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:24.619 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:24.619 { 00:21:24.619 "params": { 00:21:24.619 "name": "Nvme$subsystem", 00:21:24.619 "trtype": "$TEST_TRANSPORT", 00:21:24.619 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:24.619 "adrfam": "ipv4", 00:21:24.619 "trsvcid": "$NVMF_PORT", 00:21:24.619 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:24.619 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:24.619 "hdgst": ${hdgst:-false}, 00:21:24.619 "ddgst": ${ddgst:-false} 00:21:24.619 }, 00:21:24.619 "method": "bdev_nvme_attach_controller" 00:21:24.619 } 00:21:24.619 EOF 00:21:24.619 )") 00:21:24.619 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:24.619 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:24.619 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:24.619 { 00:21:24.619 "params": { 00:21:24.619 "name": "Nvme$subsystem", 00:21:24.619 "trtype": "$TEST_TRANSPORT", 00:21:24.619 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:24.619 "adrfam": "ipv4", 00:21:24.619 "trsvcid": "$NVMF_PORT", 00:21:24.619 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:24.619 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:24.619 "hdgst": ${hdgst:-false}, 00:21:24.619 "ddgst": ${ddgst:-false} 00:21:24.619 }, 00:21:24.619 "method": "bdev_nvme_attach_controller" 00:21:24.619 } 00:21:24.619 EOF 00:21:24.619 )") 00:21:24.619 [2024-07-24 18:16:17.643529] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 
00:21:24.619 [2024-07-24 18:16:17.643578] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3466374 ] 00:21:24.619 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:24.619 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:24.619 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:24.619 { 00:21:24.619 "params": { 00:21:24.619 "name": "Nvme$subsystem", 00:21:24.619 "trtype": "$TEST_TRANSPORT", 00:21:24.619 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:24.619 "adrfam": "ipv4", 00:21:24.619 "trsvcid": "$NVMF_PORT", 00:21:24.619 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:24.619 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:24.619 "hdgst": ${hdgst:-false}, 00:21:24.619 "ddgst": ${ddgst:-false} 00:21:24.619 }, 00:21:24.619 "method": "bdev_nvme_attach_controller" 00:21:24.619 } 00:21:24.619 EOF 00:21:24.619 )") 00:21:24.619 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:24.619 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:24.619 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:24.619 { 00:21:24.619 "params": { 00:21:24.619 "name": "Nvme$subsystem", 00:21:24.619 "trtype": "$TEST_TRANSPORT", 00:21:24.619 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:24.619 "adrfam": "ipv4", 00:21:24.619 "trsvcid": "$NVMF_PORT", 00:21:24.619 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:24.619 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:24.619 "hdgst": ${hdgst:-false}, 00:21:24.619 "ddgst": ${ddgst:-false} 00:21:24.619 }, 00:21:24.619 "method": "bdev_nvme_attach_controller" 00:21:24.619 } 00:21:24.619 EOF 00:21:24.619 )") 00:21:24.619 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:24.619 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:24.619 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:24.619 { 00:21:24.619 "params": { 00:21:24.619 "name": "Nvme$subsystem", 00:21:24.619 "trtype": "$TEST_TRANSPORT", 00:21:24.619 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:24.619 "adrfam": "ipv4", 00:21:24.619 "trsvcid": "$NVMF_PORT", 00:21:24.619 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:24.619 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:24.619 "hdgst": ${hdgst:-false}, 00:21:24.619 "ddgst": ${ddgst:-false} 00:21:24.619 }, 00:21:24.619 "method": "bdev_nvme_attach_controller" 00:21:24.619 } 00:21:24.619 EOF 00:21:24.619 )") 00:21:24.619 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:24.619 EAL: No free 2048 kB hugepages reported on node 1 00:21:24.619 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
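The config+=(...) / cat / jq steps traced at nvmf/common.sh@534-556 are gen_nvmf_target_json at work: each iteration renders one bdev_nvme_attach_controller block from a heredoc, and the blocks are comma-joined into the JSON that bdevperf reads from /dev/fd/62 (the IFS=, join and the resolved printf follow immediately below). A minimal sketch of that pattern, with the literal address and port values this run resolves to; the loop here uses 1 2 3 where the test uses {1..10}, and a plain <<EOF heredoc where the script uses the indented <<-EOF variant:

# one attach-controller block per subsystem
config=()
for subsystem in 1 2 3; do
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
# comma-join and emit, as nvmf/common.sh@557-558 do
IFS=,
printf '%s\n' "${config[*]}"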
00:21:24.619 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:21:24.619 18:16:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:24.619 "params": { 00:21:24.619 "name": "Nvme1", 00:21:24.619 "trtype": "tcp", 00:21:24.619 "traddr": "10.0.0.2", 00:21:24.619 "adrfam": "ipv4", 00:21:24.619 "trsvcid": "4420", 00:21:24.619 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:24.619 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:24.619 "hdgst": false, 00:21:24.619 "ddgst": false 00:21:24.619 }, 00:21:24.619 "method": "bdev_nvme_attach_controller" 00:21:24.619 },{ 00:21:24.619 "params": { 00:21:24.619 "name": "Nvme2", 00:21:24.619 "trtype": "tcp", 00:21:24.619 "traddr": "10.0.0.2", 00:21:24.619 "adrfam": "ipv4", 00:21:24.619 "trsvcid": "4420", 00:21:24.619 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:24.619 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:24.619 "hdgst": false, 00:21:24.619 "ddgst": false 00:21:24.619 }, 00:21:24.619 "method": "bdev_nvme_attach_controller" 00:21:24.619 },{ 00:21:24.619 "params": { 00:21:24.619 "name": "Nvme3", 00:21:24.619 "trtype": "tcp", 00:21:24.619 "traddr": "10.0.0.2", 00:21:24.619 "adrfam": "ipv4", 00:21:24.619 "trsvcid": "4420", 00:21:24.619 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:24.619 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:24.619 "hdgst": false, 00:21:24.619 "ddgst": false 00:21:24.619 }, 00:21:24.619 "method": "bdev_nvme_attach_controller" 00:21:24.619 },{ 00:21:24.619 "params": { 00:21:24.619 "name": "Nvme4", 00:21:24.619 "trtype": "tcp", 00:21:24.619 "traddr": "10.0.0.2", 00:21:24.619 "adrfam": "ipv4", 00:21:24.619 "trsvcid": "4420", 00:21:24.619 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:24.619 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:24.619 "hdgst": false, 00:21:24.619 "ddgst": false 00:21:24.619 }, 00:21:24.619 "method": "bdev_nvme_attach_controller" 00:21:24.619 },{ 00:21:24.619 "params": { 00:21:24.619 "name": "Nvme5", 00:21:24.619 "trtype": "tcp", 00:21:24.619 "traddr": "10.0.0.2", 00:21:24.619 "adrfam": "ipv4", 00:21:24.619 "trsvcid": "4420", 00:21:24.619 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:24.619 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:24.619 "hdgst": false, 00:21:24.619 "ddgst": false 00:21:24.619 }, 00:21:24.619 "method": "bdev_nvme_attach_controller" 00:21:24.619 },{ 00:21:24.619 "params": { 00:21:24.619 "name": "Nvme6", 00:21:24.619 "trtype": "tcp", 00:21:24.619 "traddr": "10.0.0.2", 00:21:24.619 "adrfam": "ipv4", 00:21:24.619 "trsvcid": "4420", 00:21:24.619 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:24.619 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:24.619 "hdgst": false, 00:21:24.619 "ddgst": false 00:21:24.619 }, 00:21:24.619 "method": "bdev_nvme_attach_controller" 00:21:24.619 },{ 00:21:24.620 "params": { 00:21:24.620 "name": "Nvme7", 00:21:24.620 "trtype": "tcp", 00:21:24.620 "traddr": "10.0.0.2", 00:21:24.620 "adrfam": "ipv4", 00:21:24.620 "trsvcid": "4420", 00:21:24.620 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:24.620 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:24.620 "hdgst": false, 00:21:24.620 "ddgst": false 00:21:24.620 }, 00:21:24.620 "method": "bdev_nvme_attach_controller" 00:21:24.620 },{ 00:21:24.620 "params": { 00:21:24.620 "name": "Nvme8", 00:21:24.620 "trtype": "tcp", 00:21:24.620 "traddr": "10.0.0.2", 00:21:24.620 "adrfam": "ipv4", 00:21:24.620 "trsvcid": "4420", 00:21:24.620 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:24.620 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:21:24.620 "hdgst": false, 00:21:24.620 "ddgst": false 00:21:24.620 }, 00:21:24.620 "method": "bdev_nvme_attach_controller" 00:21:24.620 },{ 00:21:24.620 "params": { 00:21:24.620 "name": "Nvme9", 00:21:24.620 "trtype": "tcp", 00:21:24.620 "traddr": "10.0.0.2", 00:21:24.620 "adrfam": "ipv4", 00:21:24.620 "trsvcid": "4420", 00:21:24.620 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:24.620 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:24.620 "hdgst": false, 00:21:24.620 "ddgst": false 00:21:24.620 }, 00:21:24.620 "method": "bdev_nvme_attach_controller" 00:21:24.620 },{ 00:21:24.620 "params": { 00:21:24.620 "name": "Nvme10", 00:21:24.620 "trtype": "tcp", 00:21:24.620 "traddr": "10.0.0.2", 00:21:24.620 "adrfam": "ipv4", 00:21:24.620 "trsvcid": "4420", 00:21:24.620 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:24.620 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:24.620 "hdgst": false, 00:21:24.620 "ddgst": false 00:21:24.620 }, 00:21:24.620 "method": "bdev_nvme_attach_controller" 00:21:24.620 }' 00:21:24.620 [2024-07-24 18:16:17.700710] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:24.877 [2024-07-24 18:16:17.774463] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:26.246 Running I/O for 1 seconds... 00:21:27.615 00:21:27.615 Latency(us) 00:21:27.615 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:27.615 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:27.615 Verification LBA range: start 0x0 length 0x400 00:21:27.615 Nvme1n1 : 1.11 292.73 18.30 0.00 0.00 216542.97 2574.63 212711.13 00:21:27.615 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:27.615 Verification LBA range: start 0x0 length 0x400 00:21:27.615 Nvme2n1 : 1.03 248.94 15.56 0.00 0.00 251013.36 27587.54 218702.99 00:21:27.615 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:27.615 Verification LBA range: start 0x0 length 0x400 00:21:27.615 Nvme3n1 : 1.10 292.20 18.26 0.00 0.00 210975.01 13544.11 213709.78 00:21:27.615 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:27.615 Verification LBA range: start 0x0 length 0x400 00:21:27.615 Nvme4n1 : 1.10 295.41 18.46 0.00 0.00 205601.09 14667.58 203723.34 00:21:27.615 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:27.615 Verification LBA range: start 0x0 length 0x400 00:21:27.615 Nvme5n1 : 1.12 286.56 17.91 0.00 0.00 209202.61 15978.30 212711.13 00:21:27.615 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:27.615 Verification LBA range: start 0x0 length 0x400 00:21:27.615 Nvme6n1 : 1.12 285.81 17.86 0.00 0.00 206709.91 19099.06 211712.49 00:21:27.615 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:27.615 Verification LBA range: start 0x0 length 0x400 00:21:27.615 Nvme7n1 : 1.10 289.84 18.11 0.00 0.00 200489.84 16976.94 224694.86 00:21:27.615 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:27.615 Verification LBA range: start 0x0 length 0x400 00:21:27.615 Nvme8n1 : 1.12 284.66 17.79 0.00 0.00 201498.43 13294.45 216705.71 00:21:27.615 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:27.615 Verification LBA range: start 0x0 length 0x400 00:21:27.615 Nvme9n1 : 1.15 329.66 20.60 0.00 0.00 171660.79 4774.77 231685.36 00:21:27.615 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:27.615 Verification LBA range: start 
0x0 length 0x400 00:21:27.615 Nvme10n1 : 1.14 279.93 17.50 0.00 0.00 199248.85 15541.39 216705.71 00:21:27.615 =================================================================================================================== 00:21:27.616 Total : 2885.73 180.36 0.00 0.00 205766.50 2574.63 231685.36 00:21:27.616 18:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:21:27.616 18:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:21:27.616 18:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:27.616 18:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:27.616 18:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:21:27.616 18:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:27.616 18:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:21:27.616 18:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:27.616 18:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:21:27.616 18:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:27.616 18:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:27.616 rmmod nvme_tcp 00:21:27.616 rmmod nvme_fabrics 00:21:27.616 rmmod nvme_keyring 00:21:27.616 18:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:27.616 18:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:21:27.616 18:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:21:27.616 18:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 3465611 ']' 00:21:27.616 18:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 3465611 00:21:27.616 18:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # '[' -z 3465611 ']' 00:21:27.616 18:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # kill -0 3465611 00:21:27.616 18:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # uname 00:21:27.616 18:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:27.616 18:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3465611 00:21:27.616 18:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:27.616 18:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:27.616 18:16:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3465611' 00:21:27.616 killing process with pid 3465611 00:21:27.616 18:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@969 -- # kill 3465611 00:21:27.616 18:16:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@974 -- # wait 3465611 00:21:28.181 18:16:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:28.181 18:16:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:28.181 18:16:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:28.181 18:16:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:28.181 18:16:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:28.181 18:16:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:28.181 18:16:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:28.181 18:16:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:30.080 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:30.080 00:21:30.080 real 0m15.108s 00:21:30.080 user 0m34.361s 00:21:30.080 sys 0m5.572s 00:21:30.080 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:30.080 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:30.080 ************************************ 00:21:30.080 END TEST nvmf_shutdown_tc1 00:21:30.080 ************************************ 00:21:30.338 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:21:30.338 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:30.338 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:30.338 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:30.338 ************************************ 00:21:30.338 START TEST nvmf_shutdown_tc2 00:21:30.338 ************************************ 00:21:30.338 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc2 00:21:30.338 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:21:30.338 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:21:30.338 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:30.338 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:30.338 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:21:30.338 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:30.338 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:30.338 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:30.338 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:30.338 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:30.338 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:30.339 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:30.339 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:30.339 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:30.339 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:30.339 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:30.339 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:30.339 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:30.339 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:30.339 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:30.339 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:30.339 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:21:30.339 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:30.339 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:21:30.339 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:21:30.339 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:21:30.339 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:21:30.339 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:21:30.339 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:30.339 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:30.339 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:30.339 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:30.339 
18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:30.339 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:30.339 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:30.339 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:30.339 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:30.339 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:30.339 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:30.339 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:30.339 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:30.339 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:30.339 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:30.339 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:30.339 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:30.339 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:30.339 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:30.339 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:30.339 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:30.339 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:30.339 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:30.339 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:30.339 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:30.339 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:30.339 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:30.339 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:30.339 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:30.339 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:30.339 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == 
unbound ]] 00:21:30.339 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:30.339 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:30.339 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:30.339 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:30.339 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:30.339 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:30.339 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:30.339 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:30.339 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:30.339 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:30.339 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:30.339 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:30.339 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:30.339 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:30.339 Found net devices under 0000:86:00.0: cvl_0_0 00:21:30.339 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:30.339 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:30.339 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:30.339 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:30.339 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:30.339 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:30.339 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:30.339 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:30.339 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:30.339 Found net devices under 0000:86:00.1: cvl_0_1 00:21:30.339 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:30.339 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:30.339 
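
The pass traced above is pure sysfs bookkeeping: nvmf/common.sh keeps tables of known Intel (e810, x722) and Mellanox device IDs, matches each PCI function against them (0x8086:0x159b is an E810 port handled by the ice driver, per the arrays built at @296-@302), and reads the kernel interface name out of /sys/bus/pci/devices/<bdf>/net/. A minimal standalone sketch of the same lookup, assuming lspci is available; the vendor/device IDs come straight from the arrays above:

for pci in $(lspci -D -d 8086:159b | awk '{print $1}'); do
    echo "E810 port at $pci"
    ls "/sys/bus/pci/devices/$pci/net/"   # kernel netdev name, e.g. cvl_0_0
done
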
18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:21:30.339 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:30.339 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:30.339 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:30.339 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:30.339 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:30.339 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:30.339 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:30.339 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:30.339 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:30.339 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:30.339 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:30.339 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:30.339 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:30.339 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:30.339 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:30.339 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:30.339 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:30.339 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:30.339 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:30.339 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:30.339 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:30.339 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:30.597 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:30.597 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:30.597 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:21:30.597 00:21:30.597 --- 10.0.0.2 ping statistics --- 00:21:30.597 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:30.597 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:21:30.597 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:30.597 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:30.597 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:21:30.597 00:21:30.597 --- 10.0.0.1 ping statistics --- 00:21:30.597 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:30.597 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:21:30.597 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:30.597 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:21:30.597 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:30.597 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:30.597 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:30.597 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:30.597 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:30.597 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:30.597 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:30.597 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:21:30.597 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:30.597 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:30.597 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:30.597 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3467402 00:21:30.597 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3467402 00:21:30.597 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 3467402 ']' 00:21:30.597 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:30.597 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:30.597 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:30.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
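
Both pings succeeding closes the loop on the topology nvmf_tcp_init built just above: one port of the ice NIC is moved into a private network namespace to host the target, its sibling stays in the root namespace as the initiator, and the two speak NVMe/TCP over the physical link. The wiring, reduced to its core commands with the names from this run:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
ping -c 1 10.0.0.2                                    # initiator -> target, as above
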
00:21:30.597 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:30.597 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:30.597 18:16:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:30.597 [2024-07-24 18:16:23.517098] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 00:21:30.597 [2024-07-24 18:16:23.517140] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:30.597 EAL: No free 2048 kB hugepages reported on node 1 00:21:30.597 [2024-07-24 18:16:23.576280] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:30.597 [2024-07-24 18:16:23.655611] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:30.597 [2024-07-24 18:16:23.655648] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:30.597 [2024-07-24 18:16:23.655654] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:30.597 [2024-07-24 18:16:23.655671] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:30.597 [2024-07-24 18:16:23.655676] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:30.597 [2024-07-24 18:16:23.655775] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:30.597 [2024-07-24 18:16:23.655795] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:30.597 [2024-07-24 18:16:23.655901] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:30.597 [2024-07-24 18:16:23.655902] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:21:31.528 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:31.528 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:21:31.528 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:31.528 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:31.528 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:31.528 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:31.528 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:31.528 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.528 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:31.528 [2024-07-24 18:16:24.359698] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
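
With nvmf_tgt running inside the namespace (launched via 'ip netns exec cvl_0_0_ns_spdk' with core mask 0x1E, i.e. the four reactors on cores 1-4 noted above), target/shutdown.sh@20 creates the TCP transport over the default RPC socket. Issued by hand with the rpc.py bundled in this SPDK tree, the same call would look roughly like:

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
# -t tcp : transport type
# -u 8192: I/O unit size in bytes
# -o     : the TCP-specific flag carried in NVMF_TRANSPORT_OPTS earlier in this
#          log (believed to toggle the C2H success optimization -- an assumption;
#          verify with 'scripts/rpc.py nvmf_create_transport -h')
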
00:21:31.528 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.528 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:21:31.528 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:21:31.528 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:31.528 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:31.528 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:31.528 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:31.529 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:31.529 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:31.529 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:31.529 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:31.529 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:31.529 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:31.529 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:31.529 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:31.529 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:31.529 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:31.529 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:31.529 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:31.529 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:31.529 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:31.529 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:31.529 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:31.529 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:31.529 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:31.529 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:31.529 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # 
rpc_cmd 00:21:31.529 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.529 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:31.529 Malloc1 00:21:31.529 [2024-07-24 18:16:24.455150] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:31.529 Malloc2 00:21:31.529 Malloc3 00:21:31.529 Malloc4 00:21:31.529 Malloc5 00:21:31.786 Malloc6 00:21:31.786 Malloc7 00:21:31.786 Malloc8 00:21:31.786 Malloc9 00:21:31.786 Malloc10 00:21:31.786 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.786 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:21:31.786 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:31.786 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:32.066 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=3467681 00:21:32.066 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 3467681 /var/tmp/bdevperf.sock 00:21:32.066 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 3467681 ']' 00:21:32.066 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:32.066 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:32.066 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:32.066 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:32.066 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:32.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
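
The job starting here is SPDK's bdevperf example: it attaches one NVMe/TCP controller per subsystem from the JSON handed to it on fd 63 and drives I/O against the resulting bdevs. The harness then polls Nvme1n1's read counter over bdevperf's own RPC socket until it crosses 100 (shutdown.sh@60-@67, traced below) so that the kill lands mid-I/O. A sketch of the equivalent manual run and poll, with config.json standing in for the /dev/fd/63 process substitution:

./build/examples/bdevperf -r /var/tmp/bdevperf.sock --json config.json \
    -q 64 -o 65536 -w verify -t 10 &
# -q 64      queue depth per bdev          -o 65536  I/O size in bytes (64 KiB)
# -w verify  write-then-readback workload  -t 10     run time in seconds
scripts/rpc.py -s /var/tmp/bdevperf.sock framework_wait_init
until [ "$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 |
           jq -r '.bdevs[0].num_read_ops')" -ge 100 ]; do
    sleep 0.25
done
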
00:21:32.066 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:21:32.066 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:32.066 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:21:32.066 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:32.066 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:32.066 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:32.066 { 00:21:32.066 "params": { 00:21:32.066 "name": "Nvme$subsystem", 00:21:32.066 "trtype": "$TEST_TRANSPORT", 00:21:32.066 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:32.066 "adrfam": "ipv4", 00:21:32.066 "trsvcid": "$NVMF_PORT", 00:21:32.066 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:32.066 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:32.066 "hdgst": ${hdgst:-false}, 00:21:32.066 "ddgst": ${ddgst:-false} 00:21:32.066 }, 00:21:32.066 "method": "bdev_nvme_attach_controller" 00:21:32.066 } 00:21:32.066 EOF 00:21:32.066 )") 00:21:32.066 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:32.066 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:32.066 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:32.066 { 00:21:32.066 "params": { 00:21:32.066 "name": "Nvme$subsystem", 00:21:32.066 "trtype": "$TEST_TRANSPORT", 00:21:32.066 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:32.066 "adrfam": "ipv4", 00:21:32.066 "trsvcid": "$NVMF_PORT", 00:21:32.066 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:32.066 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:32.066 "hdgst": ${hdgst:-false}, 00:21:32.066 "ddgst": ${ddgst:-false} 00:21:32.066 }, 00:21:32.066 "method": "bdev_nvme_attach_controller" 00:21:32.066 } 00:21:32.066 EOF 00:21:32.066 )") 00:21:32.066 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:32.066 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:32.066 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:32.066 { 00:21:32.066 "params": { 00:21:32.066 "name": "Nvme$subsystem", 00:21:32.066 "trtype": "$TEST_TRANSPORT", 00:21:32.066 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:32.066 "adrfam": "ipv4", 00:21:32.066 "trsvcid": "$NVMF_PORT", 00:21:32.066 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:32.066 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:32.066 "hdgst": ${hdgst:-false}, 00:21:32.066 "ddgst": ${ddgst:-false} 00:21:32.066 }, 00:21:32.066 "method": "bdev_nvme_attach_controller" 00:21:32.066 } 00:21:32.066 EOF 00:21:32.066 )") 00:21:32.066 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:32.066 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:32.066 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- 
# config+=("$(cat <<-EOF 00:21:32.066 { 00:21:32.066 "params": { 00:21:32.066 "name": "Nvme$subsystem", 00:21:32.066 "trtype": "$TEST_TRANSPORT", 00:21:32.066 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:32.066 "adrfam": "ipv4", 00:21:32.066 "trsvcid": "$NVMF_PORT", 00:21:32.066 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:32.066 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:32.066 "hdgst": ${hdgst:-false}, 00:21:32.066 "ddgst": ${ddgst:-false} 00:21:32.066 }, 00:21:32.066 "method": "bdev_nvme_attach_controller" 00:21:32.066 } 00:21:32.066 EOF 00:21:32.066 )") 00:21:32.066 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:32.066 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:32.066 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:32.066 { 00:21:32.066 "params": { 00:21:32.066 "name": "Nvme$subsystem", 00:21:32.066 "trtype": "$TEST_TRANSPORT", 00:21:32.066 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:32.066 "adrfam": "ipv4", 00:21:32.066 "trsvcid": "$NVMF_PORT", 00:21:32.066 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:32.066 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:32.066 "hdgst": ${hdgst:-false}, 00:21:32.066 "ddgst": ${ddgst:-false} 00:21:32.066 }, 00:21:32.066 "method": "bdev_nvme_attach_controller" 00:21:32.066 } 00:21:32.066 EOF 00:21:32.066 )") 00:21:32.066 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:32.066 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:32.066 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:32.066 { 00:21:32.066 "params": { 00:21:32.066 "name": "Nvme$subsystem", 00:21:32.066 "trtype": "$TEST_TRANSPORT", 00:21:32.066 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:32.066 "adrfam": "ipv4", 00:21:32.066 "trsvcid": "$NVMF_PORT", 00:21:32.066 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:32.066 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:32.066 "hdgst": ${hdgst:-false}, 00:21:32.066 "ddgst": ${ddgst:-false} 00:21:32.066 }, 00:21:32.066 "method": "bdev_nvme_attach_controller" 00:21:32.066 } 00:21:32.066 EOF 00:21:32.066 )") 00:21:32.066 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:32.066 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:32.066 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:32.066 { 00:21:32.066 "params": { 00:21:32.066 "name": "Nvme$subsystem", 00:21:32.066 "trtype": "$TEST_TRANSPORT", 00:21:32.066 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:32.066 "adrfam": "ipv4", 00:21:32.066 "trsvcid": "$NVMF_PORT", 00:21:32.066 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:32.066 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:32.066 "hdgst": ${hdgst:-false}, 00:21:32.066 "ddgst": ${ddgst:-false} 00:21:32.066 }, 00:21:32.066 "method": "bdev_nvme_attach_controller" 00:21:32.066 } 00:21:32.066 EOF 00:21:32.066 )") 00:21:32.066 [2024-07-24 18:16:24.915612] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 
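
gen_nvmf_target_json, whose loop is being traced here, emits one bdev_nvme_attach_controller stanza per subsystem by expanding a here-document inside a command substitution and pushing it onto a bash array. Stripped of the helper scaffolding, the pattern is (two iterations shown; the real run does 1..10 and fills the placeholders from the test environment):

config=()
for subsystem in 1 2; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
done
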
00:21:32.066 [2024-07-24 18:16:24.915661] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3467681 ] 00:21:32.066 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:32.066 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:32.066 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:32.066 { 00:21:32.066 "params": { 00:21:32.066 "name": "Nvme$subsystem", 00:21:32.066 "trtype": "$TEST_TRANSPORT", 00:21:32.066 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:32.066 "adrfam": "ipv4", 00:21:32.066 "trsvcid": "$NVMF_PORT", 00:21:32.066 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:32.066 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:32.066 "hdgst": ${hdgst:-false}, 00:21:32.066 "ddgst": ${ddgst:-false} 00:21:32.066 }, 00:21:32.066 "method": "bdev_nvme_attach_controller" 00:21:32.066 } 00:21:32.066 EOF 00:21:32.066 )") 00:21:32.066 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:32.066 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:32.066 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:32.066 { 00:21:32.066 "params": { 00:21:32.066 "name": "Nvme$subsystem", 00:21:32.066 "trtype": "$TEST_TRANSPORT", 00:21:32.066 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:32.066 "adrfam": "ipv4", 00:21:32.067 "trsvcid": "$NVMF_PORT", 00:21:32.067 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:32.067 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:32.067 "hdgst": ${hdgst:-false}, 00:21:32.067 "ddgst": ${ddgst:-false} 00:21:32.067 }, 00:21:32.067 "method": "bdev_nvme_attach_controller" 00:21:32.067 } 00:21:32.067 EOF 00:21:32.067 )") 00:21:32.067 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:32.067 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:32.067 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:32.067 { 00:21:32.067 "params": { 00:21:32.067 "name": "Nvme$subsystem", 00:21:32.067 "trtype": "$TEST_TRANSPORT", 00:21:32.067 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:32.067 "adrfam": "ipv4", 00:21:32.067 "trsvcid": "$NVMF_PORT", 00:21:32.067 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:32.067 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:32.067 "hdgst": ${hdgst:-false}, 00:21:32.067 "ddgst": ${ddgst:-false} 00:21:32.067 }, 00:21:32.067 "method": "bdev_nvme_attach_controller" 00:21:32.067 } 00:21:32.067 EOF 00:21:32.067 )") 00:21:32.067 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:32.067 EAL: No free 2048 kB hugepages reported on node 1 00:21:32.067 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
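
The '# jq .' just traced is the assembly step: the fragments are joined into one comma-separated string ("${config[*]}" concatenates array elements with the first character of IFS, hence the 'IFS=,' that follows) and the result is run through jq, which both validates and pretty-prints it. One plausible reading of @556-@558, with an illustrative "controllers" wrapper key since the real helper embeds the list deeper inside the bdev config:

IFS=,
printf '{ "controllers": [ %s ] }\n' "${config[*]}" | jq .
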
00:21:32.067 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:21:32.067 18:16:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:32.067 "params": { 00:21:32.067 "name": "Nvme1", 00:21:32.067 "trtype": "tcp", 00:21:32.067 "traddr": "10.0.0.2", 00:21:32.067 "adrfam": "ipv4", 00:21:32.067 "trsvcid": "4420", 00:21:32.067 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:32.067 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:32.067 "hdgst": false, 00:21:32.067 "ddgst": false 00:21:32.067 }, 00:21:32.067 "method": "bdev_nvme_attach_controller" 00:21:32.067 },{ 00:21:32.067 "params": { 00:21:32.067 "name": "Nvme2", 00:21:32.067 "trtype": "tcp", 00:21:32.067 "traddr": "10.0.0.2", 00:21:32.067 "adrfam": "ipv4", 00:21:32.067 "trsvcid": "4420", 00:21:32.067 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:32.067 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:32.067 "hdgst": false, 00:21:32.067 "ddgst": false 00:21:32.067 }, 00:21:32.067 "method": "bdev_nvme_attach_controller" 00:21:32.067 },{ 00:21:32.067 "params": { 00:21:32.067 "name": "Nvme3", 00:21:32.067 "trtype": "tcp", 00:21:32.067 "traddr": "10.0.0.2", 00:21:32.067 "adrfam": "ipv4", 00:21:32.067 "trsvcid": "4420", 00:21:32.067 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:32.067 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:32.067 "hdgst": false, 00:21:32.067 "ddgst": false 00:21:32.067 }, 00:21:32.067 "method": "bdev_nvme_attach_controller" 00:21:32.067 },{ 00:21:32.067 "params": { 00:21:32.067 "name": "Nvme4", 00:21:32.067 "trtype": "tcp", 00:21:32.067 "traddr": "10.0.0.2", 00:21:32.067 "adrfam": "ipv4", 00:21:32.067 "trsvcid": "4420", 00:21:32.067 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:32.067 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:32.067 "hdgst": false, 00:21:32.067 "ddgst": false 00:21:32.067 }, 00:21:32.067 "method": "bdev_nvme_attach_controller" 00:21:32.067 },{ 00:21:32.067 "params": { 00:21:32.067 "name": "Nvme5", 00:21:32.067 "trtype": "tcp", 00:21:32.067 "traddr": "10.0.0.2", 00:21:32.067 "adrfam": "ipv4", 00:21:32.067 "trsvcid": "4420", 00:21:32.067 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:32.067 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:32.067 "hdgst": false, 00:21:32.067 "ddgst": false 00:21:32.067 }, 00:21:32.067 "method": "bdev_nvme_attach_controller" 00:21:32.067 },{ 00:21:32.067 "params": { 00:21:32.067 "name": "Nvme6", 00:21:32.067 "trtype": "tcp", 00:21:32.067 "traddr": "10.0.0.2", 00:21:32.067 "adrfam": "ipv4", 00:21:32.067 "trsvcid": "4420", 00:21:32.067 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:32.067 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:32.067 "hdgst": false, 00:21:32.067 "ddgst": false 00:21:32.067 }, 00:21:32.067 "method": "bdev_nvme_attach_controller" 00:21:32.067 },{ 00:21:32.067 "params": { 00:21:32.067 "name": "Nvme7", 00:21:32.067 "trtype": "tcp", 00:21:32.067 "traddr": "10.0.0.2", 00:21:32.067 "adrfam": "ipv4", 00:21:32.067 "trsvcid": "4420", 00:21:32.067 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:32.067 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:32.067 "hdgst": false, 00:21:32.067 "ddgst": false 00:21:32.067 }, 00:21:32.067 "method": "bdev_nvme_attach_controller" 00:21:32.067 },{ 00:21:32.067 "params": { 00:21:32.067 "name": "Nvme8", 00:21:32.067 "trtype": "tcp", 00:21:32.067 "traddr": "10.0.0.2", 00:21:32.067 "adrfam": "ipv4", 00:21:32.067 "trsvcid": "4420", 00:21:32.067 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:32.067 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:21:32.067 "hdgst": false, 00:21:32.067 "ddgst": false 00:21:32.067 }, 00:21:32.067 "method": "bdev_nvme_attach_controller" 00:21:32.067 },{ 00:21:32.067 "params": { 00:21:32.067 "name": "Nvme9", 00:21:32.067 "trtype": "tcp", 00:21:32.067 "traddr": "10.0.0.2", 00:21:32.067 "adrfam": "ipv4", 00:21:32.067 "trsvcid": "4420", 00:21:32.067 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:32.067 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:32.067 "hdgst": false, 00:21:32.067 "ddgst": false 00:21:32.067 }, 00:21:32.067 "method": "bdev_nvme_attach_controller" 00:21:32.067 },{ 00:21:32.067 "params": { 00:21:32.067 "name": "Nvme10", 00:21:32.067 "trtype": "tcp", 00:21:32.067 "traddr": "10.0.0.2", 00:21:32.067 "adrfam": "ipv4", 00:21:32.067 "trsvcid": "4420", 00:21:32.067 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:32.067 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:32.067 "hdgst": false, 00:21:32.067 "ddgst": false 00:21:32.067 }, 00:21:32.067 "method": "bdev_nvme_attach_controller" 00:21:32.067 }' 00:21:32.067 [2024-07-24 18:16:24.973369] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:32.067 [2024-07-24 18:16:25.046454] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:33.450 Running I/O for 10 seconds... 00:21:33.450 18:16:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:33.450 18:16:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:21:33.450 18:16:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:33.450 18:16:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.450 18:16:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:33.450 18:16:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.450 18:16:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:33.450 18:16:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:33.450 18:16:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:21:33.450 18:16:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:21:33.450 18:16:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:21:33.450 18:16:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:21:33.450 18:16:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:33.450 18:16:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:33.451 18:16:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:33.451 18:16:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.451 18:16:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@10 -- # set +x 00:21:33.451 18:16:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.451 18:16:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:21:33.451 18:16:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:21:33.451 18:16:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:21:33.708 18:16:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:21:33.708 18:16:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:33.708 18:16:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:33.708 18:16:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:33.708 18:16:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:33.708 18:16:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:33.965 18:16:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:33.965 18:16:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=83 00:21:33.966 18:16:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 83 -ge 100 ']' 00:21:33.966 18:16:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:21:34.223 18:16:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:21:34.223 18:16:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:34.223 18:16:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:34.223 18:16:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:34.223 18:16:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.223 18:16:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:34.223 18:16:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.223 18:16:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=195 00:21:34.223 18:16:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 195 -ge 100 ']' 00:21:34.223 18:16:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:21:34.223 18:16:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:21:34.223 18:16:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:21:34.223 18:16:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- 
# killprocess 3467681 00:21:34.223 18:16:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 3467681 ']' 00:21:34.223 18:16:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 3467681 00:21:34.223 18:16:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:21:34.223 18:16:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:34.223 18:16:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3467681 00:21:34.223 18:16:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:34.223 18:16:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:34.223 18:16:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3467681' 00:21:34.224 killing process with pid 3467681 00:21:34.224 18:16:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 3467681 00:21:34.224 18:16:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 3467681 00:21:34.224 Received shutdown signal, test time was about 0.959087 seconds 00:21:34.224 00:21:34.224 Latency(us) 00:21:34.224 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:34.224 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:34.224 Verification LBA range: start 0x0 length 0x400 00:21:34.224 Nvme1n1 : 0.90 283.57 17.72 0.00 0.00 223426.56 14667.58 213709.78 00:21:34.224 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:34.224 Verification LBA range: start 0x0 length 0x400 00:21:34.224 Nvme2n1 : 0.91 280.50 17.53 0.00 0.00 222031.24 27962.03 208716.56 00:21:34.224 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:34.224 Verification LBA range: start 0x0 length 0x400 00:21:34.224 Nvme3n1 : 0.89 292.89 18.31 0.00 0.00 207063.51 7833.11 185747.75 00:21:34.224 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:34.224 Verification LBA range: start 0x0 length 0x400 00:21:34.224 Nvme4n1 : 0.88 289.66 18.10 0.00 0.00 207152.03 14105.84 203723.34 00:21:34.224 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:34.224 Verification LBA range: start 0x0 length 0x400 00:21:34.224 Nvme5n1 : 0.91 281.65 17.60 0.00 0.00 209566.72 20222.54 197731.47 00:21:34.224 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:34.224 Verification LBA range: start 0x0 length 0x400 00:21:34.224 Nvme6n1 : 0.90 285.24 17.83 0.00 0.00 202676.66 19473.55 203723.34 00:21:34.224 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:34.224 Verification LBA range: start 0x0 length 0x400 00:21:34.224 Nvme7n1 : 0.91 282.75 17.67 0.00 0.00 201040.58 13731.35 212711.13 00:21:34.224 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:34.224 Verification LBA range: start 0x0 length 0x400 00:21:34.224 Nvme8n1 : 0.91 279.92 17.50 0.00 0.00 199553.83 13419.28 217704.35 00:21:34.224 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO 
size: 65536) 00:21:34.224 Verification LBA range: start 0x0 length 0x400 00:21:34.224 Nvme9n1 : 0.92 278.28 17.39 0.00 0.00 197164.13 16852.11 219701.64 00:21:34.224 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:34.224 Verification LBA range: start 0x0 length 0x400 00:21:34.224 Nvme10n1 : 0.96 267.10 16.69 0.00 0.00 193700.57 17476.27 236678.58 00:21:34.224 =================================================================================================================== 00:21:34.224 Total : 2821.55 176.35 0.00 0.00 206338.72 7833.11 236678.58 00:21:34.481 18:16:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:21:35.411 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 3467402 00:21:35.411 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:21:35.411 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:21:35.668 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:35.668 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:35.668 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:21:35.668 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:35.668 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:21:35.668 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:35.668 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:21:35.668 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:35.669 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:35.669 rmmod nvme_tcp 00:21:35.669 rmmod nvme_fabrics 00:21:35.669 rmmod nvme_keyring 00:21:35.669 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:35.669 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:21:35.669 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:21:35.669 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 3467402 ']' 00:21:35.669 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 3467402 00:21:35.669 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 3467402 ']' 00:21:35.669 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 3467402 00:21:35.669 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:21:35.669 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:35.669 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3467402 00:21:35.669 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:35.669 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:35.669 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3467402' 00:21:35.669 killing process with pid 3467402 00:21:35.669 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 3467402 00:21:35.669 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 3467402 00:21:35.927 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:35.927 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:35.927 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:35.927 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:35.927 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:35.927 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:35.927 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:35.927 18:16:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:38.457 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:38.457 00:21:38.457 real 0m7.859s 00:21:38.457 user 0m23.729s 00:21:38.457 sys 0m1.342s 00:21:38.457 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:38.457 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:38.457 ************************************ 00:21:38.457 END TEST nvmf_shutdown_tc2 00:21:38.457 ************************************ 00:21:38.457 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:21:38.457 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:38.457 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:38.457 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:38.457 ************************************ 00:21:38.457 START TEST nvmf_shutdown_tc3 00:21:38.457 ************************************ 00:21:38.457 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc3 00:21:38.457 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # 
starttarget 00:21:38.457 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:21:38.457 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:38.457 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:38.457 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:38.457 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:38.457 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:38.457 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:38.457 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:38.457 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:38.457 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:38.457 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:38.457 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:38.457 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:38.457 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:38.457 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:38.457 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:38.457 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:38.457 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:38.457 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:38.457 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:38.457 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:21:38.457 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:38.457 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:21:38.457 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:21:38.457 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:21:38.457 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:21:38.457 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:21:38.457 18:16:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:38.457 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:38.457 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:38.457 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:38.457 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:38.457 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:38.457 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:38.457 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:38.458 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:38.458 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:38.458 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:38.458 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:38.458 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:38.458 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:38.458 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:38.458 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:38.458 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:38.458 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:38.458 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:38.458 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:38.458 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:38.458 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:38.458 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:38.458 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:38.458 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:38.458 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:38.458 18:16:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:38.458 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:38.458 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:38.458 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:38.458 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:38.458 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:38.458 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:38.458 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:38.458 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:38.458 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:38.458 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:38.458 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:38.458 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:38.458 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:38.458 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:38.458 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:38.458 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:38.458 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:38.458 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:38.458 Found net devices under 0000:86:00.0: cvl_0_0 00:21:38.458 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:38.458 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:38.458 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:38.458 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:38.458 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:38.458 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:38.458 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:38.458 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:38.458 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:38.458 Found net devices under 0000:86:00.1: cvl_0_1 00:21:38.458 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:38.458 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:38.458 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:21:38.458 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:38.458 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:38.458 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:38.458 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:38.458 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:38.458 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:38.458 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:38.458 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:38.458 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:38.458 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:38.458 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:38.458 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:38.458 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:38.458 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:38.458 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:38.459 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:38.459 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:38.459 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:38.459 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:38.459 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:38.459 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:38.459 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:38.459 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:38.459 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:38.459 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:21:38.459 00:21:38.459 --- 10.0.0.2 ping statistics --- 00:21:38.459 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:38.459 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:21:38.459 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:38.459 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:38.459 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:21:38.459 00:21:38.459 --- 10.0.0.1 ping statistics --- 00:21:38.459 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:38.459 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:21:38.459 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:38.459 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:21:38.459 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:38.459 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:38.459 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:38.459 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:38.459 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:38.459 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:38.459 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:38.459 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:21:38.459 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:38.459 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:38.459 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:38.459 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=3468951 00:21:38.459 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 3468951 00:21:38.459 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:38.459 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- 
# '[' -z 3468951 ']' 00:21:38.459 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:38.459 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:38.459 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:38.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:38.459 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:38.459 18:16:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:38.459 [2024-07-24 18:16:31.506467] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 00:21:38.459 [2024-07-24 18:16:31.506515] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:38.459 EAL: No free 2048 kB hugepages reported on node 1 00:21:38.717 [2024-07-24 18:16:31.564390] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:38.717 [2024-07-24 18:16:31.636283] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:38.717 [2024-07-24 18:16:31.636324] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:38.717 [2024-07-24 18:16:31.636331] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:38.717 [2024-07-24 18:16:31.636337] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:38.717 [2024-07-24 18:16:31.636342] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
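The nvmf_tcp_init sequence traced above is the core of how these phy tests isolate target and initiator on one host: one port of the E810 NIC (cvl_0_0) is moved into a private network namespace, cvl_0_0_ns_spdk, and the target runs entirely inside it, while the peer port (cvl_0_1) stays in the root namespace as the initiator side. A minimal standalone sketch of the same plumbing, run as root, with hypothetical interface names TGT_IF/INI_IF standing in for the cvl_* devices:

```bash
#!/usr/bin/env bash
# Minimal sketch of the netns isolation done by nvmftestinit/nvmf_tcp_init.
# TGT_IF and INI_IF are hypothetical names; substitute the two ports of the
# NIC under test (cvl_0_0 and cvl_0_1 in the trace above).
set -e
NS=nvmf_tgt_ns
TGT_IF=tgt0
INI_IF=ini0

ip netns add "$NS"                        # private namespace for the target
ip link set "$TGT_IF" netns "$NS"         # hide the target port inside it
ip addr add 10.0.0.1/24 dev "$INI_IF"     # initiator side, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
# let NVMe/TCP traffic (port 4420) in from the initiator interface
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
# verify reachability in both directions before starting the target
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
```

Because the target lives inside the namespace, it has to be launched as `ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt`, which is exactly the prefix visible on the nvmf_tgt command line above (NVMF_TARGET_NS_CMD prepended to NVMF_APP).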
00:21:38.717 [2024-07-24 18:16:31.636443] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:38.717 [2024-07-24 18:16:31.636517] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:38.717 [2024-07-24 18:16:31.636627] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:38.717 [2024-07-24 18:16:31.636628] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:21:39.280 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:39.280 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:21:39.280 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:39.280 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:39.280 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:39.280 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:39.280 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:39.280 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.280 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:39.280 [2024-07-24 18:16:32.330634] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:39.280 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.280 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:21:39.280 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:21:39.280 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:39.280 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:39.280 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:39.280 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:39.280 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:39.280 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:39.280 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:39.280 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:39.280 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:39.280 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in 
"${num_subsystems[@]}" 00:21:39.280 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:39.280 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:39.280 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:39.537 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:39.537 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:39.537 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:39.537 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:39.538 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:39.538 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:39.538 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:39.538 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:39.538 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:39.538 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:39.538 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:21:39.538 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:39.538 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:39.538 Malloc1 00:21:39.538 [2024-07-24 18:16:32.426194] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:39.538 Malloc2 00:21:39.538 Malloc3 00:21:39.538 Malloc4 00:21:39.538 Malloc5 00:21:39.538 Malloc6 00:21:39.795 Malloc7 00:21:39.795 Malloc8 00:21:39.795 Malloc9 00:21:39.795 Malloc10 00:21:39.795 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:39.795 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:21:39.795 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:39.795 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:39.795 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=3469227 00:21:39.795 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 3469227 /var/tmp/bdevperf.sock 00:21:39.795 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 3469227 ']' 00:21:39.795 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:39.795 18:16:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:39.795 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:39.795 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:39.795 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:39.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:39.795 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:39.795 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:21:39.795 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:39.795 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:21:39.795 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:39.795 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:39.795 { 00:21:39.795 "params": { 00:21:39.795 "name": "Nvme$subsystem", 00:21:39.795 "trtype": "$TEST_TRANSPORT", 00:21:39.795 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:39.795 "adrfam": "ipv4", 00:21:39.795 "trsvcid": "$NVMF_PORT", 00:21:39.795 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:39.795 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:39.795 "hdgst": ${hdgst:-false}, 00:21:39.795 "ddgst": ${ddgst:-false} 00:21:39.795 }, 00:21:39.795 "method": "bdev_nvme_attach_controller" 00:21:39.795 } 00:21:39.795 EOF 00:21:39.795 )") 00:21:39.795 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:39.795 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:39.795 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:39.795 { 00:21:39.795 "params": { 00:21:39.795 "name": "Nvme$subsystem", 00:21:39.795 "trtype": "$TEST_TRANSPORT", 00:21:39.795 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:39.795 "adrfam": "ipv4", 00:21:39.795 "trsvcid": "$NVMF_PORT", 00:21:39.795 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:39.795 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:39.795 "hdgst": ${hdgst:-false}, 00:21:39.795 "ddgst": ${ddgst:-false} 00:21:39.795 }, 00:21:39.795 "method": "bdev_nvme_attach_controller" 00:21:39.795 } 00:21:39.795 EOF 00:21:39.795 )") 00:21:39.795 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:39.795 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:39.795 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:39.795 { 00:21:39.795 "params": { 00:21:39.795 
"name": "Nvme$subsystem", 00:21:39.795 "trtype": "$TEST_TRANSPORT", 00:21:39.795 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:39.795 "adrfam": "ipv4", 00:21:39.795 "trsvcid": "$NVMF_PORT", 00:21:39.795 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:39.795 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:39.795 "hdgst": ${hdgst:-false}, 00:21:39.795 "ddgst": ${ddgst:-false} 00:21:39.795 }, 00:21:39.795 "method": "bdev_nvme_attach_controller" 00:21:39.795 } 00:21:39.795 EOF 00:21:39.795 )") 00:21:39.795 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:39.795 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:39.795 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:39.795 { 00:21:39.795 "params": { 00:21:39.795 "name": "Nvme$subsystem", 00:21:39.795 "trtype": "$TEST_TRANSPORT", 00:21:39.795 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:39.795 "adrfam": "ipv4", 00:21:39.795 "trsvcid": "$NVMF_PORT", 00:21:39.795 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:39.795 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:39.795 "hdgst": ${hdgst:-false}, 00:21:39.795 "ddgst": ${ddgst:-false} 00:21:39.795 }, 00:21:39.795 "method": "bdev_nvme_attach_controller" 00:21:39.795 } 00:21:39.795 EOF 00:21:39.795 )") 00:21:39.795 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:40.052 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:40.052 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:40.052 { 00:21:40.052 "params": { 00:21:40.052 "name": "Nvme$subsystem", 00:21:40.052 "trtype": "$TEST_TRANSPORT", 00:21:40.052 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:40.052 "adrfam": "ipv4", 00:21:40.052 "trsvcid": "$NVMF_PORT", 00:21:40.052 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:40.052 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:40.052 "hdgst": ${hdgst:-false}, 00:21:40.052 "ddgst": ${ddgst:-false} 00:21:40.052 }, 00:21:40.052 "method": "bdev_nvme_attach_controller" 00:21:40.052 } 00:21:40.052 EOF 00:21:40.052 )") 00:21:40.052 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:40.052 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:40.052 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:40.053 { 00:21:40.053 "params": { 00:21:40.053 "name": "Nvme$subsystem", 00:21:40.053 "trtype": "$TEST_TRANSPORT", 00:21:40.053 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:40.053 "adrfam": "ipv4", 00:21:40.053 "trsvcid": "$NVMF_PORT", 00:21:40.053 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:40.053 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:40.053 "hdgst": ${hdgst:-false}, 00:21:40.053 "ddgst": ${ddgst:-false} 00:21:40.053 }, 00:21:40.053 "method": "bdev_nvme_attach_controller" 00:21:40.053 } 00:21:40.053 EOF 00:21:40.053 )") 00:21:40.053 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:40.053 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in 
"${@:-1}" 00:21:40.053 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:40.053 { 00:21:40.053 "params": { 00:21:40.053 "name": "Nvme$subsystem", 00:21:40.053 "trtype": "$TEST_TRANSPORT", 00:21:40.053 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:40.053 "adrfam": "ipv4", 00:21:40.053 "trsvcid": "$NVMF_PORT", 00:21:40.053 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:40.053 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:40.053 "hdgst": ${hdgst:-false}, 00:21:40.053 "ddgst": ${ddgst:-false} 00:21:40.053 }, 00:21:40.053 "method": "bdev_nvme_attach_controller" 00:21:40.053 } 00:21:40.053 EOF 00:21:40.053 )") 00:21:40.053 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:40.053 [2024-07-24 18:16:32.896632] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 00:21:40.053 [2024-07-24 18:16:32.896682] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3469227 ] 00:21:40.053 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:40.053 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:40.053 { 00:21:40.053 "params": { 00:21:40.053 "name": "Nvme$subsystem", 00:21:40.053 "trtype": "$TEST_TRANSPORT", 00:21:40.053 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:40.053 "adrfam": "ipv4", 00:21:40.053 "trsvcid": "$NVMF_PORT", 00:21:40.053 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:40.053 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:40.053 "hdgst": ${hdgst:-false}, 00:21:40.053 "ddgst": ${ddgst:-false} 00:21:40.053 }, 00:21:40.053 "method": "bdev_nvme_attach_controller" 00:21:40.053 } 00:21:40.053 EOF 00:21:40.053 )") 00:21:40.053 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:40.053 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:40.053 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:40.053 { 00:21:40.053 "params": { 00:21:40.053 "name": "Nvme$subsystem", 00:21:40.053 "trtype": "$TEST_TRANSPORT", 00:21:40.053 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:40.053 "adrfam": "ipv4", 00:21:40.053 "trsvcid": "$NVMF_PORT", 00:21:40.053 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:40.053 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:40.053 "hdgst": ${hdgst:-false}, 00:21:40.053 "ddgst": ${ddgst:-false} 00:21:40.053 }, 00:21:40.053 "method": "bdev_nvme_attach_controller" 00:21:40.053 } 00:21:40.053 EOF 00:21:40.053 )") 00:21:40.053 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:40.053 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:40.053 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:40.053 { 00:21:40.053 "params": { 00:21:40.053 "name": "Nvme$subsystem", 00:21:40.053 "trtype": "$TEST_TRANSPORT", 00:21:40.053 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:40.053 
"adrfam": "ipv4", 00:21:40.053 "trsvcid": "$NVMF_PORT", 00:21:40.053 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:40.053 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:40.053 "hdgst": ${hdgst:-false}, 00:21:40.053 "ddgst": ${ddgst:-false} 00:21:40.053 }, 00:21:40.053 "method": "bdev_nvme_attach_controller" 00:21:40.053 } 00:21:40.053 EOF 00:21:40.053 )") 00:21:40.053 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:40.053 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:21:40.053 EAL: No free 2048 kB hugepages reported on node 1 00:21:40.053 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:21:40.053 18:16:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:40.053 "params": { 00:21:40.053 "name": "Nvme1", 00:21:40.053 "trtype": "tcp", 00:21:40.053 "traddr": "10.0.0.2", 00:21:40.053 "adrfam": "ipv4", 00:21:40.053 "trsvcid": "4420", 00:21:40.053 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:40.053 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:40.053 "hdgst": false, 00:21:40.053 "ddgst": false 00:21:40.053 }, 00:21:40.053 "method": "bdev_nvme_attach_controller" 00:21:40.053 },{ 00:21:40.053 "params": { 00:21:40.053 "name": "Nvme2", 00:21:40.053 "trtype": "tcp", 00:21:40.053 "traddr": "10.0.0.2", 00:21:40.053 "adrfam": "ipv4", 00:21:40.053 "trsvcid": "4420", 00:21:40.053 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:40.053 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:40.053 "hdgst": false, 00:21:40.053 "ddgst": false 00:21:40.053 }, 00:21:40.053 "method": "bdev_nvme_attach_controller" 00:21:40.053 },{ 00:21:40.054 "params": { 00:21:40.054 "name": "Nvme3", 00:21:40.054 "trtype": "tcp", 00:21:40.054 "traddr": "10.0.0.2", 00:21:40.054 "adrfam": "ipv4", 00:21:40.054 "trsvcid": "4420", 00:21:40.054 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:40.054 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:40.054 "hdgst": false, 00:21:40.054 "ddgst": false 00:21:40.054 }, 00:21:40.054 "method": "bdev_nvme_attach_controller" 00:21:40.054 },{ 00:21:40.054 "params": { 00:21:40.054 "name": "Nvme4", 00:21:40.054 "trtype": "tcp", 00:21:40.054 "traddr": "10.0.0.2", 00:21:40.054 "adrfam": "ipv4", 00:21:40.054 "trsvcid": "4420", 00:21:40.054 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:40.054 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:40.054 "hdgst": false, 00:21:40.054 "ddgst": false 00:21:40.054 }, 00:21:40.054 "method": "bdev_nvme_attach_controller" 00:21:40.054 },{ 00:21:40.054 "params": { 00:21:40.054 "name": "Nvme5", 00:21:40.054 "trtype": "tcp", 00:21:40.054 "traddr": "10.0.0.2", 00:21:40.054 "adrfam": "ipv4", 00:21:40.054 "trsvcid": "4420", 00:21:40.054 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:40.054 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:40.054 "hdgst": false, 00:21:40.054 "ddgst": false 00:21:40.054 }, 00:21:40.054 "method": "bdev_nvme_attach_controller" 00:21:40.054 },{ 00:21:40.054 "params": { 00:21:40.054 "name": "Nvme6", 00:21:40.054 "trtype": "tcp", 00:21:40.054 "traddr": "10.0.0.2", 00:21:40.054 "adrfam": "ipv4", 00:21:40.054 "trsvcid": "4420", 00:21:40.054 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:40.054 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:40.054 "hdgst": false, 00:21:40.054 "ddgst": false 00:21:40.054 }, 00:21:40.054 "method": "bdev_nvme_attach_controller" 00:21:40.054 },{ 00:21:40.054 "params": { 00:21:40.054 "name": "Nvme7", 
00:21:40.054 "trtype": "tcp", 00:21:40.054 "traddr": "10.0.0.2", 00:21:40.054 "adrfam": "ipv4", 00:21:40.054 "trsvcid": "4420", 00:21:40.054 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:40.054 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:40.054 "hdgst": false, 00:21:40.054 "ddgst": false 00:21:40.054 }, 00:21:40.054 "method": "bdev_nvme_attach_controller" 00:21:40.054 },{ 00:21:40.054 "params": { 00:21:40.054 "name": "Nvme8", 00:21:40.054 "trtype": "tcp", 00:21:40.054 "traddr": "10.0.0.2", 00:21:40.054 "adrfam": "ipv4", 00:21:40.054 "trsvcid": "4420", 00:21:40.054 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:40.054 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:40.054 "hdgst": false, 00:21:40.054 "ddgst": false 00:21:40.054 }, 00:21:40.054 "method": "bdev_nvme_attach_controller" 00:21:40.054 },{ 00:21:40.054 "params": { 00:21:40.054 "name": "Nvme9", 00:21:40.054 "trtype": "tcp", 00:21:40.054 "traddr": "10.0.0.2", 00:21:40.054 "adrfam": "ipv4", 00:21:40.054 "trsvcid": "4420", 00:21:40.054 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:40.054 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:40.054 "hdgst": false, 00:21:40.054 "ddgst": false 00:21:40.054 }, 00:21:40.054 "method": "bdev_nvme_attach_controller" 00:21:40.054 },{ 00:21:40.054 "params": { 00:21:40.054 "name": "Nvme10", 00:21:40.054 "trtype": "tcp", 00:21:40.054 "traddr": "10.0.0.2", 00:21:40.054 "adrfam": "ipv4", 00:21:40.054 "trsvcid": "4420", 00:21:40.054 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:40.054 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:40.054 "hdgst": false, 00:21:40.054 "ddgst": false 00:21:40.054 }, 00:21:40.054 "method": "bdev_nvme_attach_controller" 00:21:40.054 }' 00:21:40.054 [2024-07-24 18:16:32.952948] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:40.054 [2024-07-24 18:16:33.026079] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:41.423 Running I/O for 10 seconds... 
00:21:41.423 18:16:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:41.423 18:16:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:21:41.423 18:16:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:41.423 18:16:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.423 18:16:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:41.423 18:16:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.423 18:16:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:41.423 18:16:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:41.423 18:16:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:41.423 18:16:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:21:41.423 18:16:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:21:41.423 18:16:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:21:41.423 18:16:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:21:41.423 18:16:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:41.423 18:16:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:41.423 18:16:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:41.423 18:16:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.423 18:16:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:41.423 18:16:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.423 18:16:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:21:41.423 18:16:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:21:41.423 18:16:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:21:41.681 18:16:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:21:41.681 18:16:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:41.681 18:16:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:41.681 18:16:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.681 18:16:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:41.681 18:16:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:41.681 18:16:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.681 18:16:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=84 00:21:41.681 18:16:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 84 -ge 100 ']' 00:21:41.681 18:16:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:21:41.938 18:16:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:21:41.938 18:16:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:41.939 18:16:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:41.939 18:16:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:41.939 18:16:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.939 18:16:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:41.939 18:16:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.939 18:16:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=195 00:21:41.939 18:16:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 195 -ge 100 ']' 00:21:41.939 18:16:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:21:41.939 18:16:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:21:41.939 18:16:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:21:41.939 18:16:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 3468951 00:21:41.939 18:16:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 3468951 ']' 00:21:41.939 18:16:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 3468951 00:21:41.939 18:16:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # uname 00:21:41.939 18:16:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:41.939 18:16:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3468951 00:21:42.209 18:16:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:42.209 18:16:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:42.209 18:16:35 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3468951' 00:21:42.209 killing process with pid 3468951 00:21:42.209 18:16:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@969 -- # kill 3468951 00:21:42.209 18:16:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@974 -- # wait 3468951 00:21:42.209 [2024-07-24 18:16:35.047556] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4180 is same with the state(5) to be set 00:21:42.209 [2024-07-24 18:16:35.047629] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4180 is same with the state(5) to be set 00:21:42.209 [2024-07-24 18:16:35.047637] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4180 is same with the state(5) to be set 00:21:42.209 [2024-07-24 18:16:35.047644] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4180 is same with the state(5) to be set 00:21:42.209 [2024-07-24 18:16:35.047650] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4180 is same with the state(5) to be set 00:21:42.209 [2024-07-24 18:16:35.047657] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4180 is same with the state(5) to be set 00:21:42.209 [2024-07-24 18:16:35.047663] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4180 is same with the state(5) to be set 00:21:42.209 [2024-07-24 18:16:35.047670] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4180 is same with the state(5) to be set 00:21:42.209 [2024-07-24 18:16:35.047676] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4180 is same with the state(5) to be set 00:21:42.209 [2024-07-24 18:16:35.047682] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4180 is same with the state(5) to be set 00:21:42.209 [2024-07-24 18:16:35.047687] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4180 is same with the state(5) to be set 00:21:42.209 [2024-07-24 18:16:35.047693] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4180 is same with the state(5) to be set 00:21:42.209 [2024-07-24 18:16:35.047699] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4180 is same with the state(5) to be set 00:21:42.209 [2024-07-24 18:16:35.047707] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4180 is same with the state(5) to be set 00:21:42.209 [2024-07-24 18:16:35.047713] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4180 is same with the state(5) to be set 00:21:42.209 [2024-07-24 18:16:35.047718] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4180 is same with the state(5) to be set 00:21:42.209 [2024-07-24 18:16:35.047724] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4180 is same with the state(5) to be set 00:21:42.210 [2024-07-24 18:16:35.047730] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4180 is same with the state(5) to be set 00:21:42.210 [2024-07-24 18:16:35.047736] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4180 is same with the state(5) 
to be set 00:21:42.210 [2024-07-24 18:16:35.047741] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4180 is same with the state(5) to be set 00:21:42.210 [... the same nvmf_tcp_qpair_set_recv_state error repeats for tqpair=0x1fe4180 through 18:16:35.048011 ...] 00:21:42.210 [2024-07-24 18:16:35.049455] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4640 is same with the state(5) to be set 00:21:42.210 [... the same error repeats for tqpair=0x1fe4640 until the capture cuts off ...] [2024-07-24 18:16:35.049752] tcp.c:1653:nvmf_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x1fe4640 is same with the state(5) to be set 00:21:42.211 [2024-07-24 18:16:35.049758] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4640 is same with the state(5) to be set 00:21:42.211 [2024-07-24 18:16:35.049764] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4640 is same with the state(5) to be set 00:21:42.211 [2024-07-24 18:16:35.049770] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4640 is same with the state(5) to be set 00:21:42.211 [2024-07-24 18:16:35.049776] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4640 is same with the state(5) to be set 00:21:42.211 [2024-07-24 18:16:35.049782] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4640 is same with the state(5) to be set 00:21:42.211 [2024-07-24 18:16:35.049787] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4640 is same with the state(5) to be set 00:21:42.211 [2024-07-24 18:16:35.049793] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4640 is same with the state(5) to be set 00:21:42.211 [2024-07-24 18:16:35.049800] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4640 is same with the state(5) to be set 00:21:42.211 [2024-07-24 18:16:35.049806] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4640 is same with the state(5) to be set 00:21:42.211 [2024-07-24 18:16:35.049812] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4640 is same with the state(5) to be set 00:21:42.211 [2024-07-24 18:16:35.049818] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4640 is same with the state(5) to be set 00:21:42.211 [2024-07-24 18:16:35.049824] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4640 is same with the state(5) to be set 00:21:42.211 [2024-07-24 18:16:35.049830] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4640 is same with the state(5) to be set 00:21:42.211 [2024-07-24 18:16:35.049836] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4640 is same with the state(5) to be set 00:21:42.211 [2024-07-24 18:16:35.049842] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4640 is same with the state(5) to be set 00:21:42.211 [2024-07-24 18:16:35.049848] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4640 is same with the state(5) to be set 00:21:42.211 [2024-07-24 18:16:35.049854] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4640 is same with the state(5) to be set 00:21:42.211 [2024-07-24 18:16:35.049860] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4640 is same with the state(5) to be set 00:21:42.211 [2024-07-24 18:16:35.049866] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4640 is same with the state(5) to be set 00:21:42.211 [2024-07-24 18:16:35.049872] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4640 is same with the state(5) to be set 00:21:42.211 [2024-07-24 18:16:35.050861] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4b00 is same with the state(5) to be set 00:21:42.211 [2024-07-24 
18:16:35.050884] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4b00 is same with the state(5) to be set 00:21:42.211 [2024-07-24 18:16:35.050896] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4b00 is same with the state(5) to be set 00:21:42.211 [2024-07-24 18:16:35.050903] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4b00 is same with the state(5) to be set 00:21:42.211 [2024-07-24 18:16:35.050909] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4b00 is same with the state(5) to be set 00:21:42.211 [2024-07-24 18:16:35.050915] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4b00 is same with the state(5) to be set 00:21:42.211 [2024-07-24 18:16:35.050921] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4b00 is same with the state(5) to be set 00:21:42.211 [2024-07-24 18:16:35.050926] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4b00 is same with the state(5) to be set 00:21:42.211 [2024-07-24 18:16:35.050932] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4b00 is same with the state(5) to be set 00:21:42.211 [2024-07-24 18:16:35.050938] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4b00 is same with the state(5) to be set 00:21:42.211 [2024-07-24 18:16:35.050944] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4b00 is same with the state(5) to be set 00:21:42.211 [2024-07-24 18:16:35.050950] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4b00 is same with the state(5) to be set 00:21:42.211 [2024-07-24 18:16:35.050955] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4b00 is same with the state(5) to be set 00:21:42.211 [2024-07-24 18:16:35.050961] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4b00 is same with the state(5) to be set 00:21:42.211 [2024-07-24 18:16:35.050967] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4b00 is same with the state(5) to be set 00:21:42.211 [2024-07-24 18:16:35.050973] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4b00 is same with the state(5) to be set 00:21:42.211 [2024-07-24 18:16:35.050978] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4b00 is same with the state(5) to be set 00:21:42.211 [2024-07-24 18:16:35.050984] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4b00 is same with the state(5) to be set 00:21:42.211 [2024-07-24 18:16:35.050990] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4b00 is same with the state(5) to be set 00:21:42.211 [2024-07-24 18:16:35.050995] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4b00 is same with the state(5) to be set 00:21:42.211 [2024-07-24 18:16:35.051001] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4b00 is same with the state(5) to be set 00:21:42.211 [2024-07-24 18:16:35.051007] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4b00 is same with the state(5) to be set 00:21:42.211 [2024-07-24 18:16:35.051014] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4b00 is same 
with the state(5) to be set 00:21:42.211 [2024-07-24 18:16:35.051019] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4b00 is same with the state(5) to be set 00:21:42.211 [2024-07-24 18:16:35.051025] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4b00 is same with the state(5) to be set 00:21:42.211 [2024-07-24 18:16:35.051032] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4b00 is same with the state(5) to be set 00:21:42.211 [2024-07-24 18:16:35.051038] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4b00 is same with the state(5) to be set 00:21:42.211 [2024-07-24 18:16:35.051043] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4b00 is same with the state(5) to be set 00:21:42.211 [2024-07-24 18:16:35.051049] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4b00 is same with the state(5) to be set 00:21:42.211 [2024-07-24 18:16:35.051056] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4b00 is same with the state(5) to be set 00:21:42.211 [2024-07-24 18:16:35.051062] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4b00 is same with the state(5) to be set 00:21:42.211 [2024-07-24 18:16:35.051068] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4b00 is same with the state(5) to be set 00:21:42.211 [2024-07-24 18:16:35.051074] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4b00 is same with the state(5) to be set 00:21:42.211 [2024-07-24 18:16:35.051080] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4b00 is same with the state(5) to be set 00:21:42.211 [2024-07-24 18:16:35.051086] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4b00 is same with the state(5) to be set 00:21:42.211 [2024-07-24 18:16:35.051092] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4b00 is same with the state(5) to be set 00:21:42.211 [2024-07-24 18:16:35.051097] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4b00 is same with the state(5) to be set 00:21:42.211 [2024-07-24 18:16:35.051103] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4b00 is same with the state(5) to be set 00:21:42.211 [2024-07-24 18:16:35.051109] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4b00 is same with the state(5) to be set 00:21:42.211 [2024-07-24 18:16:35.051114] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4b00 is same with the state(5) to be set 00:21:42.211 [2024-07-24 18:16:35.051119] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4b00 is same with the state(5) to be set 00:21:42.211 [2024-07-24 18:16:35.051125] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4b00 is same with the state(5) to be set 00:21:42.211 [2024-07-24 18:16:35.051131] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4b00 is same with the state(5) to be set 00:21:42.211 [2024-07-24 18:16:35.051137] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4b00 is same with the state(5) to be set 00:21:42.211 [2024-07-24 18:16:35.051143] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4b00 is same with the state(5) to be set 00:21:42.212 [2024-07-24 18:16:35.051148] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4b00 is same with the state(5) to be set 00:21:42.212 [2024-07-24 18:16:35.051154] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4b00 is same with the state(5) to be set 00:21:42.212 [2024-07-24 18:16:35.051159] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4b00 is same with the state(5) to be set 00:21:42.212 [2024-07-24 18:16:35.051164] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4b00 is same with the state(5) to be set 00:21:42.212 [2024-07-24 18:16:35.051170] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4b00 is same with the state(5) to be set 00:21:42.212 [2024-07-24 18:16:35.051175] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4b00 is same with the state(5) to be set 00:21:42.212 [2024-07-24 18:16:35.051181] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4b00 is same with the state(5) to be set 00:21:42.212 [2024-07-24 18:16:35.052030] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4fe0 is same with the state(5) to be set 00:21:42.212 [2024-07-24 18:16:35.052056] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4fe0 is same with the state(5) to be set 00:21:42.212 [2024-07-24 18:16:35.052063] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4fe0 is same with the state(5) to be set 00:21:42.212 [2024-07-24 18:16:35.052070] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4fe0 is same with the state(5) to be set 00:21:42.212 [2024-07-24 18:16:35.052075] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4fe0 is same with the state(5) to be set 00:21:42.212 [2024-07-24 18:16:35.052085] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4fe0 is same with the state(5) to be set 00:21:42.212 [2024-07-24 18:16:35.052091] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4fe0 is same with the state(5) to be set 00:21:42.212 [2024-07-24 18:16:35.052097] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4fe0 is same with the state(5) to be set 00:21:42.212 [2024-07-24 18:16:35.052103] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4fe0 is same with the state(5) to be set 00:21:42.212 [2024-07-24 18:16:35.052109] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4fe0 is same with the state(5) to be set 00:21:42.212 [2024-07-24 18:16:35.052115] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4fe0 is same with the state(5) to be set 00:21:42.212 [2024-07-24 18:16:35.052120] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4fe0 is same with the state(5) to be set 00:21:42.212 [2024-07-24 18:16:35.052127] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4fe0 is same with the state(5) to be set 00:21:42.212 [2024-07-24 18:16:35.052132] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4fe0 is same with the 
state(5) to be set 00:21:42.212 [2024-07-24 18:16:35.052139] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4fe0 is same with the state(5) to be set 00:21:42.212 [2024-07-24 18:16:35.052145] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4fe0 is same with the state(5) to be set 00:21:42.212 [2024-07-24 18:16:35.052150] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4fe0 is same with the state(5) to be set 00:21:42.212 [2024-07-24 18:16:35.052156] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4fe0 is same with the state(5) to be set 00:21:42.212 [2024-07-24 18:16:35.052162] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4fe0 is same with the state(5) to be set 00:21:42.212 [2024-07-24 18:16:35.052168] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4fe0 is same with the state(5) to be set 00:21:42.212 [2024-07-24 18:16:35.052174] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4fe0 is same with the state(5) to be set 00:21:42.212 [2024-07-24 18:16:35.052180] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4fe0 is same with the state(5) to be set 00:21:42.212 [2024-07-24 18:16:35.052186] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4fe0 is same with the state(5) to be set 00:21:42.212 [2024-07-24 18:16:35.052192] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4fe0 is same with the state(5) to be set 00:21:42.212 [2024-07-24 18:16:35.052197] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4fe0 is same with the state(5) to be set 00:21:42.212 [2024-07-24 18:16:35.052203] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4fe0 is same with the state(5) to be set 00:21:42.212 [2024-07-24 18:16:35.052209] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4fe0 is same with the state(5) to be set 00:21:42.212 [2024-07-24 18:16:35.052214] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4fe0 is same with the state(5) to be set 00:21:42.212 [2024-07-24 18:16:35.052219] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4fe0 is same with the state(5) to be set 00:21:42.212 [2024-07-24 18:16:35.052225] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4fe0 is same with the state(5) to be set 00:21:42.212 [2024-07-24 18:16:35.052230] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4fe0 is same with the state(5) to be set 00:21:42.212 [2024-07-24 18:16:35.052237] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4fe0 is same with the state(5) to be set 00:21:42.212 [2024-07-24 18:16:35.052245] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4fe0 is same with the state(5) to be set 00:21:42.212 [2024-07-24 18:16:35.052251] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4fe0 is same with the state(5) to be set 00:21:42.212 [2024-07-24 18:16:35.052257] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4fe0 is same with the state(5) to be set 00:21:42.212 [2024-07-24 18:16:35.052263] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1fe4fe0 is same with the state(5) to be set 00:21:42.212 [2024-07-24 18:16:35.052268] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4fe0 is same with the state(5) to be set 00:21:42.212 [2024-07-24 18:16:35.052274] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4fe0 is same with the state(5) to be set 00:21:42.212 [2024-07-24 18:16:35.052280] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4fe0 is same with the state(5) to be set 00:21:42.212 [2024-07-24 18:16:35.052285] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4fe0 is same with the state(5) to be set 00:21:42.212 [2024-07-24 18:16:35.052291] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4fe0 is same with the state(5) to be set 00:21:42.212 [2024-07-24 18:16:35.052297] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4fe0 is same with the state(5) to be set 00:21:42.212 [2024-07-24 18:16:35.052303] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4fe0 is same with the state(5) to be set 00:21:42.212 [2024-07-24 18:16:35.052309] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4fe0 is same with the state(5) to be set 00:21:42.212 [2024-07-24 18:16:35.052315] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4fe0 is same with the state(5) to be set 00:21:42.212 [2024-07-24 18:16:35.052321] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4fe0 is same with the state(5) to be set 00:21:42.212 [2024-07-24 18:16:35.052326] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4fe0 is same with the state(5) to be set 00:21:42.212 [2024-07-24 18:16:35.052332] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4fe0 is same with the state(5) to be set 00:21:42.212 [2024-07-24 18:16:35.052337] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4fe0 is same with the state(5) to be set 00:21:42.212 [2024-07-24 18:16:35.052343] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4fe0 is same with the state(5) to be set 00:21:42.212 [2024-07-24 18:16:35.052350] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4fe0 is same with the state(5) to be set 00:21:42.212 [2024-07-24 18:16:35.052356] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4fe0 is same with the state(5) to be set 00:21:42.212 [2024-07-24 18:16:35.052362] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4fe0 is same with the state(5) to be set 00:21:42.212 [2024-07-24 18:16:35.052368] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4fe0 is same with the state(5) to be set 00:21:42.212 [2024-07-24 18:16:35.052375] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4fe0 is same with the state(5) to be set 00:21:42.212 [2024-07-24 18:16:35.052380] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4fe0 is same with the state(5) to be set 00:21:42.212 [2024-07-24 18:16:35.052386] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4fe0 is same with the state(5) to be set 00:21:42.212 [2024-07-24 
18:16:35.052391] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4fe0 is same with the state(5) to be set 00:21:42.212 [2024-07-24 18:16:35.052397] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4fe0 is same with the state(5) to be set 00:21:42.212 [2024-07-24 18:16:35.052403] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4fe0 is same with the state(5) to be set 00:21:42.212 [2024-07-24 18:16:35.052409] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4fe0 is same with the state(5) to be set 00:21:42.212 [2024-07-24 18:16:35.052415] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4fe0 is same with the state(5) to be set 00:21:42.212 [2024-07-24 18:16:35.052423] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe4fe0 is same with the state(5) to be set 00:21:42.212 [2024-07-24 18:16:35.054179] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe54a0 is same with the state(5) to be set 00:21:42.212 [2024-07-24 18:16:35.054192] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe54a0 is same with the state(5) to be set 00:21:42.212 [2024-07-24 18:16:35.054198] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe54a0 is same with the state(5) to be set 00:21:42.212 [2024-07-24 18:16:35.054204] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe54a0 is same with the state(5) to be set 00:21:42.212 [2024-07-24 18:16:35.054210] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe54a0 is same with the state(5) to be set 00:21:42.212 [2024-07-24 18:16:35.054215] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe54a0 is same with the state(5) to be set 00:21:42.212 [2024-07-24 18:16:35.054221] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe54a0 is same with the state(5) to be set 00:21:42.212 [2024-07-24 18:16:35.054227] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe54a0 is same with the state(5) to be set 00:21:42.213 [2024-07-24 18:16:35.054232] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe54a0 is same with the state(5) to be set 00:21:42.213 [2024-07-24 18:16:35.054239] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe54a0 is same with the state(5) to be set 00:21:42.213 [2024-07-24 18:16:35.054244] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe54a0 is same with the state(5) to be set 00:21:42.213 [2024-07-24 18:16:35.054250] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe54a0 is same with the state(5) to be set 00:21:42.213 [2024-07-24 18:16:35.054256] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe54a0 is same with the state(5) to be set 00:21:42.213 [2024-07-24 18:16:35.054262] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe54a0 is same with the state(5) to be set 00:21:42.213 [2024-07-24 18:16:35.054267] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe54a0 is same with the state(5) to be set 00:21:42.213 [2024-07-24 18:16:35.054273] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe54a0 is same 
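The tcp.c:1653 line that floods this section is emitted whenever the target's TCP transport is asked to set a qpair's receive state to the value it already holds, which happens repeatedly while the qpairs above are being torn down. A minimal self-contained sketch of that guard, reconstructed from the message text alone (the struct, function and ERRLOG names here are stand-ins, not verbatim SPDK source):

    #include <stdio.h>

    /* Stand-in for SPDK's internal struct spdk_nvmf_tcp_qpair. */
    struct tcp_qpair { int recv_state; };

    /* Stand-in for SPDK_ERRLOG() so the sketch compiles on its own. */
    #define ERRLOG(...) fprintf(stderr, __VA_ARGS__)

    static void
    set_recv_state(struct tcp_qpair *tqpair, int state)
    {
            /* Re-entering the current state is logged and ignored:
             * the state machine is left untouched. */
            if (tqpair->recv_state == state) {
                    ERRLOG("The recv state of tqpair=%p is same with the state(%d) to be set\n",
                           (void *)tqpair, state);
                    return;
            }
            tqpair->recv_state = state;
    }

    int main(void)
    {
            struct tcp_qpair q = { .recv_state = 0 };
            set_recv_state(&q, 5);  /* state changes quietly */
            set_recv_state(&q, 5);  /* same state again: emits the error line above */
            return 0;
    }

Read this way, each repeated line marks one redundant call into the state setter rather than a distinct failure; the teardown exercised by this test apparently keeps re-driving each qpair to state 5 while it drains.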
00:21:42.212 [2024-07-24 18:16:35.054179] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe54a0 is same with the state(5) to be set
[... same message repeated with advancing timestamps through 2024-07-24 18:16:35.054621, all for tqpair=0x1fe54a0; from 18:16:35.054302 onward the host-side messages below were interleaved with it mid-line and are untangled here ...]
00:21:42.213 [2024-07-24 18:16:35.054302] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:42.213 [2024-07-24 18:16:35.054337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same command/completion pair follows for cid:1, cid:2 and cid:3 ...]
00:21:42.213 [2024-07-24 18:16:35.054397] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2379ab0 is same with the state(5) to be set
[... the same sequence of four aborted ASYNC EVENT REQUESTs followed by one recv-state error repeats for tqpair=0x23a3bc0 (18:16:35.054520), 0x2388700 (18:16:35.054619), 0x1d2f340 (18:16:35.054700), 0x220cf30 (18:16:35.054780), 0x2210b90 (18:16:35.054854), 0x23ac910 (18:16:35.054932) and 0x21e0c70 (18:16:35.055010) ...]
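The ABORTED - SQ DELETION (00/08) completions are the expected way in-flight commands finish when their submission queue is deleted: the "(00/08)" pair is status-code type 0x00 (generic) with status code 0x08 (aborted, SQ deletion). A short sketch of how a host-side completion callback could recognize that status, using SPDK's public spdk_nvme_cpl type and status constants (the write_done callback itself is hypothetical, not part of this test):

    #include "spdk/nvme.h"

    /* Hypothetical spdk_nvme_cmd_cb: distinguish teardown aborts from
     * genuine I/O errors when a WRITE completes. */
    static void
    write_done(void *ctx, const struct spdk_nvme_cpl *cpl)
    {
            (void)ctx;
            if (cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
                cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
                    /* The qpair was being destroyed; this is the (00/08)
                     * status printed throughout the log, not a media error. */
                    return;
            }
            if (spdk_nvme_cpl_is_error(cpl)) {
                    /* a real I/O failure would be handled here */
            }
    }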
sqhd:0000 p:0 m:0 dnr:0 00:21:42.214 [2024-07-24 18:16:35.054905] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.214 [2024-07-24 18:16:35.054911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.214 [2024-07-24 18:16:35.054918] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.214 [2024-07-24 18:16:35.054925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.214 [2024-07-24 18:16:35.054932] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ac910 is same with the state(5) to be set 00:21:42.214 [2024-07-24 18:16:35.054953] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.214 [2024-07-24 18:16:35.054962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.214 [2024-07-24 18:16:35.054971] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.214 [2024-07-24 18:16:35.054977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.214 [2024-07-24 18:16:35.054985] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.214 [2024-07-24 18:16:35.054991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.214 [2024-07-24 18:16:35.054998] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.214 [2024-07-24 18:16:35.055004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.214 [2024-07-24 18:16:35.055010] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e0c70 is same with the state(5) to be set 00:21:42.214 [2024-07-24 18:16:35.055427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.214 [2024-07-24 18:16:35.055449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.214 [2024-07-24 18:16:35.055463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.214 [2024-07-24 18:16:35.055471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.214 [2024-07-24 18:16:35.055480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.215 [2024-07-24 18:16:35.055486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.215 [2024-07-24 18:16:35.055501] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.215 [2024-07-24 18:16:35.055508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.215 [2024-07-24 18:16:35.055517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.215 [2024-07-24 18:16:35.055523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.215 [2024-07-24 18:16:35.055532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.215 [2024-07-24 18:16:35.055539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.215 [2024-07-24 18:16:35.055547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.215 [2024-07-24 18:16:35.055555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.215 [2024-07-24 18:16:35.055563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.215 [2024-07-24 18:16:35.055570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.215 [2024-07-24 18:16:35.055570] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe5980 is same with the state(5) to be set 00:21:42.215 [2024-07-24 18:16:35.055578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:1[2024-07-24 18:16:35.055583] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe5980 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.215 the state(5) to be set 00:21:42.215 [2024-07-24 18:16:35.055591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-24 18:16:35.055591] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe5980 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.215 the state(5) to be set 00:21:42.215 [2024-07-24 18:16:35.055600] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe5980 is same with the state(5) to be set 00:21:42.215 [2024-07-24 18:16:35.055602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.215 [2024-07-24 18:16:35.055606] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe5980 is same with the state(5) to be set 00:21:42.215 [2024-07-24 18:16:35.055609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.215 [2024-07-24 18:16:35.055612] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe5980 is same with the state(5) to be set 00:21:42.215 [2024-07-24 18:16:35.055618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 
lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.215 [2024-07-24 18:16:35.055620] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe5980 is same with the state(5) to be set 00:21:42.215 [2024-07-24 18:16:35.055626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.215 [2024-07-24 18:16:35.055627] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe5980 is same with the state(5) to be set 00:21:42.215 [2024-07-24 18:16:35.055635] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe5980 is same with the state(5) to be set 00:21:42.215 [2024-07-24 18:16:35.055636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.215 [2024-07-24 18:16:35.055640] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe5980 is same with the state(5) to be set 00:21:42.215 [2024-07-24 18:16:35.055643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.215 [2024-07-24 18:16:35.055647] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe5980 is same with the state(5) to be set 00:21:42.215 [2024-07-24 18:16:35.055652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.215 [2024-07-24 18:16:35.055653] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe5980 is same with the state(5) to be set 00:21:42.215 [2024-07-24 18:16:35.055660] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe5980 is same with the state(5) to be set 00:21:42.215 [2024-07-24 18:16:35.055661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.215 [2024-07-24 18:16:35.055667] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe5980 is same with the state(5) to be set 00:21:42.215 [2024-07-24 18:16:35.055670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.215 [2024-07-24 18:16:35.055675] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe5980 is same with the state(5) to be set 00:21:42.215 [2024-07-24 18:16:35.055678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.215 [2024-07-24 18:16:35.055682] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe5980 is same with the state(5) to be set 00:21:42.215 [2024-07-24 18:16:35.055687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.215 [2024-07-24 18:16:35.055691] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe5980 is same with the state(5) to be set 00:21:42.215 [2024-07-24 18:16:35.055695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.215 [2024-07-24 18:16:35.055698] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x1fe5980 is same with the state(5) to be set 00:21:42.215 [2024-07-24 18:16:35.055704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.215 [2024-07-24 18:16:35.055705] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe5980 is same with the state(5) to be set 00:21:42.215 [2024-07-24 18:16:35.055713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.215 [2024-07-24 18:16:35.055713] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe5980 is same with the state(5) to be set 00:21:42.215 [2024-07-24 18:16:35.055723] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe5980 is same with the state(5) to be set 00:21:42.215 [2024-07-24 18:16:35.055725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.215 [2024-07-24 18:16:35.055730] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe5980 is same with the state(5) to be set 00:21:42.215 [2024-07-24 18:16:35.055732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.215 [2024-07-24 18:16:35.055737] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe5980 is same with the state(5) to be set 00:21:42.215 [2024-07-24 18:16:35.055742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.215 [2024-07-24 18:16:35.055744] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe5980 is same with the state(5) to be set 00:21:42.215 [2024-07-24 18:16:35.055749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.215 [2024-07-24 18:16:35.055750] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe5980 is same with the state(5) to be set 00:21:42.215 [2024-07-24 18:16:35.055758] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe5980 is same with the state(5) to be set 00:21:42.215 [2024-07-24 18:16:35.055758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.215 [2024-07-24 18:16:35.055766] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe5980 is same with the state(5) to be set 00:21:42.215 [2024-07-24 18:16:35.055767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.215 [2024-07-24 18:16:35.055775] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe5980 is same with the state(5) to be set 00:21:42.215 [2024-07-24 18:16:35.055777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.215 [2024-07-24 18:16:35.055783] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe5980 is same with the state(5) to be set 00:21:42.215 [2024-07-24 18:16:35.055784] nvme_qpair.c: 474:spdk_nvme_print_completion:
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.215 [2024-07-24 18:16:35.055790] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe5980 is same with the state(5) to be set 00:21:42.215 [2024-07-24 18:16:35.055794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.215 [2024-07-24 18:16:35.055798] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe5980 is same with the state(5) to be set 00:21:42.215 [2024-07-24 18:16:35.055802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.215 [2024-07-24 18:16:35.055805] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe5980 is same with the state(5) to be set 00:21:42.215 [2024-07-24 18:16:35.055811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.215 [2024-07-24 18:16:35.055814] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe5980 is same with the state(5) to be set 00:21:42.216 [2024-07-24 18:16:35.055819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.216 [2024-07-24 18:16:35.055821] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe5980 is same with the state(5) to be set 00:21:42.216 [2024-07-24 18:16:35.055827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.216 [2024-07-24 18:16:35.055828] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe5980 is same with the state(5) to be set 00:21:42.216 [2024-07-24 18:16:35.055836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.216 [2024-07-24 18:16:35.055837] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe5980 is same with the state(5) to be set 00:21:42.216 [2024-07-24 18:16:35.055847] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe5980 is same with the state(5) to be set 00:21:42.216 [2024-07-24 18:16:35.055848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.216 [2024-07-24 18:16:35.055853] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe5980 is same with the state(5) to be set 00:21:42.216 [2024-07-24 18:16:35.055856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.216 [2024-07-24 18:16:35.055860] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe5980 is same with the state(5) to be set 00:21:42.216 [2024-07-24 18:16:35.055864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.216 [2024-07-24 18:16:35.055867] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe5980 is same with the state(5) to be set 00:21:42.216 [2024-07-24 18:16:35.055872] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.216 [2024-07-24 18:16:35.055873] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe5980 is same with the state(5) to be set 00:21:42.216 [2024-07-24 18:16:35.055880] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe5980 is same with the state(5) to be set 00:21:42.216 [2024-07-24 18:16:35.055881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.216 [2024-07-24 18:16:35.055889] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe5980 is same with the state(5) to be set 00:21:42.216 [2024-07-24 18:16:35.055890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.216 [2024-07-24 18:16:35.055896] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe5980 is same with the state(5) to be set 00:21:42.216 [2024-07-24 18:16:35.055900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.216 [2024-07-24 18:16:35.055905] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe5980 is same with the state(5) to be set 00:21:42.216 [2024-07-24 18:16:35.055908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.216 [2024-07-24 18:16:35.055911] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe5980 is same with the state(5) to be set 00:21:42.216 [2024-07-24 18:16:35.055917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.216 [2024-07-24 18:16:35.055918] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe5980 is same with the state(5) to be set 00:21:42.216 [2024-07-24 18:16:35.055924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.216 [2024-07-24 18:16:35.055925] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe5980 is same with the state(5) to be set 00:21:42.216 [2024-07-24 18:16:35.055933] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe5980 is same with the state(5) to be set 00:21:42.216 [2024-07-24 18:16:35.055934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.216 [2024-07-24 18:16:35.055939] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe5980 is same with the state(5) to be set 00:21:42.216 [2024-07-24 18:16:35.055941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.216 [2024-07-24 18:16:35.055946] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe5980 is same with the state(5) to be set 00:21:42.216 [2024-07-24 18:16:35.055950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.216 [2024-07-24
18:16:35.055953] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe5980 is same with the state(5) to be set 00:21:42.216 [2024-07-24 18:16:35.055958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.216 [2024-07-24 18:16:35.055959] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe5980 is same with the state(5) to be set 00:21:42.216 [2024-07-24 18:16:35.055966] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe5980 is same with the state(5) to be set 00:21:42.216 [2024-07-24 18:16:35.055967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.216 [2024-07-24 18:16:35.055972] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe5980 is same with the state(5) to be set 00:21:42.216 [2024-07-24 18:16:35.055976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.216 [2024-07-24 18:16:35.055979] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe5980 is same with the state(5) to be set 00:21:42.216 [2024-07-24 18:16:35.055986] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe5980 is same with the state(5) to be set 00:21:42.216 [2024-07-24 18:16:35.055986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.216 [2024-07-24 18:16:35.055994] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe5980 is same with the state(5) to be set 00:21:42.216 [2024-07-24 18:16:35.055995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.216 [2024-07-24 18:16:35.056001] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe5980 is same with the state(5) to be set 00:21:42.216 [2024-07-24 18:16:35.056007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.216 [2024-07-24 18:16:35.056008] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe5980 is same with the state(5) to be set 00:21:42.216 [2024-07-24 18:16:35.056016] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe5980 is same with the state(5) to be set 00:21:42.216 [2024-07-24 18:16:35.056016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.216 [2024-07-24 18:16:35.056024] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe5980 is same with the state(5) to be set 00:21:42.216 [2024-07-24 18:16:35.056028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.216 [2024-07-24 18:16:35.056030] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fe5980 is same with the state(5) to be set 00:21:42.216 [2024-07-24 18:16:35.056035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.216 [2024-07-24
18:16:35.056045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.216 [2024-07-24 18:16:35.056051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.216 [2024-07-24 18:16:35.056059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.216 [2024-07-24 18:16:35.056066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.216 [2024-07-24 18:16:35.056073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.216 [2024-07-24 18:16:35.056080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.216 [2024-07-24 18:16:35.056088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.216 [2024-07-24 18:16:35.056094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.216 [2024-07-24 18:16:35.056103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.216 [2024-07-24 18:16:35.056109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.216 [2024-07-24 18:16:35.056117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.216 [2024-07-24 18:16:35.056124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.216 [2024-07-24 18:16:35.056132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.216 [2024-07-24 18:16:35.056138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.216 [2024-07-24 18:16:35.056147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.216 [2024-07-24 18:16:35.056153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.216 [2024-07-24 18:16:35.056161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.216 [2024-07-24 18:16:35.056170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.216 [2024-07-24 18:16:35.056178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.216 [2024-07-24 18:16:35.056184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.216 [2024-07-24 18:16:35.056192] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.216 [2024-07-24 18:16:35.056199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.216 [2024-07-24 18:16:35.056206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.216 [2024-07-24 18:16:35.056213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.216 [2024-07-24 18:16:35.056221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.216 [2024-07-24 18:16:35.056228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.217 [2024-07-24 18:16:35.056236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.217 [2024-07-24 18:16:35.056243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.217 [2024-07-24 18:16:35.056251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.217 [2024-07-24 18:16:35.056258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.217 [2024-07-24 18:16:35.056266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.217 [2024-07-24 18:16:35.056272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.217 [2024-07-24 18:16:35.056281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.217 [2024-07-24 18:16:35.056287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.217 [2024-07-24 18:16:35.056295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.217 [2024-07-24 18:16:35.056301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.217 [2024-07-24 18:16:35.056309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.217 [2024-07-24 18:16:35.056316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.217 [2024-07-24 18:16:35.056324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.217 [2024-07-24 18:16:35.056335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.217 [2024-07-24 18:16:35.056343] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.217 [2024-07-24 18:16:35.056350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.217 [2024-07-24 18:16:35.056359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.217 [2024-07-24 18:16:35.056366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.217 [2024-07-24 18:16:35.056374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.217 [2024-07-24 18:16:35.056380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.217 [2024-07-24 18:16:35.056389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.217 [2024-07-24 18:16:35.056395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.217 [2024-07-24 18:16:35.056403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.217 [2024-07-24 18:16:35.056410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.217 [2024-07-24 18:16:35.056418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.217 [2024-07-24 18:16:35.056425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.217 [2024-07-24 18:16:35.056432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.217 [2024-07-24 18:16:35.056439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.217 [2024-07-24 18:16:35.056448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.217 [2024-07-24 18:16:35.056455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.217 [2024-07-24 18:16:35.056463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.217 [2024-07-24 18:16:35.056470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.217 [2024-07-24 18:16:35.056477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.217 [2024-07-24 18:16:35.056484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.217 [2024-07-24 18:16:35.056511] nvme_qpair.c: 
804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:21:42.217 [2024-07-24 18:16:35.056563] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x21db200 was disconnected and freed. reset controller. 00:21:42.217 [2024-07-24 18:16:35.057370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.217 [2024-07-24 18:16:35.057389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.217 [2024-07-24 18:16:35.057401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.217 [2024-07-24 18:16:35.057408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.217 [2024-07-24 18:16:35.057417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.217 [2024-07-24 18:16:35.057427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.217 [2024-07-24 18:16:35.057438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.217 [2024-07-24 18:16:35.057444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.217 [2024-07-24 18:16:35.057453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.217 [2024-07-24 18:16:35.057459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.217 [2024-07-24 18:16:35.057467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.217 [2024-07-24 18:16:35.057474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.217 [2024-07-24 18:16:35.057482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.217 [2024-07-24 18:16:35.057489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.217 [2024-07-24 18:16:35.057503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.217 [2024-07-24 18:16:35.057510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.217 [2024-07-24 18:16:35.057518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.217 [2024-07-24 18:16:35.057525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.217 [2024-07-24 18:16:35.057533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.217 [2024-07-24 18:16:35.057539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.217 [2024-07-24 18:16:35.057547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.217 [2024-07-24 18:16:35.057554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.217 [2024-07-24 18:16:35.057562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.217 [2024-07-24 18:16:35.057570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.217 [2024-07-24 18:16:35.057577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.217 [2024-07-24 18:16:35.057583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.217 [2024-07-24 18:16:35.057592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.217 [2024-07-24 18:16:35.057598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.217 [2024-07-24 18:16:35.057606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.217 [2024-07-24 18:16:35.057612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.217 [2024-07-24 18:16:35.057621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.217 [2024-07-24 18:16:35.057627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.217 [2024-07-24 18:16:35.057635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.217 [2024-07-24 18:16:35.057641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.217 [2024-07-24 18:16:35.057649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.217 [2024-07-24 18:16:35.057656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.217 [2024-07-24 18:16:35.057663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.217 [2024-07-24 18:16:35.057669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.217 [2024-07-24 18:16:35.057678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.217 [2024-07-24 18:16:35.057685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.217 [2024-07-24 18:16:35.057693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.217 [2024-07-24 18:16:35.057699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.217 [2024-07-24 18:16:35.057707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.218 [2024-07-24 18:16:35.057713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.218 [2024-07-24 18:16:35.057721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.218 [2024-07-24 18:16:35.057727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.218 [2024-07-24 18:16:35.057735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.218 [2024-07-24 18:16:35.057741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.218 [2024-07-24 18:16:35.057749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.218 [2024-07-24 18:16:35.057756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.218 [2024-07-24 18:16:35.057764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.218 [2024-07-24 18:16:35.057770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.218 [2024-07-24 18:16:35.057777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.218 [2024-07-24 18:16:35.057784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.218 [2024-07-24 18:16:35.057791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.218 [2024-07-24 18:16:35.057801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.218 [2024-07-24 18:16:35.057809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.218 [2024-07-24 18:16:35.057816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.218 [2024-07-24 18:16:35.057824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 
lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.218 [2024-07-24 18:16:35.057830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.218 [2024-07-24 18:16:35.057838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.218 [2024-07-24 18:16:35.057844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.218 [2024-07-24 18:16:35.057852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.218 [2024-07-24 18:16:35.057858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.218 [2024-07-24 18:16:35.057866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.218 [2024-07-24 18:16:35.057873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.218 [2024-07-24 18:16:35.057880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.218 [2024-07-24 18:16:35.057886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.218 [2024-07-24 18:16:35.057894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.218 [2024-07-24 18:16:35.057901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.218 [2024-07-24 18:16:35.057908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.218 [2024-07-24 18:16:35.057914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.218 [2024-07-24 18:16:35.057922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.218 [2024-07-24 18:16:35.057928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.218 [2024-07-24 18:16:35.057936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.218 [2024-07-24 18:16:35.057942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.218 [2024-07-24 18:16:35.057951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.218 [2024-07-24 18:16:35.071119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.218 [2024-07-24 18:16:35.071146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.218 [2024-07-24 18:16:35.071155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.218 [2024-07-24 18:16:35.071170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.218 [2024-07-24 18:16:35.071179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.218 [2024-07-24 18:16:35.071190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.218 [2024-07-24 18:16:35.071199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.218 [2024-07-24 18:16:35.071210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.218 [2024-07-24 18:16:35.071219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.218 [2024-07-24 18:16:35.071230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.218 [2024-07-24 18:16:35.071241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.218 [2024-07-24 18:16:35.071251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.218 [2024-07-24 18:16:35.071260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.218 [2024-07-24 18:16:35.071271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.218 [2024-07-24 18:16:35.071280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.218 [2024-07-24 18:16:35.071291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.218 [2024-07-24 18:16:35.071299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.218 [2024-07-24 18:16:35.071310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.218 [2024-07-24 18:16:35.071319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.218 [2024-07-24 18:16:35.071329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.218 [2024-07-24 18:16:35.071339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.218 [2024-07-24 18:16:35.071349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.218 [2024-07-24 18:16:35.071358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.218 [2024-07-24 18:16:35.071369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.218 [2024-07-24 18:16:35.071378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.218 [2024-07-24 18:16:35.071388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.218 [2024-07-24 18:16:35.071397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.218 [2024-07-24 18:16:35.071408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.218 [2024-07-24 18:16:35.071419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.218 [2024-07-24 18:16:35.071430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.218 [2024-07-24 18:16:35.071439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.218 [2024-07-24 18:16:35.071452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.218 [2024-07-24 18:16:35.071461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.218 [2024-07-24 18:16:35.071472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.218 [2024-07-24 18:16:35.071481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.218 [2024-07-24 18:16:35.071497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.218 [2024-07-24 18:16:35.071507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.218 [2024-07-24 18:16:35.071518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.218 [2024-07-24 18:16:35.071527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.218 [2024-07-24 18:16:35.071538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.218 [2024-07-24 18:16:35.071548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.218 [2024-07-24 18:16:35.071559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:21:42.218 [2024-07-24 18:16:35.071569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.218 [2024-07-24 18:16:35.071580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.218 [2024-07-24 18:16:35.071589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.219 [2024-07-24 18:16:35.071600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.219 [2024-07-24 18:16:35.071609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.219 [2024-07-24 18:16:35.071620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.219 [2024-07-24 18:16:35.071628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.219 [2024-07-24 18:16:35.071639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.219 [2024-07-24 18:16:35.071648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.219 [2024-07-24 18:16:35.071682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:21:42.219 [2024-07-24 18:16:35.071744] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x22c2910 was disconnected and freed. reset controller. 
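Both abort dumps above end the same way: spdk_nvme_qpair_process_completions() reports CQ transport error -6 (ENXIO, "No such device or address") once the TCP socket under the qpair is gone, every outstanding WRITE/READ is completed back as ABORTED - SQ DELETION (00/08), and bdev_nvme frees the disconnected qpair and resets the controller. Below is a minimal poll-and-recover sketch of that sequence; it assumes an already connected controller and I/O qpair, the helper name poll_and_recover is hypothetical, and the immediate reset-on-error policy is illustrative rather than bdev_nvme's exact retry logic.

    /* Minimal sketch, assuming SPDK headers and a connected ctrlr/qpair. */
    #include <stdio.h>
    #include "spdk/nvme.h"

    static void
    poll_and_recover(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_qpair *qpair)
    {
            /* max_completions == 0 means "drain everything available". On a dead
             * TCP connection this returns a negative errno; the log shows -6
             * (ENXIO), printed as "CQ transport error -6 ... on qpair id 1". */
            int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0);

            if (rc < 0) {
                    /* By this point the outstanding commands were already failed back
                     * to their callbacks as ABORTED - SQ DELETION, as in the NOTICE
                     * lines above. */
                    fprintf(stderr, "CQ transport error %d, resetting controller\n", rc);
                    if (spdk_nvme_ctrlr_reset(ctrlr) != 0) {
                            fprintf(stderr, "controller reset failed\n");
                    }
            }
    }

After a successful reset the test re-creates its qpairs and resubmits I/O, which is consistent with the same cid/lba ranges reappearing in the dumps that follow.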
00:21:42.219 [2024-07-24 18:16:35.071886] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2379ab0 (9): Bad file descriptor 00:21:42.219 [2024-07-24 18:16:35.071933] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.219 [2024-07-24 18:16:35.071945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.219 [2024-07-24 18:16:35.071956] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.219 [2024-07-24 18:16:35.071966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.219 [2024-07-24 18:16:35.071976] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.219 [2024-07-24 18:16:35.071985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.219 [2024-07-24 18:16:35.071995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.219 [2024-07-24 18:16:35.072004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.219 [2024-07-24 18:16:35.072013] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x237a840 is same with the state(5) to be set 00:21:42.219 [2024-07-24 18:16:35.072027] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23a3bc0 (9): Bad file descriptor 00:21:42.219 [2024-07-24 18:16:35.072060] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.219 [2024-07-24 18:16:35.072072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.219 [2024-07-24 18:16:35.072081] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.219 [2024-07-24 18:16:35.072090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.219 [2024-07-24 18:16:35.072100] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.219 [2024-07-24 18:16:35.072109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.219 [2024-07-24 18:16:35.072118] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:42.219 [2024-07-24 18:16:35.072127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.219 [2024-07-24 18:16:35.072136] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x239cf60 is same with the state(5) to be set 00:21:42.219 [2024-07-24 18:16:35.072154] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x2388700 (9): Bad file descriptor
00:21:42.219 [2024-07-24 18:16:35.072174] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2f340 (9): Bad file descriptor
00:21:42.219 [2024-07-24 18:16:35.072189] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220cf30 (9): Bad file descriptor
00:21:42.219 [2024-07-24 18:16:35.072207] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2210b90 (9): Bad file descriptor
00:21:42.219 [2024-07-24 18:16:35.072225] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23ac910 (9): Bad file descriptor
00:21:42.219 [2024-07-24 18:16:35.072244] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21e0c70 (9): Bad file descriptor
00:21:42.219 [2024-07-24 18:16:35.073724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:42.219 [2024-07-24 18:16:35.073752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 63 further nvme_io_qpair_print_command/spdk_nvme_print_completion NOTICE pairs elided (18:16:35.073768-075060): WRITE cid:61-63 (lba 32384-32640) and READ cid:0-59 (lba 24576-32128), every command ABORTED - SQ DELETION (00/08) ...]
00:21:42.221 [2024-07-24 18:16:35.075133] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x21dc6b0 was disconnected and freed. reset controller.
00:21:42.221 [2024-07-24 18:16:35.076737] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:21:42.221 [2024-07-24 18:16:35.080565] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:21:42.221 [2024-07-24 18:16:35.080604] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x237a840 (9): Bad file descriptor
00:21:42.221 [2024-07-24 18:16:35.080930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.221 [2024-07-24 18:16:35.080953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2f340 with addr=10.0.0.2, port=4420
00:21:42.221 [2024-07-24 18:16:35.080965] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2f340 is same with the state(5) to be set
00:21:42.221 [2024-07-24 18:16:35.082489] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x22c3df0 was disconnected and freed. reset controller.
00:21:42.221 [2024-07-24 18:16:35.082532] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:21:42.221 [2024-07-24 18:16:35.082584] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2f340 (9): Bad file descriptor
00:21:42.221 [2024-07-24 18:16:35.082619] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:21:42.221 [2024-07-24 18:16:35.082646] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x239cf60 (9): Bad file descriptor
00:21:42.221 [2024-07-24 18:16:35.082747] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:21:42.221 [2024-07-24 18:16:35.082807] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:21:42.221 [2024-07-24 18:16:35.082860] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:21:42.221 [2024-07-24 18:16:35.082919] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:21:42.221 [2024-07-24 18:16:35.083320] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:21:42.221 [2024-07-24 18:16:35.083382] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:21:42.221 [2024-07-24 18:16:35.083771] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:21:42.221 [2024-07-24 18:16:35.084027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.221 [2024-07-24 18:16:35.084049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x237a840 with addr=10.0.0.2, port=4420
00:21:42.221 [2024-07-24 18:16:35.084061] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x237a840 is same with the state(5) to be set
00:21:42.221 [2024-07-24 18:16:35.084294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.221 [2024-07-24 18:16:35.084310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2388700 with addr=10.0.0.2, port=4420
00:21:42.221 [2024-07-24 18:16:35.084322] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2388700 is same with the state(5) to be set
00:21:42.221 [2024-07-24 18:16:35.084334] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state
00:21:42.221 [2024-07-24 18:16:35.084345] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed
00:21:42.221 [2024-07-24 18:16:35.084357] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
00:21:42.221 [2024-07-24 18:16:35.084433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:42.221 [2024-07-24 18:16:35.084449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 63 further nvme_io_qpair_print_command/spdk_nvme_print_completion NOTICE pairs elided (18:16:35.084467-085974): READ cid:6-61 (lba 25344-32384), WRITE cid:0-4 (lba 32768-33280), READ cid:62-63 (lba 32512-32640), every command ABORTED - SQ DELETION (00/08) ...]
00:21:42.223 [2024-07-24 18:16:35.085982] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a3610 is same with the state(5) to be set
00:21:42.223 [2024-07-24 18:16:35.087136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:42.223 [2024-07-24 18:16:35.087150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 55 further nvme_io_qpair_print_command/spdk_nvme_print_completion NOTICE pairs elided (18:16:35.087161-088133): READ cid:5-21 (lba 25216-27264), WRITE cid:0-3 (lba 32768-33152), READ cid:22-55 (lba 27392-31616), every command ABORTED - SQ DELETION (00/08); the dump continues past the end of this excerpt ...]
00:21:42.224 [2024-07-24
18:16:35.088142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.224 [2024-07-24 18:16:35.088150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.224 [2024-07-24 18:16:35.088159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.224 [2024-07-24 18:16:35.088167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.224 [2024-07-24 18:16:35.088176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.224 [2024-07-24 18:16:35.088183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.224 [2024-07-24 18:16:35.088193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.224 [2024-07-24 18:16:35.088201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.224 [2024-07-24 18:16:35.088210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.224 [2024-07-24 18:16:35.088217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.224 [2024-07-24 18:16:35.088228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.224 [2024-07-24 18:16:35.088236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.224 [2024-07-24 18:16:35.088245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.224 [2024-07-24 18:16:35.088252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.224 [2024-07-24 18:16:35.088262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.224 [2024-07-24 18:16:35.088269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.224 [2024-07-24 18:16:35.088278] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22a49a0 is same with the state(5) to be set 00:21:42.224 [2024-07-24 18:16:35.089405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.224 [2024-07-24 18:16:35.089418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.224 [2024-07-24 18:16:35.089430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.224 [2024-07-24 18:16:35.089438] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.224 [2024-07-24 18:16:35.089447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.224 [2024-07-24 18:16:35.089455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.225 [2024-07-24 18:16:35.089465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.225 [2024-07-24 18:16:35.089472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.225 [2024-07-24 18:16:35.089481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.225 [2024-07-24 18:16:35.089495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.225 [2024-07-24 18:16:35.089506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.225 [2024-07-24 18:16:35.089513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.225 [2024-07-24 18:16:35.089523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.225 [2024-07-24 18:16:35.089531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.225 [2024-07-24 18:16:35.089541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.225 [2024-07-24 18:16:35.089549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.225 [2024-07-24 18:16:35.089559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.225 [2024-07-24 18:16:35.089566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.225 [2024-07-24 18:16:35.089579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.225 [2024-07-24 18:16:35.089587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.225 [2024-07-24 18:16:35.089597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.225 [2024-07-24 18:16:35.089604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.225 [2024-07-24 18:16:35.089614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.225 [2024-07-24 18:16:35.089622] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.225 [2024-07-24 18:16:35.089632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.225 [2024-07-24 18:16:35.089640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.225 [2024-07-24 18:16:35.089649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.225 [2024-07-24 18:16:35.089657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.225 [2024-07-24 18:16:35.089667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.225 [2024-07-24 18:16:35.089675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.225 [2024-07-24 18:16:35.089684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.225 [2024-07-24 18:16:35.089692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.225 [2024-07-24 18:16:35.089702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.225 [2024-07-24 18:16:35.089710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.225 [2024-07-24 18:16:35.089722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.225 [2024-07-24 18:16:35.089731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.225 [2024-07-24 18:16:35.089740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.225 [2024-07-24 18:16:35.089748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.225 [2024-07-24 18:16:35.089759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.225 [2024-07-24 18:16:35.089767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.225 [2024-07-24 18:16:35.089776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.225 [2024-07-24 18:16:35.089785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.225 [2024-07-24 18:16:35.089797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.225 [2024-07-24 18:16:35.089807] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.225 [2024-07-24 18:16:35.089817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.225 [2024-07-24 18:16:35.089824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.225 [2024-07-24 18:16:35.089837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.225 [2024-07-24 18:16:35.089846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.225 [2024-07-24 18:16:35.089855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.225 [2024-07-24 18:16:35.089863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.225 [2024-07-24 18:16:35.089873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.225 [2024-07-24 18:16:35.089881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.225 [2024-07-24 18:16:35.089890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.225 [2024-07-24 18:16:35.089899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.225 [2024-07-24 18:16:35.089908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.225 [2024-07-24 18:16:35.089916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.225 [2024-07-24 18:16:35.089926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.225 [2024-07-24 18:16:35.089933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.225 [2024-07-24 18:16:35.089942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.225 [2024-07-24 18:16:35.089951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.225 [2024-07-24 18:16:35.089961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.225 [2024-07-24 18:16:35.089970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.225 [2024-07-24 18:16:35.089979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.225 [2024-07-24 18:16:35.089987] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.225 [2024-07-24 18:16:35.089998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.225 [2024-07-24 18:16:35.090005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.225 [2024-07-24 18:16:35.090015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.225 [2024-07-24 18:16:35.090022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.225 [2024-07-24 18:16:35.090036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.225 [2024-07-24 18:16:35.090043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.225 [2024-07-24 18:16:35.090052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.225 [2024-07-24 18:16:35.090059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.225 [2024-07-24 18:16:35.090069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.225 [2024-07-24 18:16:35.090078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.225 [2024-07-24 18:16:35.090088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.225 [2024-07-24 18:16:35.090096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.225 [2024-07-24 18:16:35.090106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.225 [2024-07-24 18:16:35.090115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.225 [2024-07-24 18:16:35.090124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.225 [2024-07-24 18:16:35.090132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.225 [2024-07-24 18:16:35.090142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.225 [2024-07-24 18:16:35.090151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.225 [2024-07-24 18:16:35.090160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.225 [2024-07-24 18:16:35.090168] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.225 [2024-07-24 18:16:35.090177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.226 [2024-07-24 18:16:35.090185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.226 [2024-07-24 18:16:35.090195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.226 [2024-07-24 18:16:35.090202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.226 [2024-07-24 18:16:35.090210] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2348580 is same with the state(5) to be set 00:21:42.226 [2024-07-24 18:16:35.091268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.226 [2024-07-24 18:16:35.091283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.226 [2024-07-24 18:16:35.091294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.226 [2024-07-24 18:16:35.091303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.226 [2024-07-24 18:16:35.091320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.226 [2024-07-24 18:16:35.091328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.226 [2024-07-24 18:16:35.091337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.226 [2024-07-24 18:16:35.091346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.226 [2024-07-24 18:16:35.091356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.226 [2024-07-24 18:16:35.091363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.226 [2024-07-24 18:16:35.091373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.226 [2024-07-24 18:16:35.091384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.226 [2024-07-24 18:16:35.091394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.226 [2024-07-24 18:16:35.091402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.226 [2024-07-24 18:16:35.091411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.226 [2024-07-24 18:16:35.091419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.226 [2024-07-24 18:16:35.091429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.226 [2024-07-24 18:16:35.091438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.226 [2024-07-24 18:16:35.091448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.226 [2024-07-24 18:16:35.091456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.226 [2024-07-24 18:16:35.091467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.226 [2024-07-24 18:16:35.091475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.226 [2024-07-24 18:16:35.091484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.226 [2024-07-24 18:16:35.091497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.226 [2024-07-24 18:16:35.091507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.226 [2024-07-24 18:16:35.091515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.226 [2024-07-24 18:16:35.091525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.226 [2024-07-24 18:16:35.091532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.226 [2024-07-24 18:16:35.091542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.226 [2024-07-24 18:16:35.091552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.226 [2024-07-24 18:16:35.091562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.226 [2024-07-24 18:16:35.091570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.226 [2024-07-24 18:16:35.091580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.226 [2024-07-24 18:16:35.091587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.226 [2024-07-24 18:16:35.091597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 
lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.226 [2024-07-24 18:16:35.091605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.226 [2024-07-24 18:16:35.091616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.226 [2024-07-24 18:16:35.091625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.226 [2024-07-24 18:16:35.091634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.226 [2024-07-24 18:16:35.091642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.226 [2024-07-24 18:16:35.091652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.226 [2024-07-24 18:16:35.091660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.226 [2024-07-24 18:16:35.091669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.226 [2024-07-24 18:16:35.091676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.226 [2024-07-24 18:16:35.091686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.226 [2024-07-24 18:16:35.091695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.226 [2024-07-24 18:16:35.091705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.226 [2024-07-24 18:16:35.091711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.226 [2024-07-24 18:16:35.091728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.226 [2024-07-24 18:16:35.091736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.226 [2024-07-24 18:16:35.091746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.226 [2024-07-24 18:16:35.091754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.226 [2024-07-24 18:16:35.091764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.226 [2024-07-24 18:16:35.091772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.226 [2024-07-24 18:16:35.091782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.226 [2024-07-24 18:16:35.091792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.226 [2024-07-24 18:16:35.091802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.226 [2024-07-24 18:16:35.091810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.226 [2024-07-24 18:16:35.091819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.226 [2024-07-24 18:16:35.091827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.226 [2024-07-24 18:16:35.091837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.226 [2024-07-24 18:16:35.091845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.226 [2024-07-24 18:16:35.091855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.226 [2024-07-24 18:16:35.091862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.226 [2024-07-24 18:16:35.091871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.226 [2024-07-24 18:16:35.091879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.226 [2024-07-24 18:16:35.091889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.226 [2024-07-24 18:16:35.091896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.226 [2024-07-24 18:16:35.091905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.226 [2024-07-24 18:16:35.091913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.226 [2024-07-24 18:16:35.091922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.226 [2024-07-24 18:16:35.091929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.227 [2024-07-24 18:16:35.091939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.227 [2024-07-24 18:16:35.091947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.227 [2024-07-24 18:16:35.091956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:42.227 [2024-07-24 18:16:35.091964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.227 [2024-07-24 18:16:35.091973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.227 [2024-07-24 18:16:35.091981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.227 [2024-07-24 18:16:35.091991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.227 [2024-07-24 18:16:35.092000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.227 [2024-07-24 18:16:35.092010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.227 [2024-07-24 18:16:35.092018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.227 [2024-07-24 18:16:35.092028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.227 [2024-07-24 18:16:35.092036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.227 [2024-07-24 18:16:35.092046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.227 [2024-07-24 18:16:35.092054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.227 [2024-07-24 18:16:35.092063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.227 [2024-07-24 18:16:35.092071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.227 [2024-07-24 18:16:35.092080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.227 [2024-07-24 18:16:35.092087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.227 [2024-07-24 18:16:35.092097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.227 [2024-07-24 18:16:35.092104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.227 [2024-07-24 18:16:35.092114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.227 [2024-07-24 18:16:35.092121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.227 [2024-07-24 18:16:35.092131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:42.227 [2024-07-24 18:16:35.092139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.227 [2024-07-24 18:16:35.092148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.227 [2024-07-24 18:16:35.092156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.227 [2024-07-24 18:16:35.092165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.227 [2024-07-24 18:16:35.092173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.227 [2024-07-24 18:16:35.092182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.227 [2024-07-24 18:16:35.092190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.227 [2024-07-24 18:16:35.092200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.227 [2024-07-24 18:16:35.092208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.227 [2024-07-24 18:16:35.092219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.227 [2024-07-24 18:16:35.092227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.227 [2024-07-24 18:16:35.092236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.227 [2024-07-24 18:16:35.092244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.227 [2024-07-24 18:16:35.092253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.227 [2024-07-24 18:16:35.092261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.227 [2024-07-24 18:16:35.092270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.227 [2024-07-24 18:16:35.092278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.227 [2024-07-24 18:16:35.092289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.227 [2024-07-24 18:16:35.092297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.227 [2024-07-24 18:16:35.092307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.227 [2024-07-24 
18:16:35.092314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.227 [2024-07-24 18:16:35.092324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.227 [2024-07-24 18:16:35.092332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.227 [2024-07-24 18:16:35.092341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.227 [2024-07-24 18:16:35.092349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.227 [2024-07-24 18:16:35.092359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.227 [2024-07-24 18:16:35.092367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.227 [2024-07-24 18:16:35.092376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.227 [2024-07-24 18:16:35.092384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.227 [2024-07-24 18:16:35.092394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.227 [2024-07-24 18:16:35.092402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.227 [2024-07-24 18:16:35.092412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.227 [2024-07-24 18:16:35.092419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.227 [2024-07-24 18:16:35.092427] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2349ac0 is same with the state(5) to be set 00:21:42.227 [2024-07-24 18:16:35.093615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.227 [2024-07-24 18:16:35.093632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.227 [2024-07-24 18:16:35.093646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.227 [2024-07-24 18:16:35.093654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.227 [2024-07-24 18:16:35.093664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.227 [2024-07-24 18:16:35.093672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.227 [2024-07-24 18:16:35.093683] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.227 [2024-07-24 18:16:35.093691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.227 [2024-07-24 18:16:35.093700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.227 [2024-07-24 18:16:35.093707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.227 [2024-07-24 18:16:35.093718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.227 [2024-07-24 18:16:35.093725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.227 [2024-07-24 18:16:35.093735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.227 [2024-07-24 18:16:35.093742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.227 [2024-07-24 18:16:35.093753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.227 [2024-07-24 18:16:35.093762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.228 [2024-07-24 18:16:35.093772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.228 [2024-07-24 18:16:35.093779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.228 [2024-07-24 18:16:35.093789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.228 [2024-07-24 18:16:35.093797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.228 [2024-07-24 18:16:35.093807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.228 [2024-07-24 18:16:35.093815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.228 [2024-07-24 18:16:35.093825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.228 [2024-07-24 18:16:35.093834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.228 [2024-07-24 18:16:35.093843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.228 [2024-07-24 18:16:35.093854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.228 [2024-07-24 18:16:35.093864] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.228 [2024-07-24 18:16:35.093872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.228 [2024-07-24 18:16:35.093882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.228 [2024-07-24 18:16:35.093890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.228 [2024-07-24 18:16:35.093900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.228 [2024-07-24 18:16:35.093907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.228 [2024-07-24 18:16:35.093918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.228 [2024-07-24 18:16:35.093925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.228 [2024-07-24 18:16:35.093934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.228 [2024-07-24 18:16:35.093942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.228 [2024-07-24 18:16:35.093952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.228 [2024-07-24 18:16:35.093960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.228 [2024-07-24 18:16:35.093969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.228 [2024-07-24 18:16:35.093976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.228 [2024-07-24 18:16:35.093986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.228 [2024-07-24 18:16:35.093993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.228 [2024-07-24 18:16:35.094003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.228 [2024-07-24 18:16:35.094011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.228 [2024-07-24 18:16:35.094021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.228 [2024-07-24 18:16:35.094029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.228 [2024-07-24 18:16:35.094038] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.228 [2024-07-24 18:16:35.094046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.228 [2024-07-24 18:16:35.094056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.228 [2024-07-24 18:16:35.094064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.228 [2024-07-24 18:16:35.094074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.228 [2024-07-24 18:16:35.094082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.228 [2024-07-24 18:16:35.094092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.228 [2024-07-24 18:16:35.094101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.228 [2024-07-24 18:16:35.094110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.228 [2024-07-24 18:16:35.094118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.228 [2024-07-24 18:16:35.094127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.228 [2024-07-24 18:16:35.094135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.228 [2024-07-24 18:16:35.094145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.228 [2024-07-24 18:16:35.094152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.228 [2024-07-24 18:16:35.101059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.228 [2024-07-24 18:16:35.101078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.228 [2024-07-24 18:16:35.101086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.228 [2024-07-24 18:16:35.101093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.228 [2024-07-24 18:16:35.101102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.228 [2024-07-24 18:16:35.101109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.228 [2024-07-24 18:16:35.101118] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.228 [2024-07-24 18:16:35.101125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.228 [2024-07-24 18:16:35.101133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.228 [2024-07-24 18:16:35.101140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.228 [2024-07-24 18:16:35.101148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.228 [2024-07-24 18:16:35.101155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.228 [2024-07-24 18:16:35.101163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.228 [2024-07-24 18:16:35.101170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.228 [2024-07-24 18:16:35.101178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.228 [2024-07-24 18:16:35.101188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.228 [2024-07-24 18:16:35.101199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.228 [2024-07-24 18:16:35.101206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.229 [2024-07-24 18:16:35.101214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.229 [2024-07-24 18:16:35.101221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.229 [2024-07-24 18:16:35.101230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.229 [2024-07-24 18:16:35.101237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.229 [2024-07-24 18:16:35.101245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.229 [2024-07-24 18:16:35.101252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.229 [2024-07-24 18:16:35.101261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.229 [2024-07-24 18:16:35.101267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.229 [2024-07-24 18:16:35.101276] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.229 [2024-07-24 18:16:35.101282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.229 [2024-07-24 18:16:35.101291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.229 [2024-07-24 18:16:35.101297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.229 [2024-07-24 18:16:35.101306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.229 [2024-07-24 18:16:35.101312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.229 [2024-07-24 18:16:35.101320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.229 [2024-07-24 18:16:35.101327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.229 [2024-07-24 18:16:35.101335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.229 [2024-07-24 18:16:35.101342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.229 [2024-07-24 18:16:35.101350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.229 [2024-07-24 18:16:35.101357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.229 [2024-07-24 18:16:35.101365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.229 [2024-07-24 18:16:35.101372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.229 [2024-07-24 18:16:35.101382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.229 [2024-07-24 18:16:35.101389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.229 [2024-07-24 18:16:35.101397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.229 [2024-07-24 18:16:35.101404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.229 [2024-07-24 18:16:35.101412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.229 [2024-07-24 18:16:35.101418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.229 [2024-07-24 18:16:35.101426] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.229 [2024-07-24 18:16:35.101433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.229 [2024-07-24 18:16:35.101441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.229 [2024-07-24 18:16:35.101449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.229 [2024-07-24 18:16:35.101458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.229 [2024-07-24 18:16:35.101464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.229 [2024-07-24 18:16:35.101472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.229 [2024-07-24 18:16:35.101480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.229 [2024-07-24 18:16:35.101488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.229 [2024-07-24 18:16:35.101512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.229 [2024-07-24 18:16:35.101521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.229 [2024-07-24 18:16:35.101528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.229 [2024-07-24 18:16:35.101536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.229 [2024-07-24 18:16:35.101543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.229 [2024-07-24 18:16:35.101551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.229 [2024-07-24 18:16:35.101558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.229 [2024-07-24 18:16:35.101567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.229 [2024-07-24 18:16:35.101574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.229 [2024-07-24 18:16:35.101583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.229 [2024-07-24 18:16:35.101590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.229 [2024-07-24 18:16:35.101601] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.229 [2024-07-24 18:16:35.101608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.229 [2024-07-24 18:16:35.101615] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2b0e040 is same with the state(5) to be set 00:21:42.229 [2024-07-24 18:16:35.102617] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:42.229 [2024-07-24 18:16:35.102633] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:42.229 [2024-07-24 18:16:35.102644] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:21:42.229 [2024-07-24 18:16:35.102652] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:21:42.229 [2024-07-24 18:16:35.102864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:42.229 [2024-07-24 18:16:35.102878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2379ab0 with addr=10.0.0.2, port=4420 00:21:42.229 [2024-07-24 18:16:35.102886] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2379ab0 is same with the state(5) to be set 00:21:42.229 [2024-07-24 18:16:35.102899] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x237a840 (9): Bad file descriptor 00:21:42.229 [2024-07-24 18:16:35.102909] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2388700 (9): Bad file descriptor 00:21:42.229 [2024-07-24 18:16:35.102941] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:42.229 [2024-07-24 18:16:35.102961] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:42.229 [2024-07-24 18:16:35.102972] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:42.229 [2024-07-24 18:16:35.102985] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:21:42.229 [2024-07-24 18:16:35.102995] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2379ab0 (9): Bad file descriptor
00:21:42.229 [2024-07-24 18:16:35.103071] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:21:42.229 [2024-07-24 18:16:35.103083] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:21:42.229 [2024-07-24 18:16:35.103251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.229 [2024-07-24 18:16:35.103263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21e0c70 with addr=10.0.0.2, port=4420
00:21:42.229 [2024-07-24 18:16:35.103270] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21e0c70 is same with the state(5) to be set
00:21:42.229 [2024-07-24 18:16:35.103445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.229 [2024-07-24 18:16:35.103455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23ac910 with addr=10.0.0.2, port=4420
00:21:42.229 [2024-07-24 18:16:35.103462] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ac910 is same with the state(5) to be set
00:21:42.229 [2024-07-24 18:16:35.103556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.229 [2024-07-24 18:16:35.103566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220cf30 with addr=10.0.0.2, port=4420
00:21:42.229 [2024-07-24 18:16:35.103574] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220cf30 is same with the state(5) to be set
00:21:42.229 [2024-07-24 18:16:35.103583] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state
00:21:42.229 [2024-07-24 18:16:35.103594] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed
00:21:42.229 [2024-07-24 18:16:35.103601] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state.
00:21:42.230 [2024-07-24 18:16:35.103614] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:21:42.230 [2024-07-24 18:16:35.103621] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:21:42.230 [2024-07-24 18:16:35.103627] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:21:42.230 [2024-07-24 18:16:35.104771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.230 [2024-07-24 18:16:35.104785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.230 [2024-07-24 18:16:35.104797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.230 [2024-07-24 18:16:35.104804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.230 [2024-07-24 18:16:35.104813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.230 [2024-07-24 18:16:35.104820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.230 [2024-07-24 18:16:35.104828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.230 [2024-07-24 18:16:35.104836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.230 [2024-07-24 18:16:35.104844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.230 [2024-07-24 18:16:35.104850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.230 [2024-07-24 18:16:35.104859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.230 [2024-07-24 18:16:35.104866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.230 [2024-07-24 18:16:35.104875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.230 [2024-07-24 18:16:35.104882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.230 [2024-07-24 18:16:35.104891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.230 [2024-07-24 18:16:35.104898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.230 [2024-07-24 18:16:35.104907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.230 [2024-07-24 18:16:35.104914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.230 [2024-07-24 18:16:35.104923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.230 [2024-07-24 18:16:35.104929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.230 [2024-07-24 
18:16:35.104938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.230 [2024-07-24 18:16:35.104948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.230 [2024-07-24 18:16:35.104957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.230 [2024-07-24 18:16:35.104965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.230 [2024-07-24 18:16:35.104973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.230 [2024-07-24 18:16:35.104981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.230 [2024-07-24 18:16:35.104990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.230 [2024-07-24 18:16:35.104997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.230 [2024-07-24 18:16:35.105006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.230 [2024-07-24 18:16:35.105013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.230 [2024-07-24 18:16:35.105022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.230 [2024-07-24 18:16:35.105029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.230 [2024-07-24 18:16:35.105037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.230 [2024-07-24 18:16:35.105044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.230 [2024-07-24 18:16:35.105052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.230 [2024-07-24 18:16:35.105060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.230 [2024-07-24 18:16:35.105069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.230 [2024-07-24 18:16:35.105076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.230 [2024-07-24 18:16:35.105085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.230 [2024-07-24 18:16:35.105092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.230 [2024-07-24 18:16:35.105101] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.230 [2024-07-24 18:16:35.105107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.230 [2024-07-24 18:16:35.105115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.230 [2024-07-24 18:16:35.105122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.230 [2024-07-24 18:16:35.105131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.230 [2024-07-24 18:16:35.105138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.230 [2024-07-24 18:16:35.105148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.230 [2024-07-24 18:16:35.105155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.230 [2024-07-24 18:16:35.105163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.230 [2024-07-24 18:16:35.105170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.230 [2024-07-24 18:16:35.105179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.230 [2024-07-24 18:16:35.105185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.230 [2024-07-24 18:16:35.105193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.230 [2024-07-24 18:16:35.105200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.230 [2024-07-24 18:16:35.105209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.230 [2024-07-24 18:16:35.105216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.230 [2024-07-24 18:16:35.105225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.230 [2024-07-24 18:16:35.105233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.230 [2024-07-24 18:16:35.105241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.230 [2024-07-24 18:16:35.105248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.230 [2024-07-24 18:16:35.105257] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.230 [2024-07-24 18:16:35.105262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.230 [2024-07-24 18:16:35.105272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.230 [2024-07-24 18:16:35.105278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.230 [2024-07-24 18:16:35.105286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.230 [2024-07-24 18:16:35.105293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.230 [2024-07-24 18:16:35.105302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.230 [2024-07-24 18:16:35.105309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.230 [2024-07-24 18:16:35.105317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.230 [2024-07-24 18:16:35.105324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.230 [2024-07-24 18:16:35.105332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.230 [2024-07-24 18:16:35.105340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.230 [2024-07-24 18:16:35.105348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.231 [2024-07-24 18:16:35.105355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.231 [2024-07-24 18:16:35.105363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.231 [2024-07-24 18:16:35.105370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.231 [2024-07-24 18:16:35.105379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.231 [2024-07-24 18:16:35.105386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.231 [2024-07-24 18:16:35.105395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.231 [2024-07-24 18:16:35.105402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.231 [2024-07-24 18:16:35.105410] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.231 [2024-07-24 18:16:35.105417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.231 [2024-07-24 18:16:35.105425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.231 [2024-07-24 18:16:35.105432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.231 [2024-07-24 18:16:35.105440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.231 [2024-07-24 18:16:35.105447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.231 [2024-07-24 18:16:35.105456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.231 [2024-07-24 18:16:35.105462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.231 [2024-07-24 18:16:35.105471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.231 [2024-07-24 18:16:35.105479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.231 [2024-07-24 18:16:35.105487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.231 [2024-07-24 18:16:35.105502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.231 [2024-07-24 18:16:35.105510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.231 [2024-07-24 18:16:35.105517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.231 [2024-07-24 18:16:35.105526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.231 [2024-07-24 18:16:35.105533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.231 [2024-07-24 18:16:35.105545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.231 [2024-07-24 18:16:35.105552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.231 [2024-07-24 18:16:35.105561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.231 [2024-07-24 18:16:35.105567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.231 [2024-07-24 18:16:35.105576] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.231 [2024-07-24 18:16:35.105584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.231 [2024-07-24 18:16:35.105591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.231 [2024-07-24 18:16:35.105599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.231 [2024-07-24 18:16:35.105607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.231 [2024-07-24 18:16:35.105614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.231 [2024-07-24 18:16:35.105623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.231 [2024-07-24 18:16:35.105630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.231 [2024-07-24 18:16:35.105638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.231 [2024-07-24 18:16:35.105646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.231 [2024-07-24 18:16:35.105655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.231 [2024-07-24 18:16:35.105662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.231 [2024-07-24 18:16:35.105671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.231 [2024-07-24 18:16:35.105678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.231 [2024-07-24 18:16:35.105687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.231 [2024-07-24 18:16:35.105694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.231 [2024-07-24 18:16:35.105703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.231 [2024-07-24 18:16:35.105709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.231 [2024-07-24 18:16:35.105718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.231 [2024-07-24 18:16:35.105725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.231 [2024-07-24 18:16:35.105733] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.231 [2024-07-24 18:16:35.105744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.231 [2024-07-24 18:16:35.105752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.231 [2024-07-24 18:16:35.105759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.231 [2024-07-24 18:16:35.105768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.231 [2024-07-24 18:16:35.105776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.231 [2024-07-24 18:16:35.105784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.231 [2024-07-24 18:16:35.105792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:42.231 [2024-07-24 18:16:35.105799] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2cb5ab0 is same with the state(5) to be set 00:21:42.231 [2024-07-24 18:16:35.107259] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:21:42.231 [2024-07-24 18:16:35.107284] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:42.231 [2024-07-24 18:16:35.107294] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:42.231 task offset: 27904 on job bdev=Nvme5n1 fails
00:21:42.231
00:21:42.231 Latency(us)
00:21:42.231 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:42.231 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:42.231 Job: Nvme1n1 ended in about 0.89 seconds with error
00:21:42.231 Verification LBA range: start 0x0 length 0x400
00:21:42.231 Nvme1n1 : 0.89 220.39 13.77 71.60 0.00 217049.83 17975.59 210713.84
00:21:42.231 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:42.231 Job: Nvme2n1 ended in about 0.90 seconds with error
00:21:42.231 Verification LBA range: start 0x0 length 0x400
00:21:42.231 Nvme2n1 : 0.90 218.71 13.67 71.42 0.00 214644.56 7177.75 207717.91
00:21:42.231 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:42.231 Job: Nvme3n1 ended in about 0.90 seconds with error
00:21:42.231 Verification LBA range: start 0x0 length 0x400
00:21:42.231 Nvme3n1 : 0.90 236.08 14.76 49.00 0.00 213616.15 17476.27 195734.19
00:21:42.231 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:42.231 Job: Nvme4n1 ended in about 0.90 seconds with error
00:21:42.231 Verification LBA range: start 0x0 length 0x400
00:21:42.231 Nvme4n1 : 0.90 213.26 13.33 71.09 0.00 211369.69 16352.79 210713.84
00:21:42.231 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:42.231 Job: Nvme5n1 ended in about 0.88 seconds with error
00:21:42.231 Verification LBA range: start 0x0 length 0x400
00:21:42.231 Nvme5n1 : 0.88 218.07 13.63 72.69 0.00 202585.97 16727.28 218702.99
00:21:42.231 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:42.231 Job: Nvme6n1 ended in about 0.89 seconds with error
00:21:42.231 Verification LBA range: start 0x0 length 0x400
00:21:42.231 Nvme6n1 : 0.89 216.89 13.56 72.30 0.00 199917.23 17975.59 211712.49
00:21:42.231 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:42.231 Job: Nvme7n1 ended in about 0.91 seconds with error
00:21:42.231 Verification LBA range: start 0x0 length 0x400
00:21:42.231 Nvme7n1 : 0.91 211.13 13.20 70.38 0.00 202089.33 14230.67 229688.08
00:21:42.231 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:42.232 Job: Nvme8n1 ended in about 0.91 seconds with error
00:21:42.232 Verification LBA range: start 0x0 length 0x400
00:21:42.232 Nvme8n1 : 0.91 215.63 13.48 70.05 0.00 195480.34 13419.28 206719.27
00:21:42.232 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:42.232 Job: Nvme9n1 ended in about 0.88 seconds with error
00:21:42.232 Verification LBA range: start 0x0 length 0x400
00:21:42.232 Nvme9n1 : 0.88 217.40 13.59 72.47 0.00 187854.75 18225.25 218702.99
00:21:42.232 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:42.232 Verification LBA range: start 0x0 length 0x400
00:21:42.232 Nvme10n1 : 0.89 216.43 13.53 0.00 0.00 246798.30 18100.42 235679.94
00:21:42.232 ===================================================================================================================
00:21:42.232 Total : 2184.01 136.50 620.99 0.00 208177.73 7177.75 235679.94
00:21:42.232 [2024-07-24 18:16:35.132902] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:21:42.232 [2024-07-24 18:16:35.132937] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:21:42.232 [2024-07-24 18:16:35.133239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.232 [2024-07-24 18:16:35.133257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2210b90 with addr=10.0.0.2, port=4420
00:21:42.232 [2024-07-24 18:16:35.133267] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2210b90 is same with the state(5) to be set
00:21:42.232 [2024-07-24 18:16:35.133474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.232 [2024-07-24 18:16:35.133486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23a3bc0 with addr=10.0.0.2, port=4420
00:21:42.232 [2024-07-24 18:16:35.133500] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a3bc0 is same with the state(5) to be set
00:21:42.232 [2024-07-24 18:16:35.133513] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21e0c70 (9): Bad file descriptor
00:21:42.232 [2024-07-24 18:16:35.133526] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23ac910 (9): Bad file descriptor
00:21:42.232 [2024-07-24 18:16:35.133536] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220cf30 (9): Bad file descriptor
00:21:42.232 [2024-07-24 18:16:35.133544] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:21:42.232 [2024-07-24 18:16:35.133550] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:21:42.232 [2024-07-24 18:16:35.133561] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:21:42.232 [2024-07-24 18:16:35.133907] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:42.232 [2024-07-24 18:16:35.134087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.232 [2024-07-24 18:16:35.134101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d2f340 with addr=10.0.0.2, port=4420
00:21:42.232 [2024-07-24 18:16:35.134110] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d2f340 is same with the state(5) to be set
00:21:42.232 [2024-07-24 18:16:35.134241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.232 [2024-07-24 18:16:35.134252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x239cf60 with addr=10.0.0.2, port=4420
00:21:42.232 [2024-07-24 18:16:35.134259] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x239cf60 is same with the state(5) to be set
00:21:42.232 [2024-07-24 18:16:35.134270] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2210b90 (9): Bad file descriptor
00:21:42.232 [2024-07-24 18:16:35.134280] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23a3bc0 (9): Bad file descriptor
00:21:42.232 [2024-07-24 18:16:35.134293] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:42.232 [2024-07-24 18:16:35.134300] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:42.232 [2024-07-24 18:16:35.134308] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:42.232 [2024-07-24 18:16:35.134318] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:21:42.232 [2024-07-24 18:16:35.134324] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:21:42.232 [2024-07-24 18:16:35.134331] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:21:42.232 [2024-07-24 18:16:35.134340] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:21:42.232 [2024-07-24 18:16:35.134347] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:21:42.232 [2024-07-24 18:16:35.134353] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:21:42.232 [2024-07-24 18:16:35.134397] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:21:42.232 [2024-07-24 18:16:35.134408] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:21:42.232 [2024-07-24 18:16:35.134418] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:21:42.232 [2024-07-24 18:16:35.134428] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:21:42.232 [2024-07-24 18:16:35.134438] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:21:42.232 [2024-07-24 18:16:35.134732] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:42.232 [2024-07-24 18:16:35.134743] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:42.232 [2024-07-24 18:16:35.134749] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:42.232 [2024-07-24 18:16:35.134765] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d2f340 (9): Bad file descriptor
00:21:42.232 [2024-07-24 18:16:35.134775] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x239cf60 (9): Bad file descriptor
00:21:42.232 [2024-07-24 18:16:35.134783] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:21:42.232 [2024-07-24 18:16:35.134789] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:21:42.232 [2024-07-24 18:16:35.134796] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:21:42.232 [2024-07-24 18:16:35.134804] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state
00:21:42.232 [2024-07-24 18:16:35.134810] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed
00:21:42.232 [2024-07-24 18:16:35.134817] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state.
00:21:42.232 [2024-07-24 18:16:35.134857] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:21:42.232 [2024-07-24 18:16:35.134868] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:21:42.232 [2024-07-24 18:16:35.134876] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:21:42.232 [2024-07-24 18:16:35.134884] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:42.232 [2024-07-24 18:16:35.134890] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:42.232 [2024-07-24 18:16:35.134910] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state
00:21:42.232 [2024-07-24 18:16:35.134919] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed
00:21:42.232 [2024-07-24 18:16:35.134926] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
00:21:42.232 [2024-07-24 18:16:35.134935] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state
00:21:42.232 [2024-07-24 18:16:35.134941] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed
00:21:42.232 [2024-07-24 18:16:35.134948] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state.
00:21:42.232 [2024-07-24 18:16:35.134979] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:42.232 [2024-07-24 18:16:35.134987] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:42.232 [2024-07-24 18:16:35.135086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.232 [2024-07-24 18:16:35.135099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2388700 with addr=10.0.0.2, port=4420
00:21:42.232 [2024-07-24 18:16:35.135107] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2388700 is same with the state(5) to be set
00:21:42.232 [2024-07-24 18:16:35.135332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.232 [2024-07-24 18:16:35.135342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x237a840 with addr=10.0.0.2, port=4420
00:21:42.232 [2024-07-24 18:16:35.135348] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x237a840 is same with the state(5) to be set
00:21:42.232 [2024-07-24 18:16:35.135509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:42.232 [2024-07-24 18:16:35.135521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2379ab0 with addr=10.0.0.2, port=4420
00:21:42.232 [2024-07-24 18:16:35.135528] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2379ab0 is same with the state(5) to be set
00:21:42.232 [2024-07-24 18:16:35.135556] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2388700 (9): Bad file descriptor
00:21:42.232 [2024-07-24 18:16:35.135567] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x237a840 (9): Bad file descriptor
00:21:42.232 [2024-07-24 18:16:35.135576] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2379ab0 (9): Bad file descriptor
00:21:42.232 [2024-07-24 18:16:35.135600] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:21:42.232 [2024-07-24 18:16:35.135608] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:21:42.232 [2024-07-24 18:16:35.135615] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:21:42.232 [2024-07-24 18:16:35.135624] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state
00:21:42.232 [2024-07-24 18:16:35.135630] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed
00:21:42.233 [2024-07-24 18:16:35.135637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state.
00:21:42.233 [2024-07-24 18:16:35.135645] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:21:42.233 [2024-07-24 18:16:35.135651] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:21:42.233 [2024-07-24 18:16:35.135657] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:21:42.233 [2024-07-24 18:16:35.135681] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:42.233 [2024-07-24 18:16:35.135688] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:42.233 [2024-07-24 18:16:35.135697] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:42.491 18:16:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:21:42.491 18:16:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:21:43.426 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 3469227 00:21:43.426 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (3469227) - No such process 00:21:43.426 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:21:43.426 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:21:43.426 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:21:43.426 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:43.426 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:43.426 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:21:43.426 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:43.426 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:21:43.426 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:43.426 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:21:43.426 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:43.426 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:43.426 rmmod nvme_tcp 00:21:43.684 rmmod nvme_fabrics 00:21:43.684 rmmod nvme_keyring 00:21:43.684 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:43.684 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:21:43.684 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:21:43.684 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:21:43.684 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:43.684 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:43.684 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:43.684 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:43.684 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:43.684 18:16:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:43.684 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:43.684 18:16:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:45.586 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:45.586 00:21:45.586 real 0m7.479s 00:21:45.586 user 0m17.569s 00:21:45.586 sys 0m1.302s 00:21:45.586 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:45.586 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:45.586 ************************************ 00:21:45.586 END TEST nvmf_shutdown_tc3 00:21:45.586 ************************************ 00:21:45.586 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:21:45.586 00:21:45.586 real 0m30.766s 00:21:45.586 user 1m15.791s 00:21:45.586 sys 0m8.427s 00:21:45.586 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:45.586 18:16:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:45.586 ************************************ 00:21:45.586 END TEST nvmf_shutdown 00:21:45.586 ************************************ 00:21:45.845 18:16:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@66 -- # trap - SIGINT SIGTERM EXIT 00:21:45.845 00:21:45.845 real 10m37.478s 00:21:45.845 user 23m38.475s 00:21:45.845 sys 3m5.166s 00:21:45.845 18:16:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:45.845 18:16:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:45.845 ************************************ 00:21:45.845 END TEST nvmf_target_extra 00:21:45.845 ************************************ 00:21:45.845 18:16:38 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:21:45.845 18:16:38 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:45.845 18:16:38 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:45.845 18:16:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:45.845 ************************************ 00:21:45.845 START TEST nvmf_host 00:21:45.845 ************************************ 00:21:45.845 18:16:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:21:45.845 * Looking for test storage... 
00:21:45.845 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:21:45.845 18:16:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:45.845 18:16:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:21:45.845 18:16:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:45.845 18:16:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:45.845 18:16:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:45.845 18:16:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:45.845 18:16:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:45.845 18:16:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:45.845 18:16:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:45.845 18:16:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:45.845 18:16:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:45.845 18:16:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:45.845 18:16:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:21:45.845 18:16:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:21:45.845 18:16:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:45.845 18:16:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:45.845 18:16:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:45.845 18:16:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:45.845 18:16:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:45.845 18:16:38 nvmf_tcp.nvmf_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:45.845 18:16:38 nvmf_tcp.nvmf_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:45.845 18:16:38 nvmf_tcp.nvmf_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:45.845 18:16:38 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.845 18:16:38 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.845 18:16:38 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.845 18:16:38 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:21:45.845 18:16:38 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.845 18:16:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@47 -- # : 0 00:21:45.845 18:16:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:45.845 18:16:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:45.845 18:16:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:45.845 18:16:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:45.845 18:16:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:45.845 18:16:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:45.845 18:16:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:45.845 18:16:38 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:45.845 18:16:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:21:45.845 18:16:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:21:45.845 18:16:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:21:45.845 18:16:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:45.845 18:16:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:45.845 18:16:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:45.845 18:16:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:45.845 ************************************ 00:21:45.845 START TEST nvmf_multicontroller 00:21:45.845 ************************************ 00:21:45.845 18:16:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:46.103 * Looking for test storage... 
00:21:46.103 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:46.103 18:16:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:46.103 18:16:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:21:46.103 18:16:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:46.103 18:16:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:46.103 18:16:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:46.103 18:16:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:46.103 18:16:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:46.103 18:16:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:46.103 18:16:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:46.103 18:16:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:46.103 18:16:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:46.103 18:16:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:46.103 18:16:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:21:46.103 18:16:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:21:46.103 18:16:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:46.103 18:16:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:46.103 18:16:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:46.103 18:16:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:46.103 18:16:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:46.103 18:16:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:46.103 18:16:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:46.103 18:16:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:46.103 18:16:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.103 18:16:38 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.104 18:16:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.104 18:16:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:21:46.104 18:16:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:46.104 18:16:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:21:46.104 18:16:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:46.104 18:16:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:46.104 18:16:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:46.104 18:16:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:46.104 18:16:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:46.104 18:16:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:46.104 18:16:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:46.104 18:16:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:46.104 18:16:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:46.104 18:16:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:46.104 18:16:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:21:46.104 18:16:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:21:46.104 18:16:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:46.104 18:16:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:21:46.104 18:16:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:21:46.104 18:16:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:46.104 18:16:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:46.104 18:16:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:46.104 18:16:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:46.104 18:16:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:46.104 18:16:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:46.104 18:16:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:46.104 18:16:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:46.104 18:16:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:46.104 18:16:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:46.104 18:16:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:21:46.104 18:16:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:51.364 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:51.364 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:21:51.364 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:51.364 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:51.364 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:51.364 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:51.364 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:51.364 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:21:51.364 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:51.364 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:21:51.364 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:21:51.364 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:21:51.364 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:21:51.364 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:21:51.364 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:21:51.364 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:51.364 18:16:44 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:51.364 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:51.364 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:51.364 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:51.364 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:51.364 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:51.364 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:51.364 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:51.364 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:51.364 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:51.364 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:51.364 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:51.364 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:51.364 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:51.364 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:51.364 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:51.364 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:51.364 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:51.364 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:51.364 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:51.364 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:51.364 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:51.364 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:51.364 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:51.364 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:51.364 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:51.364 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:51.364 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:51.364 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:51.364 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:51.364 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:21:51.364 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:51.364 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:51.364 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:51.364 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:51.364 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:51.364 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:51.364 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:51.364 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:51.364 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:51.364 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:51.364 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:51.364 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:51.364 Found net devices under 0000:86:00.0: cvl_0_0 00:21:51.364 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:51.364 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:51.364 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:51.364 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:51.364 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:51.364 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:51.364 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:51.364 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:51.364 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:51.364 Found net devices under 0000:86:00.1: cvl_0_1 00:21:51.364 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:51.364 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:51.364 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:21:51.364 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:51.364 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:51.364 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:51.364 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:51.364 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:51.364 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:51.364 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:51.364 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:51.364 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:51.364 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:51.364 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:51.364 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:51.364 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:51.364 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:51.364 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:51.365 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:51.365 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:51.365 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:51.365 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:51.365 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:51.365 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:51.365 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:51.365 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:51.365 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:51.365 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:21:51.365 00:21:51.365 --- 10.0.0.2 ping statistics --- 00:21:51.365 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:51.365 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:21:51.365 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:51.365 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:51.365 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:21:51.365 00:21:51.365 --- 10.0.0.1 ping statistics --- 00:21:51.365 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:51.365 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:21:51.365 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:51.365 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:21:51.365 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:51.365 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:51.365 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:51.365 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:51.365 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:51.365 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:51.365 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:51.365 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:21:51.365 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:51.365 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:51.365 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:51.365 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=3473300 00:21:51.365 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 3473300 00:21:51.365 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:51.365 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 3473300 ']' 00:21:51.365 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:51.365 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:51.365 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:51.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:51.365 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:51.365 18:16:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:51.365 [2024-07-24 18:16:44.333707] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 
00:21:51.365 [2024-07-24 18:16:44.333748] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:51.365 EAL: No free 2048 kB hugepages reported on node 1 00:21:51.365 [2024-07-24 18:16:44.391747] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:51.622 [2024-07-24 18:16:44.470645] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:51.622 [2024-07-24 18:16:44.470678] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:51.622 [2024-07-24 18:16:44.470685] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:51.622 [2024-07-24 18:16:44.470691] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:51.622 [2024-07-24 18:16:44.470697] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:51.622 [2024-07-24 18:16:44.470732] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:51.622 [2024-07-24 18:16:44.470819] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:51.622 [2024-07-24 18:16:44.470820] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:52.187 18:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:52.187 18:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:21:52.187 18:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:52.187 18:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:52.187 18:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:52.187 18:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:52.187 18:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:52.187 18:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.187 18:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:52.187 [2024-07-24 18:16:45.197616] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:52.187 18:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.187 18:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:52.187 18:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.187 18:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:52.187 Malloc0 00:21:52.187 18:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.187 18:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:52.187 18:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.187 
18:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:52.187 18:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.187 18:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:52.187 18:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.187 18:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:52.187 18:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.187 18:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:52.187 18:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.187 18:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:52.187 [2024-07-24 18:16:45.248753] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:52.187 18:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.187 18:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:52.187 18:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.187 18:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:52.187 [2024-07-24 18:16:45.256680] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:52.187 18:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.187 18:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:52.187 18:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.187 18:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:52.445 Malloc1 00:21:52.445 18:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.445 18:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:21:52.445 18:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.445 18:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:52.445 18:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.445 18:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:21:52.445 18:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.445 18:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:52.445 18:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.445 18:16:45 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:52.445 18:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.445 18:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:52.445 18:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.445 18:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:21:52.445 18:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.445 18:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:52.445 18:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.445 18:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=3473498 00:21:52.445 18:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:52.445 18:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 3473498 /var/tmp/bdevperf.sock 00:21:52.445 18:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 3473498 ']' 00:21:52.445 18:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:52.445 18:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:52.445 18:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:21:52.445 18:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:52.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
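At this point the trace has finished standing up the target side of the multicontroller test: a TCP transport, two 64 MB malloc bdevs, and two subsystems (nqn.2016-06.io.spdk:cnode1 with Malloc0, nqn.2016-06.io.spdk:cnode2 with Malloc1), each listening on 10.0.0.2 at ports 4420 and 4421, with bdevperf just launched against its own RPC socket (-z -r /var/tmp/bdevperf.sock). The rpc_cmd calls in the trace wrap SPDK's scripts/rpc.py, so the same target can be reproduced by hand. A minimal sketch for cnode1, assuming a running nvmf_tgt and a hypothetical SPDK_DIR variable pointing at the checkout (neither assumption is part of the log itself):

  # Replay of the target setup traced above; flags are copied verbatim from the run.
  RPC="$SPDK_DIR/scripts/rpc.py"
  $RPC nvmf_create_transport -t tcp -o -u 8192        # same flags the harness passed via NVMF_TRANSPORT_OPTS, plus -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0           # 64 MB bdev with 512-byte blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host NQN
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

cnode2 is built the same way from Malloc1; the second listener on each subsystem is what gives the test an alternate path for the multipath and failover checks that follow.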
00:21:52.445 18:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:52.445 18:16:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:53.376 18:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:53.376 18:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:21:53.376 18:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:21:53.376 18:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.376 18:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:53.376 NVMe0n1 00:21:53.376 18:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.377 18:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:53.377 18:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.377 18:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:21:53.377 18:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:53.377 18:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.377 1 00:21:53.377 18:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:21:53.377 18:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:21:53.377 18:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:21:53.377 18:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:53.377 18:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:53.377 18:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:53.377 18:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:53.377 18:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:21:53.377 18:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.377 18:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:53.377 request: 00:21:53.377 { 00:21:53.377 "name": "NVMe0", 00:21:53.377 "trtype": "tcp", 00:21:53.377 "traddr": "10.0.0.2", 00:21:53.377 "adrfam": "ipv4", 00:21:53.377 
"trsvcid": "4420", 00:21:53.377 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:53.377 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:21:53.377 "hostaddr": "10.0.0.2", 00:21:53.377 "hostsvcid": "60000", 00:21:53.377 "prchk_reftag": false, 00:21:53.377 "prchk_guard": false, 00:21:53.377 "hdgst": false, 00:21:53.377 "ddgst": false, 00:21:53.377 "method": "bdev_nvme_attach_controller", 00:21:53.377 "req_id": 1 00:21:53.377 } 00:21:53.377 Got JSON-RPC error response 00:21:53.377 response: 00:21:53.377 { 00:21:53.377 "code": -114, 00:21:53.377 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:21:53.377 } 00:21:53.377 18:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:53.377 18:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:21:53.377 18:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:53.377 18:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:53.377 18:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:53.377 18:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:21:53.377 18:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:21:53.377 18:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:21:53.377 18:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:53.377 18:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:53.377 18:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:53.377 18:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:53.377 18:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:21:53.377 18:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.377 18:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:53.377 request: 00:21:53.377 { 00:21:53.377 "name": "NVMe0", 00:21:53.377 "trtype": "tcp", 00:21:53.377 "traddr": "10.0.0.2", 00:21:53.377 "adrfam": "ipv4", 00:21:53.377 "trsvcid": "4420", 00:21:53.377 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:53.377 "hostaddr": "10.0.0.2", 00:21:53.377 "hostsvcid": "60000", 00:21:53.377 "prchk_reftag": false, 00:21:53.377 "prchk_guard": false, 00:21:53.377 "hdgst": false, 00:21:53.377 "ddgst": false, 00:21:53.377 "method": "bdev_nvme_attach_controller", 00:21:53.377 "req_id": 1 00:21:53.377 } 00:21:53.377 Got JSON-RPC error response 00:21:53.377 response: 00:21:53.377 { 00:21:53.377 "code": -114, 00:21:53.377 "message": "A controller named NVMe0 already exists with the specified network 
path\n" 00:21:53.377 } 00:21:53.377 18:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:53.377 18:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:21:53.377 18:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:53.377 18:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:53.377 18:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:53.377 18:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:21:53.377 18:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:21:53.634 18:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:21:53.634 18:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:53.634 18:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:53.634 18:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:53.634 18:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:53.634 18:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:21:53.634 18:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.634 18:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:53.634 request: 00:21:53.634 { 00:21:53.634 "name": "NVMe0", 00:21:53.634 "trtype": "tcp", 00:21:53.634 "traddr": "10.0.0.2", 00:21:53.634 "adrfam": "ipv4", 00:21:53.634 "trsvcid": "4420", 00:21:53.634 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:53.634 "hostaddr": "10.0.0.2", 00:21:53.634 "hostsvcid": "60000", 00:21:53.634 "prchk_reftag": false, 00:21:53.634 "prchk_guard": false, 00:21:53.634 "hdgst": false, 00:21:53.634 "ddgst": false, 00:21:53.634 "multipath": "disable", 00:21:53.634 "method": "bdev_nvme_attach_controller", 00:21:53.634 "req_id": 1 00:21:53.634 } 00:21:53.634 Got JSON-RPC error response 00:21:53.634 response: 00:21:53.634 { 00:21:53.634 "code": -114, 00:21:53.634 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:21:53.634 } 00:21:53.634 18:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:53.634 18:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:21:53.634 18:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:53.634 18:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:53.634 18:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:53.634 18:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:21:53.634 18:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:21:53.634 18:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:21:53.634 18:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:53.634 18:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:53.634 18:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:53.634 18:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:53.634 18:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:21:53.634 18:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.635 18:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:53.635 request: 00:21:53.635 { 00:21:53.635 "name": "NVMe0", 00:21:53.635 "trtype": "tcp", 00:21:53.635 "traddr": "10.0.0.2", 00:21:53.635 "adrfam": "ipv4", 00:21:53.635 "trsvcid": "4420", 00:21:53.635 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:53.635 "hostaddr": "10.0.0.2", 00:21:53.635 "hostsvcid": "60000", 00:21:53.635 "prchk_reftag": false, 00:21:53.635 "prchk_guard": false, 00:21:53.635 "hdgst": false, 00:21:53.635 "ddgst": false, 00:21:53.635 "multipath": "failover", 00:21:53.635 "method": "bdev_nvme_attach_controller", 00:21:53.635 "req_id": 1 00:21:53.635 } 00:21:53.635 Got JSON-RPC error response 00:21:53.635 response: 00:21:53.635 { 00:21:53.635 "code": -114, 00:21:53.635 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:21:53.635 } 00:21:53.635 18:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:53.635 18:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:21:53.635 18:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:53.635 18:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:53.635 18:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:53.635 18:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:53.635 18:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.635 18:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:53.635 00:21:53.635 18:16:46 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.635 18:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:53.635 18:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.635 18:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:53.635 18:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.635 18:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:21:53.635 18:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.635 18:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:53.635 00:21:53.635 18:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.635 18:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:53.635 18:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:21:53.635 18:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.635 18:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:53.635 18:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.635 18:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:21:53.635 18:16:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:55.005 0 00:21:55.005 18:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:21:55.005 18:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.005 18:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:55.005 18:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.005 18:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 3473498 00:21:55.005 18:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 3473498 ']' 00:21:55.005 18:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 3473498 00:21:55.005 18:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:21:55.005 18:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:55.005 18:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3473498 00:21:55.005 18:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_0 
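The RPC exchanges above pin down the bdev_nvme_attach_controller naming rules this test exists to verify: reusing the controller name NVMe0 with a different hostnqn, a different subsystem NQN, or a multipath mode over the already-attached 4420 path is rejected with JSON-RPC error -114, while attaching the second listener port 4421 under the same name is accepted as an additional path, and a distinct name (NVMe1) on 4421 brings bdev_nvme_get_controllers to the expected count of 2. A condensed replay against the bdevperf RPC socket, under the same hypothetical SPDK_DIR convention as above:

  RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/bdevperf.sock"
  $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000     # first path, creates bdev NVMe0n1
  $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000     # fails with -114: NVMe0 is bound to cnode1
  $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1                          # accepted: second path to the same subsystem
  $RPC bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1                          # drop the extra path again, as the test does
  $RPC bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000     # distinct name, so a second controller
  $RPC bdev_nvme_get_controllers | grep -c NVMe              # 2, matching the '[' 2 '!=' 2 ']' check above

In the try.txt results dumped during the shutdown below, the MiB/s column follows directly from the I/O size bdevperf was given (-o 4096): 24001.18 IOPS * 4096 bytes / 2^20 comes out to about 93.75 MiB/s.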
00:21:55.005 18:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:55.005 18:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3473498' 00:21:55.005 killing process with pid 3473498 00:21:55.005 18:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 3473498 00:21:55.005 18:16:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 3473498 00:21:55.005 18:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:55.005 18:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.005 18:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:55.005 18:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.005 18:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:55.005 18:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:55.005 18:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:55.005 18:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:55.005 18:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:21:55.005 18:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:55.005 18:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:21:55.005 18:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:21:55.005 18:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:21:55.005 18:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:21:55.005 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:55.005 [2024-07-24 18:16:45.354721] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 
00:21:55.005 [2024-07-24 18:16:45.354770] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3473498 ] 00:21:55.005 EAL: No free 2048 kB hugepages reported on node 1 00:21:55.005 [2024-07-24 18:16:45.412026] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:55.005 [2024-07-24 18:16:45.492033] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:55.005 [2024-07-24 18:16:46.643887] bdev.c:4633:bdev_name_add: *ERROR*: Bdev name cf3afce1-cc2f-48b9-87b0-c65d39c87541 already exists 00:21:55.005 [2024-07-24 18:16:46.643917] bdev.c:7755:bdev_register: *ERROR*: Unable to add uuid:cf3afce1-cc2f-48b9-87b0-c65d39c87541 alias for bdev NVMe1n1 00:21:55.005 [2024-07-24 18:16:46.643926] bdev_nvme.c:4318:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:21:55.005 Running I/O for 1 seconds... 00:21:55.005 00:21:55.005 Latency(us) 00:21:55.005 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:55.005 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:21:55.005 NVMe0n1 : 1.01 24001.18 93.75 0.00 0.00 5316.01 1497.97 6428.77 00:21:55.005 =================================================================================================================== 00:21:55.005 Total : 24001.18 93.75 0.00 0.00 5316.01 1497.97 6428.77 00:21:55.005 Received shutdown signal, test time was about 1.000000 seconds 00:21:55.005 00:21:55.005 Latency(us) 00:21:55.005 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:55.005 =================================================================================================================== 00:21:55.005 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:55.005 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:55.005 18:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:55.005 18:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:21:55.005 18:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:21:55.005 18:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:55.005 18:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:21:55.005 18:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:55.005 18:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:21:55.005 18:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:55.005 18:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:55.005 rmmod nvme_tcp 00:21:55.005 rmmod nvme_fabrics 00:21:55.263 rmmod nvme_keyring 00:21:55.263 18:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:55.263 18:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:21:55.263 18:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:21:55.263 18:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 3473300 ']' 00:21:55.263 18:16:48 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 3473300 00:21:55.263 18:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 3473300 ']' 00:21:55.263 18:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 3473300 00:21:55.263 18:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:21:55.263 18:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:55.263 18:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3473300 00:21:55.263 18:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:55.263 18:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:55.263 18:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3473300' 00:21:55.263 killing process with pid 3473300 00:21:55.263 18:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 3473300 00:21:55.263 18:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 3473300 00:21:55.522 18:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:55.522 18:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:55.522 18:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:55.522 18:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:55.522 18:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:55.522 18:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:55.522 18:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:55.522 18:16:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:57.421 18:16:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:57.421 00:21:57.421 real 0m11.571s 00:21:57.421 user 0m16.137s 00:21:57.421 sys 0m4.748s 00:21:57.421 18:16:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:57.421 18:16:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:57.421 ************************************ 00:21:57.421 END TEST nvmf_multicontroller 00:21:57.421 ************************************ 00:21:57.421 18:16:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:57.421 18:16:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:57.421 18:16:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:57.421 18:16:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:57.679 ************************************ 00:21:57.679 START TEST nvmf_aer 00:21:57.679 ************************************ 00:21:57.679 18:16:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:57.679 * Looking for test storage... 00:21:57.679 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:57.679 18:16:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:57.679 18:16:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:21:57.679 18:16:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:57.679 18:16:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:57.679 18:16:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:57.679 18:16:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:57.679 18:16:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:57.679 18:16:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:57.679 18:16:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:57.679 18:16:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:57.679 18:16:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:57.679 18:16:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:57.679 18:16:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:21:57.679 18:16:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:21:57.679 18:16:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:57.679 18:16:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:57.679 18:16:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:57.679 18:16:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:57.679 18:16:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:57.679 18:16:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:57.679 18:16:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:57.679 18:16:50 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:57.679 18:16:50 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.679 18:16:50 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.679 18:16:50 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.679 18:16:50 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:21:57.679 18:16:50 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.679 18:16:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:21:57.679 18:16:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:57.679 18:16:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:57.679 18:16:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:57.679 18:16:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:57.679 18:16:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:57.679 18:16:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:57.679 18:16:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:57.679 18:16:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:57.679 18:16:50 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:21:57.679 18:16:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:57.679 18:16:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:57.679 18:16:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:57.679 18:16:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:57.679 18:16:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:57.679 18:16:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- 
# xtrace_disable_per_cmd _remove_spdk_ns 00:21:57.679 18:16:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:57.679 18:16:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:57.679 18:16:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:57.679 18:16:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:57.679 18:16:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:21:57.679 18:16:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:02.986 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:02.986 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:22:02.986 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:02.986 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:02.986 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:02.986 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:02.986 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:02.986 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:22:02.986 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:02.986 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:22:02.986 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:22:02.986 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:22:02.986 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:22:02.986 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:22:02.986 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:22:02.986 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:02.986 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:02.986 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:02.986 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:02.986 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:02.986 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:02.986 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:02.986 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:02.986 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:02.986 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:02.986 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:02.986 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:02.986 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:02.986 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:02.986 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:02.986 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:02.986 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:02.986 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:02.986 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:02.986 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:02.986 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:02.986 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:02.986 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:02.986 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:02.986 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:02.986 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:02.986 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:02.986 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:02.986 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:02.986 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:02.986 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:02.986 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:02.986 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:02.986 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:02.986 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:02.986 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:02.986 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:02.986 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:02.986 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:02.986 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:02.986 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:02.986 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:02.986 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:02.986 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:02.986 Found net devices under 0000:86:00.0: cvl_0_0 00:22:02.986 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:02.986 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:02.986 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:02.986 18:16:55 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:02.986 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:02.986 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:02.986 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:02.986 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:02.986 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:02.986 Found net devices under 0000:86:00.1: cvl_0_1 00:22:02.986 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:02.986 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:02.986 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:22:02.986 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:02.986 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:02.986 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:02.986 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:02.986 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:02.986 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:02.986 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:02.986 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:02.986 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:02.986 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:02.986 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:02.986 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:02.986 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:02.986 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:02.986 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:02.986 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:02.986 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:02.986 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:02.986 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:02.986 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:02.986 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:02.986 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:02.986 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:02.986 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of 
data. 00:22:02.986 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.164 ms 00:22:02.986 00:22:02.986 --- 10.0.0.2 ping statistics --- 00:22:02.986 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:02.986 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:22:02.986 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:02.986 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:02.986 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:22:02.986 00:22:02.986 --- 10.0.0.1 ping statistics --- 00:22:02.986 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:02.986 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:22:02.986 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:02.986 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:22:02.986 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:02.986 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:02.986 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:02.986 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:02.986 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:02.987 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:02.987 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:02.987 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:22:02.987 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:02.987 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:02.987 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:02.987 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=3477325 00:22:02.987 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 3477325 00:22:02.987 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 3477325 ']' 00:22:02.987 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:02.987 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:02.987 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:02.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:02.987 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:02.987 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:02.987 18:16:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:02.987 [2024-07-24 18:16:55.829770] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 
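The ping exchange above completes nvmf_tcp_init: one port of the E810 pair, cvl_0_0, becomes the target-side interface inside the cvl_0_0_ns_spdk namespace, while its sibling cvl_0_1 stays in the root namespace as the initiator. Condensed from the trace into plain commands (a readable summary of the wiring, not a drop-in script), the setup is:

    # Target interface lives in its own namespace; nvmf_tgt is then started
    # with `ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt` (see the nvmfappstart
    # lines that follow).
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target

    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Let NVMe/TCP traffic on the default port 4420 through the host
    # firewall, then sanity-check reachability in both directions:
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1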
00:22:02.987 [2024-07-24 18:16:55.829813] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:02.987 EAL: No free 2048 kB hugepages reported on node 1 00:22:02.987 [2024-07-24 18:16:55.886358] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:02.987 [2024-07-24 18:16:55.966473] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:02.987 [2024-07-24 18:16:55.966514] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:02.987 [2024-07-24 18:16:55.966520] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:02.987 [2024-07-24 18:16:55.966526] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:02.987 [2024-07-24 18:16:55.966531] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:02.987 [2024-07-24 18:16:55.966565] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:02.987 [2024-07-24 18:16:55.966664] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:02.987 [2024-07-24 18:16:55.966682] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:02.987 [2024-07-24 18:16:55.966683] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:03.919 18:16:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:03.919 18:16:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:22:03.919 18:16:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:03.919 18:16:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:03.919 18:16:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:03.919 18:16:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:03.919 18:16:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:03.919 18:16:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.919 18:16:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:03.919 [2024-07-24 18:16:56.691815] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:03.919 18:16:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.919 18:16:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:22:03.919 18:16:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.919 18:16:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:03.919 Malloc0 00:22:03.919 18:16:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.919 18:16:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:22:03.919 18:16:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.919 18:16:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:03.919 18:16:56 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.919 18:16:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:03.919 18:16:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.919 18:16:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:03.919 18:16:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.919 18:16:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:03.919 18:16:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.919 18:16:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:03.919 [2024-07-24 18:16:56.743471] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:03.919 18:16:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.919 18:16:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:22:03.919 18:16:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:03.919 18:16:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:03.919 [ 00:22:03.919 { 00:22:03.919 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:03.919 "subtype": "Discovery", 00:22:03.919 "listen_addresses": [], 00:22:03.919 "allow_any_host": true, 00:22:03.919 "hosts": [] 00:22:03.919 }, 00:22:03.919 { 00:22:03.919 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:03.919 "subtype": "NVMe", 00:22:03.919 "listen_addresses": [ 00:22:03.919 { 00:22:03.919 "trtype": "TCP", 00:22:03.919 "adrfam": "IPv4", 00:22:03.919 "traddr": "10.0.0.2", 00:22:03.919 "trsvcid": "4420" 00:22:03.919 } 00:22:03.919 ], 00:22:03.919 "allow_any_host": true, 00:22:03.919 "hosts": [], 00:22:03.919 "serial_number": "SPDK00000000000001", 00:22:03.919 "model_number": "SPDK bdev Controller", 00:22:03.919 "max_namespaces": 2, 00:22:03.919 "min_cntlid": 1, 00:22:03.919 "max_cntlid": 65519, 00:22:03.919 "namespaces": [ 00:22:03.919 { 00:22:03.919 "nsid": 1, 00:22:03.919 "bdev_name": "Malloc0", 00:22:03.919 "name": "Malloc0", 00:22:03.919 "nguid": "BE8C649EDB984A63A2A2CF89EBF28E0B", 00:22:03.919 "uuid": "be8c649e-db98-4a63-a2a2-cf89ebf28e0b" 00:22:03.919 } 00:22:03.919 ] 00:22:03.919 } 00:22:03.919 ] 00:22:03.919 18:16:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:03.919 18:16:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:22:03.919 18:16:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:22:03.919 18:16:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=3477545 00:22:03.919 18:16:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:22:03.919 18:16:56 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:22:03.919 18:16:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:22:03.919 18:16:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:22:03.919 18:16:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:22:03.919 18:16:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:22:03.919 18:16:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:22:03.919 EAL: No free 2048 kB hugepages reported on node 1 00:22:03.919 18:16:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:03.919 18:16:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:22:03.919 18:16:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:22:03.919 18:16:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:22:03.919 18:16:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:03.919 18:16:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 2 -lt 200 ']' 00:22:03.919 18:16:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=3 00:22:03.919 18:16:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:22:04.177 18:16:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:04.177 18:16:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:04.177 18:16:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:22:04.177 18:16:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:22:04.177 18:16:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.177 18:16:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:04.177 Malloc1 00:22:04.177 18:16:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.177 18:16:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:22:04.177 18:16:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.177 18:16:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:04.177 18:16:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.177 18:16:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:22:04.177 18:16:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.177 18:16:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:04.177 Asynchronous Event Request test 00:22:04.177 Attaching to 10.0.0.2 00:22:04.177 Attached to 10.0.0.2 00:22:04.177 Registering asynchronous event callbacks... 00:22:04.178 Starting namespace attribute notice tests for all controllers... 00:22:04.178 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:22:04.178 aer_cb - Changed Namespace 00:22:04.178 Cleaning up... 
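The waitforfile polling above (host/aer.sh@36) is the whole synchronization mechanism between the test script and the aer binary: the binary touches /tmp/aer_touch_file once its AER callbacks are armed, and the script busy-waits on that path. Reconstructed from the i=0..3 / sleep 0.1 steps in the trace, the helper behaves roughly like this (a sketch; the real common.sh implementation may differ in details):

    # Bounded poll: up to 200 iterations of 0.1 s (~20 s) for a file to appear.
    waitforfile() {
        local i=0
        while [ ! -e "$1" ] && [ "$i" -lt 200 ]; do
            i=$((i + 1))
            sleep 0.1
        done
        [ -e "$1" ]   # non-zero exit if the file never showed up
    }

    waitforfile /tmp/aer_touch_file

Only once the touch file exists does the script create Malloc1 and add it as namespace 2, which is what triggers the "Changed Namespace" AER logged above and the nsid-2 entry in the subsystem dump that follows.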
00:22:04.178 [ 00:22:04.178 { 00:22:04.178 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:04.178 "subtype": "Discovery", 00:22:04.178 "listen_addresses": [], 00:22:04.178 "allow_any_host": true, 00:22:04.178 "hosts": [] 00:22:04.178 }, 00:22:04.178 { 00:22:04.178 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:04.178 "subtype": "NVMe", 00:22:04.178 "listen_addresses": [ 00:22:04.178 { 00:22:04.178 "trtype": "TCP", 00:22:04.178 "adrfam": "IPv4", 00:22:04.178 "traddr": "10.0.0.2", 00:22:04.178 "trsvcid": "4420" 00:22:04.178 } 00:22:04.178 ], 00:22:04.178 "allow_any_host": true, 00:22:04.178 "hosts": [], 00:22:04.178 "serial_number": "SPDK00000000000001", 00:22:04.178 "model_number": "SPDK bdev Controller", 00:22:04.178 "max_namespaces": 2, 00:22:04.178 "min_cntlid": 1, 00:22:04.178 "max_cntlid": 65519, 00:22:04.178 "namespaces": [ 00:22:04.178 { 00:22:04.178 "nsid": 1, 00:22:04.178 "bdev_name": "Malloc0", 00:22:04.178 "name": "Malloc0", 00:22:04.178 "nguid": "BE8C649EDB984A63A2A2CF89EBF28E0B", 00:22:04.178 "uuid": "be8c649e-db98-4a63-a2a2-cf89ebf28e0b" 00:22:04.178 }, 00:22:04.178 { 00:22:04.178 "nsid": 2, 00:22:04.178 "bdev_name": "Malloc1", 00:22:04.178 "name": "Malloc1", 00:22:04.178 "nguid": "FF1FD09EB3864541BC405E192DACADBA", 00:22:04.178 "uuid": "ff1fd09e-b386-4541-bc40-5e192dacadba" 00:22:04.178 } 00:22:04.178 ] 00:22:04.178 } 00:22:04.178 ] 00:22:04.178 18:16:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.178 18:16:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 3477545 00:22:04.178 18:16:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:22:04.178 18:16:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.178 18:16:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:04.178 18:16:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.178 18:16:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:22:04.178 18:16:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.178 18:16:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:04.178 18:16:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.178 18:16:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:04.178 18:16:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:04.178 18:16:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:04.178 18:16:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:04.178 18:16:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:22:04.178 18:16:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:22:04.178 18:16:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:04.178 18:16:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:22:04.178 18:16:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:04.178 18:16:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:22:04.178 18:16:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:04.178 18:16:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:04.178 rmmod 
nvme_tcp 00:22:04.178 rmmod nvme_fabrics 00:22:04.178 rmmod nvme_keyring 00:22:04.178 18:16:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:04.178 18:16:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:22:04.178 18:16:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:22:04.178 18:16:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 3477325 ']' 00:22:04.436 18:16:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 3477325 00:22:04.436 18:16:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 3477325 ']' 00:22:04.436 18:16:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 3477325 00:22:04.436 18:16:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # uname 00:22:04.436 18:16:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:04.436 18:16:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3477325 00:22:04.436 18:16:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:04.436 18:16:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:04.436 18:16:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3477325' 00:22:04.436 killing process with pid 3477325 00:22:04.436 18:16:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 3477325 00:22:04.436 18:16:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 3477325 00:22:04.436 18:16:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:04.436 18:16:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:04.436 18:16:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:04.436 18:16:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:04.436 18:16:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:04.436 18:16:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:04.436 18:16:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:04.436 18:16:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:06.967 18:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:06.967 00:22:06.967 real 0m9.032s 00:22:06.967 user 0m7.495s 00:22:06.967 sys 0m4.328s 00:22:06.967 18:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:06.967 18:16:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:06.967 ************************************ 00:22:06.967 END TEST nvmf_aer 00:22:06.967 ************************************ 00:22:06.967 18:16:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:06.967 18:16:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:06.967 18:16:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:06.967 18:16:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:06.967 
************************************ 00:22:06.967 START TEST nvmf_async_init 00:22:06.967 ************************************ 00:22:06.967 18:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:06.967 * Looking for test storage... 00:22:06.967 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:06.968 18:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:06.968 18:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:22:06.968 18:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:06.968 18:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:06.968 18:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:06.968 18:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:06.968 18:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:06.968 18:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:06.968 18:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:06.968 18:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:06.968 18:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:06.968 18:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:06.968 18:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:22:06.968 18:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:22:06.968 18:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:06.968 18:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:06.968 18:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:06.968 18:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:06.968 18:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:06.968 18:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:06.968 18:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:06.968 18:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:06.968 18:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.968 18:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.968 18:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.968 18:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:22:06.968 18:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.968 18:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:22:06.968 18:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:06.968 18:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:06.968 18:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:06.968 18:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:06.968 18:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:06.968 18:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:06.968 18:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:06.968 
18:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:06.968 18:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:22:06.968 18:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:22:06.968 18:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:22:06.968 18:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:22:06.968 18:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:22:06.968 18:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:22:06.968 18:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=043e32cef2aa4e30aad5fc67cc24f525 00:22:06.968 18:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:22:06.968 18:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:06.968 18:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:06.968 18:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:06.968 18:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:06.968 18:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:06.968 18:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:06.968 18:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:06.968 18:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:06.968 18:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:06.968 18:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:06.968 18:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:22:06.968 18:16:59 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:12.230 18:17:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:12.230 18:17:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:22:12.230 18:17:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:12.230 18:17:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:12.230 18:17:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:12.230 18:17:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:12.230 18:17:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:12.230 18:17:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:22:12.230 18:17:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:12.230 18:17:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:22:12.230 18:17:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:22:12.230 18:17:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:22:12.230 18:17:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 
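The e810/x722/mlx arrays being initialized here drive NIC auto-detection in nvmftestinit: each supported NIC family is a bucket of PCI "vendor:device" IDs, and the bus scan keeps only the family selected by SPDK_TEST_NVMF_NICS. A condensed sketch of what gather_supported_nvmf_pci_devs does next, assuming pci_bus_cache maps "vendor:device" to PCI addresses and is filled by a part of common.sh this excerpt does not show (the exact variable guarding the transport check is also an assumption; the trace only shows its expanded form):

    # Per-family device-ID buckets; from the trace, intel=0x8086, mellanox=0x15b3.
    e810+=(${pci_bus_cache["$intel:0x1592"]})     # E810 variant
    e810+=(${pci_bus_cache["$intel:0x159b"]})     # E810 variant (matched on this node)
    x722+=(${pci_bus_cache["$intel:0x37d2"]})
    mlx+=(${pci_bus_cache["$mellanox:0x1017"]})   # e.g. ConnectX-5 (one of several IDs)

    # TCP runs only consider the family under test; an RDMA run would also
    # pull in the x722/mlx buckets (the `[[ tcp == rdma ]]` checks above):
    pci_devs+=("${e810[@]}")
    # With SPDK_TEST_NVMF_NICS=e810 the list is then narrowed to the E810
    # ports, which is why the scan reports the two 0x159b functions at
    # 0000:86:00.0 and 0000:86:00.1 as cvl_0_0/cvl_0_1 below.
    pci_devs=("${e810[@]}")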
00:22:12.230 18:17:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:22:12.230 18:17:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:22:12.230 18:17:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:12.230 18:17:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:12.230 18:17:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:12.231 18:17:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:12.231 18:17:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:12.231 18:17:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:12.231 18:17:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:12.231 18:17:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:12.231 18:17:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:12.231 18:17:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:12.231 18:17:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:12.231 18:17:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:12.231 18:17:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:12.231 18:17:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:12.231 18:17:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:12.231 18:17:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:12.231 18:17:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:12.231 18:17:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:12.231 18:17:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:12.231 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:12.231 18:17:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:12.231 18:17:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:12.231 18:17:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:12.231 18:17:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:12.231 18:17:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:12.231 18:17:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:12.231 18:17:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:12.231 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:12.231 18:17:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:12.231 18:17:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:12.231 
18:17:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:12.231 18:17:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:12.231 18:17:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:12.231 18:17:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:12.231 18:17:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:12.231 18:17:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:12.231 18:17:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:12.231 18:17:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:12.231 18:17:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:12.231 18:17:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:12.231 18:17:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:12.231 18:17:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:12.231 18:17:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:12.231 18:17:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:12.231 Found net devices under 0000:86:00.0: cvl_0_0 00:22:12.231 18:17:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:12.231 18:17:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:12.231 18:17:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:12.231 18:17:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:12.231 18:17:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:12.231 18:17:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:12.231 18:17:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:12.231 18:17:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:12.231 18:17:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:12.231 Found net devices under 0000:86:00.1: cvl_0_1 00:22:12.231 18:17:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:12.231 18:17:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:12.231 18:17:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:22:12.231 18:17:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:12.231 18:17:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:12.231 18:17:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:12.231 18:17:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:12.231 18:17:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:12.231 18:17:04 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:12.231 18:17:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:12.231 18:17:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:12.231 18:17:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:12.231 18:17:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:12.231 18:17:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:12.231 18:17:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:12.231 18:17:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:12.231 18:17:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:12.231 18:17:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:12.231 18:17:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:12.231 18:17:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:12.231 18:17:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:12.231 18:17:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:12.231 18:17:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:12.231 18:17:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:12.231 18:17:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:12.231 18:17:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:12.231 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:12.231 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:22:12.231 00:22:12.231 --- 10.0.0.2 ping statistics --- 00:22:12.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:12.231 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:22:12.231 18:17:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:12.231 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:12.231 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:22:12.231 00:22:12.231 --- 10.0.0.1 ping statistics --- 00:22:12.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:12.231 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:22:12.231 18:17:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:12.231 18:17:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:22:12.231 18:17:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:12.231 18:17:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:12.231 18:17:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:12.231 18:17:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:12.231 18:17:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:12.231 18:17:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:12.231 18:17:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:12.231 18:17:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:22:12.231 18:17:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:12.231 18:17:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:12.231 18:17:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:12.231 18:17:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=3480915 00:22:12.231 18:17:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:22:12.231 18:17:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 3480915 00:22:12.231 18:17:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 3480915 ']' 00:22:12.231 18:17:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:12.231 18:17:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:12.231 18:17:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:12.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:12.231 18:17:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:12.231 18:17:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:12.231 [2024-07-24 18:17:05.104129] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 
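Note: the nvmf_tcp_init sequence traced above builds a two-port loop on a single host: the target-side port cvl_0_0 moves into the cvl_0_0_ns_spdk namespace as 10.0.0.2, while the initiator keeps cvl_0_1 as 10.0.0.1 in the default namespace. Reconstructed from the traced commands (interface names are specific to this E810 box), the topology is:

    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1        # start clean
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                   # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                         # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                          # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1            # target -> initiator

The two sub-millisecond pings above double as the readiness check before nvmftestinit returns, and the target app (nvmf_tgt -i 0 -e 0xFFFF -m 0x1) is then launched inside the namespace via NVMF_TARGET_NS_CMD.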
00:22:12.232 [2024-07-24 18:17:05.104170] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:12.232 EAL: No free 2048 kB hugepages reported on node 1 00:22:12.232 [2024-07-24 18:17:05.161833] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:12.232 [2024-07-24 18:17:05.239917] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:12.232 [2024-07-24 18:17:05.239953] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:12.232 [2024-07-24 18:17:05.239960] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:12.232 [2024-07-24 18:17:05.239966] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:12.232 [2024-07-24 18:17:05.239971] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:12.232 [2024-07-24 18:17:05.239992] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:13.160 18:17:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:13.160 18:17:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:22:13.160 18:17:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:13.160 18:17:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:13.160 18:17:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:13.160 18:17:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:13.160 18:17:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:22:13.160 18:17:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.160 18:17:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:13.160 [2024-07-24 18:17:05.938112] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:13.160 18:17:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.160 18:17:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:22:13.160 18:17:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.160 18:17:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:13.160 null0 00:22:13.160 18:17:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.160 18:17:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:22:13.160 18:17:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.160 18:17:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:13.160 18:17:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.160 18:17:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:22:13.160 18:17:05 
nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.160 18:17:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:13.160 18:17:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.160 18:17:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 043e32cef2aa4e30aad5fc67cc24f525 00:22:13.160 18:17:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.160 18:17:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:13.160 18:17:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.161 18:17:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:13.161 18:17:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.161 18:17:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:13.161 [2024-07-24 18:17:05.982313] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:13.161 18:17:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.161 18:17:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:22:13.161 18:17:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.161 18:17:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:13.161 nvme0n1 00:22:13.161 18:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.161 18:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:13.161 18:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.161 18:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:13.161 [ 00:22:13.161 { 00:22:13.161 "name": "nvme0n1", 00:22:13.161 "aliases": [ 00:22:13.161 "043e32ce-f2aa-4e30-aad5-fc67cc24f525" 00:22:13.161 ], 00:22:13.161 "product_name": "NVMe disk", 00:22:13.161 "block_size": 512, 00:22:13.161 "num_blocks": 2097152, 00:22:13.161 "uuid": "043e32ce-f2aa-4e30-aad5-fc67cc24f525", 00:22:13.161 "assigned_rate_limits": { 00:22:13.161 "rw_ios_per_sec": 0, 00:22:13.161 "rw_mbytes_per_sec": 0, 00:22:13.161 "r_mbytes_per_sec": 0, 00:22:13.161 "w_mbytes_per_sec": 0 00:22:13.161 }, 00:22:13.161 "claimed": false, 00:22:13.161 "zoned": false, 00:22:13.161 "supported_io_types": { 00:22:13.161 "read": true, 00:22:13.161 "write": true, 00:22:13.161 "unmap": false, 00:22:13.161 "flush": true, 00:22:13.161 "reset": true, 00:22:13.161 "nvme_admin": true, 00:22:13.161 "nvme_io": true, 00:22:13.161 "nvme_io_md": false, 00:22:13.161 "write_zeroes": true, 00:22:13.161 "zcopy": false, 00:22:13.161 "get_zone_info": false, 00:22:13.161 "zone_management": false, 00:22:13.161 "zone_append": false, 00:22:13.161 "compare": true, 00:22:13.161 "compare_and_write": true, 00:22:13.161 "abort": true, 00:22:13.161 "seek_hole": false, 00:22:13.161 "seek_data": false, 00:22:13.161 "copy": true, 00:22:13.161 "nvme_iov_md": 
false 00:22:13.161 }, 00:22:13.161 "memory_domains": [ 00:22:13.161 { 00:22:13.161 "dma_device_id": "system", 00:22:13.161 "dma_device_type": 1 00:22:13.161 } 00:22:13.161 ], 00:22:13.161 "driver_specific": { 00:22:13.161 "nvme": [ 00:22:13.161 { 00:22:13.161 "trid": { 00:22:13.161 "trtype": "TCP", 00:22:13.161 "adrfam": "IPv4", 00:22:13.161 "traddr": "10.0.0.2", 00:22:13.161 "trsvcid": "4420", 00:22:13.161 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:13.161 }, 00:22:13.161 "ctrlr_data": { 00:22:13.161 "cntlid": 1, 00:22:13.161 "vendor_id": "0x8086", 00:22:13.161 "model_number": "SPDK bdev Controller", 00:22:13.161 "serial_number": "00000000000000000000", 00:22:13.161 "firmware_revision": "24.09", 00:22:13.161 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:13.161 "oacs": { 00:22:13.161 "security": 0, 00:22:13.161 "format": 0, 00:22:13.161 "firmware": 0, 00:22:13.161 "ns_manage": 0 00:22:13.161 }, 00:22:13.161 "multi_ctrlr": true, 00:22:13.161 "ana_reporting": false 00:22:13.161 }, 00:22:13.161 "vs": { 00:22:13.161 "nvme_version": "1.3" 00:22:13.161 }, 00:22:13.161 "ns_data": { 00:22:13.161 "id": 1, 00:22:13.161 "can_share": true 00:22:13.161 } 00:22:13.161 } 00:22:13.161 ], 00:22:13.419 "mp_policy": "active_passive" 00:22:13.419 } 00:22:13.419 } 00:22:13.419 ] 00:22:13.419 18:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.419 18:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:22:13.419 18:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.419 18:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:13.419 [2024-07-24 18:17:06.247809] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:13.419 [2024-07-24 18:17:06.247884] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa96390 (9): Bad file descriptor 00:22:13.419 [2024-07-24 18:17:06.379568] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
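Note: stripped of the rpc_cmd/xtrace wrappers, the bring-up that produced the bdev dump above reduces to the RPC sequence below (a sketch assuming SPDK's scripts/rpc.py, which the rpc_cmd helper presumably wraps here, talking to the nvmf_tgt started with -i 0 inside cvl_0_0_ns_spdk):

    rpc.py nvmf_create_transport -t tcp -o
    rpc.py bdev_null_create null0 1024 512
    rpc.py bdev_wait_for_examine
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 043e32cef2aa4e30aad5fc67cc24f525
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
    rpc.py bdev_get_bdevs -b nvme0n1            # first dump: cntlid 1
    rpc.py bdev_nvme_reset_controller nvme0     # disconnect/reconnect

The cntlid bump from 1 to 2 in the dump that follows is the observable evidence that the reset tore down the association and re-established it against the same subsystem.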
00:22:13.419 18:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.419 18:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:13.419 18:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.419 18:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:13.419 [ 00:22:13.419 { 00:22:13.419 "name": "nvme0n1", 00:22:13.419 "aliases": [ 00:22:13.419 "043e32ce-f2aa-4e30-aad5-fc67cc24f525" 00:22:13.419 ], 00:22:13.419 "product_name": "NVMe disk", 00:22:13.419 "block_size": 512, 00:22:13.419 "num_blocks": 2097152, 00:22:13.419 "uuid": "043e32ce-f2aa-4e30-aad5-fc67cc24f525", 00:22:13.419 "assigned_rate_limits": { 00:22:13.419 "rw_ios_per_sec": 0, 00:22:13.419 "rw_mbytes_per_sec": 0, 00:22:13.419 "r_mbytes_per_sec": 0, 00:22:13.419 "w_mbytes_per_sec": 0 00:22:13.419 }, 00:22:13.419 "claimed": false, 00:22:13.419 "zoned": false, 00:22:13.419 "supported_io_types": { 00:22:13.419 "read": true, 00:22:13.419 "write": true, 00:22:13.419 "unmap": false, 00:22:13.419 "flush": true, 00:22:13.419 "reset": true, 00:22:13.419 "nvme_admin": true, 00:22:13.419 "nvme_io": true, 00:22:13.419 "nvme_io_md": false, 00:22:13.419 "write_zeroes": true, 00:22:13.419 "zcopy": false, 00:22:13.419 "get_zone_info": false, 00:22:13.419 "zone_management": false, 00:22:13.419 "zone_append": false, 00:22:13.419 "compare": true, 00:22:13.420 "compare_and_write": true, 00:22:13.420 "abort": true, 00:22:13.420 "seek_hole": false, 00:22:13.420 "seek_data": false, 00:22:13.420 "copy": true, 00:22:13.420 "nvme_iov_md": false 00:22:13.420 }, 00:22:13.420 "memory_domains": [ 00:22:13.420 { 00:22:13.420 "dma_device_id": "system", 00:22:13.420 "dma_device_type": 1 00:22:13.420 } 00:22:13.420 ], 00:22:13.420 "driver_specific": { 00:22:13.420 "nvme": [ 00:22:13.420 { 00:22:13.420 "trid": { 00:22:13.420 "trtype": "TCP", 00:22:13.420 "adrfam": "IPv4", 00:22:13.420 "traddr": "10.0.0.2", 00:22:13.420 "trsvcid": "4420", 00:22:13.420 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:13.420 }, 00:22:13.420 "ctrlr_data": { 00:22:13.420 "cntlid": 2, 00:22:13.420 "vendor_id": "0x8086", 00:22:13.420 "model_number": "SPDK bdev Controller", 00:22:13.420 "serial_number": "00000000000000000000", 00:22:13.420 "firmware_revision": "24.09", 00:22:13.420 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:13.420 "oacs": { 00:22:13.420 "security": 0, 00:22:13.420 "format": 0, 00:22:13.420 "firmware": 0, 00:22:13.420 "ns_manage": 0 00:22:13.420 }, 00:22:13.420 "multi_ctrlr": true, 00:22:13.420 "ana_reporting": false 00:22:13.420 }, 00:22:13.420 "vs": { 00:22:13.420 "nvme_version": "1.3" 00:22:13.420 }, 00:22:13.420 "ns_data": { 00:22:13.420 "id": 1, 00:22:13.420 "can_share": true 00:22:13.420 } 00:22:13.420 } 00:22:13.420 ], 00:22:13.420 "mp_policy": "active_passive" 00:22:13.420 } 00:22:13.420 } 00:22:13.420 ] 00:22:13.420 18:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.420 18:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:13.420 18:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.420 18:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:13.420 18:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.420 18:17:06 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:22:13.420 18:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.0OSzkDgkts 00:22:13.420 18:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:13.420 18:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.0OSzkDgkts 00:22:13.420 18:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:22:13.420 18:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.420 18:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:13.420 18:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.420 18:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:22:13.420 18:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.420 18:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:13.420 [2024-07-24 18:17:06.440369] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:13.420 [2024-07-24 18:17:06.440465] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:13.420 18:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.420 18:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.0OSzkDgkts 00:22:13.420 18:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.420 18:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:13.420 [2024-07-24 18:17:06.448381] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:13.420 18:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.420 18:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.0OSzkDgkts 00:22:13.420 18:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.420 18:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:13.420 [2024-07-24 18:17:06.460429] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:13.420 [2024-07-24 18:17:06.460461] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:13.678 nvme0n1 00:22:13.678 18:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.678 18:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:13.678 18:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 
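Note: the TLS leg just traced repeats the attach with a pre-shared key on a second, secure-channel listener; the log itself warns that both the PSK-path RPC argument and spdk_nvme_ctrlr_opts.psk are deprecated for removal in v24.09. A sketch of the traced calls (same rpc.py assumption as above):

    key=$(mktemp)          # /tmp/tmp.0OSzkDgkts in this run
    echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$key"
    chmod 0600 "$key"
    rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk "$key"
    rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
           -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$key"

The dump that follows shows the TLS-attached controller: same bdev and NGUID, but trsvcid 4421 and cntlid 3.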
00:22:13.678 18:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:13.678 [ 00:22:13.678 { 00:22:13.678 "name": "nvme0n1", 00:22:13.678 "aliases": [ 00:22:13.678 "043e32ce-f2aa-4e30-aad5-fc67cc24f525" 00:22:13.678 ], 00:22:13.678 "product_name": "NVMe disk", 00:22:13.678 "block_size": 512, 00:22:13.678 "num_blocks": 2097152, 00:22:13.678 "uuid": "043e32ce-f2aa-4e30-aad5-fc67cc24f525", 00:22:13.678 "assigned_rate_limits": { 00:22:13.678 "rw_ios_per_sec": 0, 00:22:13.678 "rw_mbytes_per_sec": 0, 00:22:13.678 "r_mbytes_per_sec": 0, 00:22:13.678 "w_mbytes_per_sec": 0 00:22:13.678 }, 00:22:13.678 "claimed": false, 00:22:13.678 "zoned": false, 00:22:13.678 "supported_io_types": { 00:22:13.678 "read": true, 00:22:13.678 "write": true, 00:22:13.678 "unmap": false, 00:22:13.678 "flush": true, 00:22:13.678 "reset": true, 00:22:13.678 "nvme_admin": true, 00:22:13.678 "nvme_io": true, 00:22:13.678 "nvme_io_md": false, 00:22:13.678 "write_zeroes": true, 00:22:13.678 "zcopy": false, 00:22:13.678 "get_zone_info": false, 00:22:13.678 "zone_management": false, 00:22:13.678 "zone_append": false, 00:22:13.678 "compare": true, 00:22:13.678 "compare_and_write": true, 00:22:13.678 "abort": true, 00:22:13.678 "seek_hole": false, 00:22:13.678 "seek_data": false, 00:22:13.678 "copy": true, 00:22:13.678 "nvme_iov_md": false 00:22:13.678 }, 00:22:13.678 "memory_domains": [ 00:22:13.678 { 00:22:13.678 "dma_device_id": "system", 00:22:13.678 "dma_device_type": 1 00:22:13.678 } 00:22:13.678 ], 00:22:13.678 "driver_specific": { 00:22:13.678 "nvme": [ 00:22:13.678 { 00:22:13.678 "trid": { 00:22:13.678 "trtype": "TCP", 00:22:13.678 "adrfam": "IPv4", 00:22:13.678 "traddr": "10.0.0.2", 00:22:13.678 "trsvcid": "4421", 00:22:13.678 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:13.678 }, 00:22:13.678 "ctrlr_data": { 00:22:13.678 "cntlid": 3, 00:22:13.678 "vendor_id": "0x8086", 00:22:13.678 "model_number": "SPDK bdev Controller", 00:22:13.678 "serial_number": "00000000000000000000", 00:22:13.678 "firmware_revision": "24.09", 00:22:13.678 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:13.678 "oacs": { 00:22:13.678 "security": 0, 00:22:13.678 "format": 0, 00:22:13.678 "firmware": 0, 00:22:13.678 "ns_manage": 0 00:22:13.678 }, 00:22:13.678 "multi_ctrlr": true, 00:22:13.678 "ana_reporting": false 00:22:13.678 }, 00:22:13.678 "vs": { 00:22:13.678 "nvme_version": "1.3" 00:22:13.678 }, 00:22:13.678 "ns_data": { 00:22:13.678 "id": 1, 00:22:13.678 "can_share": true 00:22:13.678 } 00:22:13.678 } 00:22:13.678 ], 00:22:13.678 "mp_policy": "active_passive" 00:22:13.678 } 00:22:13.678 } 00:22:13.678 ] 00:22:13.678 18:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.678 18:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:13.678 18:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.678 18:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:13.678 18:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.678 18:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.0OSzkDgkts 00:22:13.678 18:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:22:13.678 18:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:22:13.678 18:17:06 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:13.678 18:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:22:13.678 18:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:13.678 18:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:22:13.678 18:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:13.678 18:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:13.678 rmmod nvme_tcp 00:22:13.678 rmmod nvme_fabrics 00:22:13.678 rmmod nvme_keyring 00:22:13.678 18:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:13.678 18:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:22:13.678 18:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:22:13.678 18:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 3480915 ']' 00:22:13.678 18:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 3480915 00:22:13.678 18:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 3480915 ']' 00:22:13.678 18:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 3480915 00:22:13.678 18:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:22:13.678 18:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:13.678 18:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3480915 00:22:13.678 18:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:13.678 18:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:13.678 18:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3480915' 00:22:13.678 killing process with pid 3480915 00:22:13.678 18:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 3480915 00:22:13.678 [2024-07-24 18:17:06.678522] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:13.679 [2024-07-24 18:17:06.678544] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:13.679 18:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 3480915 00:22:13.936 18:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:13.936 18:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:13.936 18:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:13.936 18:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:13.936 18:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:13.936 18:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:13.936 18:17:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:13.936 18:17:06 
nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:15.835 18:17:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:15.835 00:22:15.835 real 0m9.292s 00:22:15.835 user 0m3.473s 00:22:15.835 sys 0m4.341s 00:22:15.835 18:17:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:15.835 18:17:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:15.835 ************************************ 00:22:15.835 END TEST nvmf_async_init 00:22:15.835 ************************************ 00:22:16.093 18:17:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:16.093 18:17:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:16.093 18:17:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:16.093 18:17:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:16.093 ************************************ 00:22:16.093 START TEST dma 00:22:16.093 ************************************ 00:22:16.093 18:17:08 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:16.093 * Looking for test storage... 00:22:16.093 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:16.093 18:17:09 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:16.093 18:17:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:22:16.093 18:17:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:16.093 18:17:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:16.093 18:17:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:16.093 18:17:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:16.093 18:17:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:16.093 18:17:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:16.093 18:17:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:16.093 18:17:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:16.093 18:17:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:16.093 18:17:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:16.093 18:17:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:22:16.093 18:17:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:22:16.093 18:17:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:16.093 18:17:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:16.093 18:17:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:16.093 18:17:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:16.093 18:17:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:16.093 
18:17:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:16.093 18:17:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:16.093 18:17:09 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:16.094 [18:17:09 paths/export.sh@2-@6: PATH re-exported and echoed exactly as in the nvmf_async_init trace above; duplicate dumps elided] 00:22:16.094 18:17:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@47 -- # : 0 00:22:16.094 18:17:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:16.094 18:17:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:16.094 18:17:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:16.094 18:17:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:16.094 18:17:09 
nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:16.094 18:17:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:16.094 18:17:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:16.094 18:17:09 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:16.094 18:17:09 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:22:16.094 18:17:09 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:22:16.094 00:22:16.094 real 0m0.109s 00:22:16.094 user 0m0.054s 00:22:16.094 sys 0m0.063s 00:22:16.094 18:17:09 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:16.094 18:17:09 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:22:16.094 ************************************ 00:22:16.094 END TEST dma 00:22:16.094 ************************************ 00:22:16.094 18:17:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:16.094 18:17:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:16.094 18:17:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:16.094 18:17:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:16.094 ************************************ 00:22:16.094 START TEST nvmf_identify 00:22:16.094 ************************************ 00:22:16.094 18:17:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:16.352 * Looking for test storage... 00:22:16.352 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:16.352 18:17:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:16.352 18:17:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:22:16.352 18:17:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:16.352 18:17:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:16.352 18:17:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:16.352 18:17:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:16.352 18:17:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:16.352 18:17:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:16.352 18:17:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:16.352 18:17:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:16.352 18:17:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:16.352 18:17:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:16.352 18:17:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:22:16.352 18:17:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:22:16.352 18:17:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:22:16.352 18:17:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:16.352 18:17:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:16.352 18:17:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:16.352 18:17:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:16.352 18:17:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:16.352 18:17:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:16.352 18:17:09 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:16.352 18:17:09 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.352 18:17:09 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.352 18:17:09 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.352 18:17:09 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:22:16.352 18:17:09 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:16.352 18:17:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:22:16.352 18:17:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:16.352 18:17:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:16.352 18:17:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:16.352 18:17:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:16.352 18:17:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:16.352 18:17:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:16.352 18:17:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:16.352 18:17:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:16.352 18:17:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:16.352 18:17:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:16.352 18:17:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:22:16.352 18:17:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:16.352 18:17:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:16.352 18:17:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:16.352 18:17:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:16.352 18:17:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:16.352 18:17:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:16.352 18:17:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:16.352 18:17:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:16.352 18:17:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:16.352 18:17:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:16.352 18:17:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:22:16.352 18:17:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:21.610 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:21.610 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:22:21.610 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:21.610 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:21.610 18:17:14 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:21.610 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:21.610 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:21.610 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:22:21.610 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:21.610 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:22:21.610 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:22:21.610 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:22:21.610 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:22:21.610 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:22:21.610 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:22:21.610 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:21.610 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:21.610 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:21.610 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:21.610 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:21.610 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:21.610 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:21.610 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:21.610 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:21.610 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:21.610 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:21.610 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:21.610 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:21.610 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:21.610 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:21.610 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:21.610 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:21.610 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:21.610 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:21.610 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:21.610 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:21.610 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:21.610 18:17:14 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:21.610 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:21.610 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:21.610 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:21.610 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:21.610 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:21.610 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:21.610 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:21.610 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:21.610 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:21.610 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:21.610 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:21.610 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:21.610 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:21.610 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:21.610 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:21.610 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:21.610 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:21.610 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:21.610 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:21.610 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:21.610 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:21.610 Found net devices under 0000:86:00.0: cvl_0_0 00:22:21.611 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:21.611 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:21.611 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:21.611 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:21.611 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:21.611 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:21.611 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:21.611 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:21.611 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:21.611 Found net devices under 0000:86:00.1: cvl_0_1 00:22:21.611 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:22:21.611 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:21.611 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:22:21.611 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:21.611 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:21.611 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:21.611 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:21.611 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:21.611 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:21.611 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:21.611 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:21.611 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:21.611 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:21.611 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:21.611 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:21.611 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:21.611 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:21.611 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:21.611 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:21.611 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:21.611 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:21.611 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:21.611 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:21.611 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:21.611 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:21.611 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:21.611 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:21.611 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms 00:22:21.611 00:22:21.611 --- 10.0.0.2 ping statistics --- 00:22:21.611 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:21.611 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:22:21.611 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:21.611 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:21.611 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:22:21.611 00:22:21.611 --- 10.0.0.1 ping statistics --- 00:22:21.611 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:21.611 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:22:21.611 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:21.611 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:22:21.611 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:21.611 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:21.611 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:21.611 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:21.611 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:21.611 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:21.611 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:21.611 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:22:21.611 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:21.611 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:21.611 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3484683 00:22:21.611 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:21.611 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:21.611 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3484683 00:22:21.611 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 3484683 ']' 00:22:21.611 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:21.611 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:21.611 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:21.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:21.611 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:21.611 18:17:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:21.611 [2024-07-24 18:17:14.645725] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 
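Condensed, the nvmf_tcp_init sequence traced above comes down to the following standalone steps. This is a minimal sketch: the interface names (cvl_0_0, cvl_0_1), the 10.0.0.x addresses, the namespace name and the nvmf_tgt flags are exactly what the log shows; running from the SPDK tree root and backgrounding the target are assumptions.

    # One ice port becomes the target (10.0.0.2) inside its own network
    # namespace; the other stays in the root namespace as the initiator
    # (10.0.0.1), so a single host can talk NVMe/TCP to itself.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # Admit NVMe/TCP (port 4420) on the initiator-side interface, then
    # check connectivity both ways before starting the target.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

    # Launch the SPDK target inside the namespace with the same flags as
    # the log (-i 0: shm id, -e 0xFFFF: tracepoint mask, -m 0xF: 4 cores).
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &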
00:22:21.611 [2024-07-24 18:17:14.645768] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:21.611 EAL: No free 2048 kB hugepages reported on node 1 00:22:21.868 [2024-07-24 18:17:14.703122] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:21.868 [2024-07-24 18:17:14.782666] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:21.868 [2024-07-24 18:17:14.782703] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:21.868 [2024-07-24 18:17:14.782710] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:21.868 [2024-07-24 18:17:14.782716] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:21.868 [2024-07-24 18:17:14.782721] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:21.868 [2024-07-24 18:17:14.782780] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:21.868 [2024-07-24 18:17:14.782876] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:21.868 [2024-07-24 18:17:14.782962] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:21.868 [2024-07-24 18:17:14.782964] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:22.432 18:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:22.432 18:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:22:22.432 18:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:22.432 18:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.432 18:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:22.432 [2024-07-24 18:17:15.469861] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:22.432 18:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.432 18:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:22:22.432 18:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:22.432 18:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:22.432 18:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:22.432 18:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.432 18:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:22.693 Malloc0 00:22:22.693 18:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.693 18:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:22.693 18:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.693 18:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:22.693 18:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:22:22.693 18:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:22:22.693 18:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.693 18:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:22.693 18:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.693 18:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:22.693 18:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.693 18:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:22.693 [2024-07-24 18:17:15.549866] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:22.693 18:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.693 18:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:22.693 18:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.693 18:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:22.693 18:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.693 18:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:22:22.693 18:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.693 18:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:22.693 [ 00:22:22.693 { 00:22:22.693 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:22.693 "subtype": "Discovery", 00:22:22.693 "listen_addresses": [ 00:22:22.693 { 00:22:22.693 "trtype": "TCP", 00:22:22.693 "adrfam": "IPv4", 00:22:22.693 "traddr": "10.0.0.2", 00:22:22.693 "trsvcid": "4420" 00:22:22.693 } 00:22:22.693 ], 00:22:22.693 "allow_any_host": true, 00:22:22.693 "hosts": [] 00:22:22.693 }, 00:22:22.693 { 00:22:22.693 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:22.693 "subtype": "NVMe", 00:22:22.693 "listen_addresses": [ 00:22:22.693 { 00:22:22.693 "trtype": "TCP", 00:22:22.693 "adrfam": "IPv4", 00:22:22.693 "traddr": "10.0.0.2", 00:22:22.693 "trsvcid": "4420" 00:22:22.693 } 00:22:22.693 ], 00:22:22.693 "allow_any_host": true, 00:22:22.693 "hosts": [], 00:22:22.693 "serial_number": "SPDK00000000000001", 00:22:22.693 "model_number": "SPDK bdev Controller", 00:22:22.693 "max_namespaces": 32, 00:22:22.693 "min_cntlid": 1, 00:22:22.693 "max_cntlid": 65519, 00:22:22.693 "namespaces": [ 00:22:22.693 { 00:22:22.693 "nsid": 1, 00:22:22.693 "bdev_name": "Malloc0", 00:22:22.693 "name": "Malloc0", 00:22:22.693 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:22:22.693 "eui64": "ABCDEF0123456789", 00:22:22.693 "uuid": "d9e616b7-28c7-4761-89d6-48253af49446" 00:22:22.693 } 00:22:22.693 ] 00:22:22.693 } 00:22:22.693 ] 00:22:22.693 18:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.693 18:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' 
trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:22:22.693 [2024-07-24 18:17:15.597855] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 00:22:22.693 [2024-07-24 18:17:15.597888] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3484930 ] 00:22:22.693 EAL: No free 2048 kB hugepages reported on node 1 00:22:22.693 [2024-07-24 18:17:15.625822] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:22:22.693 [2024-07-24 18:17:15.625956] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:22.693 [2024-07-24 18:17:15.625961] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:22.694 [2024-07-24 18:17:15.625971] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:22.694 [2024-07-24 18:17:15.625979] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:22.694 [2024-07-24 18:17:15.626314] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:22:22.694 [2024-07-24 18:17:15.626340] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x142cec0 0 00:22:22.694 [2024-07-24 18:17:15.640495] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:22.694 [2024-07-24 18:17:15.640511] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:22.694 [2024-07-24 18:17:15.640515] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:22.694 [2024-07-24 18:17:15.640518] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:22.694 [2024-07-24 18:17:15.640555] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:22.694 [2024-07-24 18:17:15.640560] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:22.694 [2024-07-24 18:17:15.640563] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x142cec0) 00:22:22.694 [2024-07-24 18:17:15.640575] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:22.694 [2024-07-24 18:17:15.640590] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14afe40, cid 0, qid 0 00:22:22.694 [2024-07-24 18:17:15.648500] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:22.694 [2024-07-24 18:17:15.648509] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:22.694 [2024-07-24 18:17:15.648512] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:22.694 [2024-07-24 18:17:15.648516] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14afe40) on tqpair=0x142cec0 00:22:22.694 [2024-07-24 18:17:15.648527] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:22.694 [2024-07-24 18:17:15.648533] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:22:22.694 [2024-07-24 18:17:15.648537] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] 
setting state to read vs wait for vs (no timeout) 00:22:22.694 [2024-07-24 18:17:15.648550] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:22.694 [2024-07-24 18:17:15.648553] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:22.694 [2024-07-24 18:17:15.648556] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x142cec0) 00:22:22.694 [2024-07-24 18:17:15.648563] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.694 [2024-07-24 18:17:15.648575] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14afe40, cid 0, qid 0 00:22:22.694 [2024-07-24 18:17:15.648741] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:22.694 [2024-07-24 18:17:15.648747] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:22.694 [2024-07-24 18:17:15.648750] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:22.694 [2024-07-24 18:17:15.648753] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14afe40) on tqpair=0x142cec0 00:22:22.694 [2024-07-24 18:17:15.648760] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:22:22.694 [2024-07-24 18:17:15.648770] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:22:22.694 [2024-07-24 18:17:15.648776] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:22.694 [2024-07-24 18:17:15.648780] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:22.694 [2024-07-24 18:17:15.648783] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x142cec0) 00:22:22.694 [2024-07-24 18:17:15.648788] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.694 [2024-07-24 18:17:15.648798] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14afe40, cid 0, qid 0 00:22:22.694 [2024-07-24 18:17:15.648871] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:22.694 [2024-07-24 18:17:15.648876] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:22.694 [2024-07-24 18:17:15.648879] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:22.694 [2024-07-24 18:17:15.648883] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14afe40) on tqpair=0x142cec0 00:22:22.694 [2024-07-24 18:17:15.648887] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:22:22.694 [2024-07-24 18:17:15.648894] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:22:22.694 [2024-07-24 18:17:15.648900] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:22.694 [2024-07-24 18:17:15.648903] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:22.694 [2024-07-24 18:17:15.648906] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x142cec0) 00:22:22.694 [2024-07-24 18:17:15.648911] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.694 [2024-07-24 18:17:15.648921] 
nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14afe40, cid 0, qid 0 00:22:22.694 [2024-07-24 18:17:15.648986] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:22.694 [2024-07-24 18:17:15.648991] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:22.694 [2024-07-24 18:17:15.648994] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:22.694 [2024-07-24 18:17:15.648998] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14afe40) on tqpair=0x142cec0 00:22:22.694 [2024-07-24 18:17:15.649002] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:22.694 [2024-07-24 18:17:15.649009] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:22.694 [2024-07-24 18:17:15.649013] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:22.694 [2024-07-24 18:17:15.649016] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x142cec0) 00:22:22.694 [2024-07-24 18:17:15.649021] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.694 [2024-07-24 18:17:15.649030] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14afe40, cid 0, qid 0 00:22:22.694 [2024-07-24 18:17:15.649104] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:22.694 [2024-07-24 18:17:15.649109] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:22.694 [2024-07-24 18:17:15.649112] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:22.694 [2024-07-24 18:17:15.649115] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14afe40) on tqpair=0x142cec0 00:22:22.694 [2024-07-24 18:17:15.649119] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:22:22.694 [2024-07-24 18:17:15.649123] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:22:22.694 [2024-07-24 18:17:15.649129] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:22.694 [2024-07-24 18:17:15.649236] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:22:22.694 [2024-07-24 18:17:15.649240] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:22.694 [2024-07-24 18:17:15.649248] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:22.694 [2024-07-24 18:17:15.649251] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:22.694 [2024-07-24 18:17:15.649254] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x142cec0) 00:22:22.694 [2024-07-24 18:17:15.649259] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.694 [2024-07-24 18:17:15.649269] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14afe40, cid 0, qid 0 00:22:22.694 [2024-07-24 18:17:15.649344] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type 
= 5 00:22:22.694 [2024-07-24 18:17:15.649349] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:22.694 [2024-07-24 18:17:15.649352] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:22.694 [2024-07-24 18:17:15.649355] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14afe40) on tqpair=0x142cec0 00:22:22.694 [2024-07-24 18:17:15.649359] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:22.694 [2024-07-24 18:17:15.649367] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:22.694 [2024-07-24 18:17:15.649371] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:22.694 [2024-07-24 18:17:15.649374] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x142cec0) 00:22:22.694 [2024-07-24 18:17:15.649379] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.694 [2024-07-24 18:17:15.649388] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14afe40, cid 0, qid 0 00:22:22.694 [2024-07-24 18:17:15.649448] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:22.694 [2024-07-24 18:17:15.649454] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:22.694 [2024-07-24 18:17:15.649457] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:22.694 [2024-07-24 18:17:15.649460] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14afe40) on tqpair=0x142cec0 00:22:22.694 [2024-07-24 18:17:15.649463] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:22.694 [2024-07-24 18:17:15.649467] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:22:22.694 [2024-07-24 18:17:15.649474] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:22:22.694 [2024-07-24 18:17:15.649481] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:22:22.694 [2024-07-24 18:17:15.649489] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:22.694 [2024-07-24 18:17:15.649499] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x142cec0) 00:22:22.694 [2024-07-24 18:17:15.649504] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.694 [2024-07-24 18:17:15.649514] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14afe40, cid 0, qid 0 00:22:22.694 [2024-07-24 18:17:15.649606] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:22.694 [2024-07-24 18:17:15.649612] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:22.694 [2024-07-24 18:17:15.649617] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:22.694 [2024-07-24 18:17:15.649620] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x142cec0): datao=0, datal=4096, cccid=0 00:22:22.695 [2024-07-24 18:17:15.649624] 
nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14afe40) on tqpair(0x142cec0): expected_datao=0, payload_size=4096 00:22:22.695 [2024-07-24 18:17:15.649628] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:22.695 [2024-07-24 18:17:15.649634] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:22.695 [2024-07-24 18:17:15.649638] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:22.695 [2024-07-24 18:17:15.649668] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:22.695 [2024-07-24 18:17:15.649673] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:22.695 [2024-07-24 18:17:15.649676] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:22.695 [2024-07-24 18:17:15.649679] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14afe40) on tqpair=0x142cec0 00:22:22.695 [2024-07-24 18:17:15.649685] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:22:22.695 [2024-07-24 18:17:15.649689] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:22:22.695 [2024-07-24 18:17:15.649693] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:22:22.695 [2024-07-24 18:17:15.649697] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:22:22.695 [2024-07-24 18:17:15.649701] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:22:22.695 [2024-07-24 18:17:15.649705] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:22:22.695 [2024-07-24 18:17:15.649712] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:22:22.695 [2024-07-24 18:17:15.649721] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:22.695 [2024-07-24 18:17:15.649724] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:22.695 [2024-07-24 18:17:15.649727] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x142cec0) 00:22:22.695 [2024-07-24 18:17:15.649732] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:22.695 [2024-07-24 18:17:15.649742] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14afe40, cid 0, qid 0 00:22:22.695 [2024-07-24 18:17:15.649813] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:22.695 [2024-07-24 18:17:15.649818] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:22.695 [2024-07-24 18:17:15.649821] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:22.695 [2024-07-24 18:17:15.649824] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14afe40) on tqpair=0x142cec0 00:22:22.695 [2024-07-24 18:17:15.649831] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:22.695 [2024-07-24 18:17:15.649834] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:22.695 [2024-07-24 18:17:15.649837] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on 
tqpair(0x142cec0) 00:22:22.695 [2024-07-24 18:17:15.649842] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:22.695 [2024-07-24 18:17:15.649847] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:22.695 [2024-07-24 18:17:15.649850] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:22.695 [2024-07-24 18:17:15.649853] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x142cec0) 00:22:22.695 [2024-07-24 18:17:15.649857] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:22.695 [2024-07-24 18:17:15.649864] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:22.695 [2024-07-24 18:17:15.649867] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:22.695 [2024-07-24 18:17:15.649870] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x142cec0) 00:22:22.695 [2024-07-24 18:17:15.649875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:22.695 [2024-07-24 18:17:15.649880] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:22.695 [2024-07-24 18:17:15.649883] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:22.695 [2024-07-24 18:17:15.649886] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x142cec0) 00:22:22.695 [2024-07-24 18:17:15.649890] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:22.695 [2024-07-24 18:17:15.649894] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:22:22.695 [2024-07-24 18:17:15.649904] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:22.695 [2024-07-24 18:17:15.649909] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:22.695 [2024-07-24 18:17:15.649912] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x142cec0) 00:22:22.695 [2024-07-24 18:17:15.649918] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.695 [2024-07-24 18:17:15.649928] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14afe40, cid 0, qid 0 00:22:22.695 [2024-07-24 18:17:15.649932] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14affc0, cid 1, qid 0 00:22:22.695 [2024-07-24 18:17:15.649936] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14b0140, cid 2, qid 0 00:22:22.695 [2024-07-24 18:17:15.649940] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14b02c0, cid 3, qid 0 00:22:22.695 [2024-07-24 18:17:15.649944] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14b0440, cid 4, qid 0 00:22:22.695 [2024-07-24 18:17:15.650050] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:22.695 [2024-07-24 18:17:15.650055] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:22.695 [2024-07-24 18:17:15.650058] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:22.695 [2024-07-24 18:17:15.650061] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14b0440) on tqpair=0x142cec0 00:22:22.695 [2024-07-24 18:17:15.650066] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:22:22.695 [2024-07-24 18:17:15.650070] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:22:22.695 [2024-07-24 18:17:15.650079] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:22.695 [2024-07-24 18:17:15.650082] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x142cec0) 00:22:22.695 [2024-07-24 18:17:15.650088] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.695 [2024-07-24 18:17:15.650097] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14b0440, cid 4, qid 0 00:22:22.695 [2024-07-24 18:17:15.650171] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:22.695 [2024-07-24 18:17:15.650177] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:22.695 [2024-07-24 18:17:15.650180] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:22.695 [2024-07-24 18:17:15.650183] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x142cec0): datao=0, datal=4096, cccid=4 00:22:22.695 [2024-07-24 18:17:15.650187] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14b0440) on tqpair(0x142cec0): expected_datao=0, payload_size=4096 00:22:22.695 [2024-07-24 18:17:15.650192] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:22.695 [2024-07-24 18:17:15.650197] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:22.695 [2024-07-24 18:17:15.650201] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:22.695 [2024-07-24 18:17:15.650223] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:22.695 [2024-07-24 18:17:15.650228] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:22.695 [2024-07-24 18:17:15.650231] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:22.695 [2024-07-24 18:17:15.650234] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14b0440) on tqpair=0x142cec0 00:22:22.695 [2024-07-24 18:17:15.650246] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:22:22.695 [2024-07-24 18:17:15.650268] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:22.695 [2024-07-24 18:17:15.650272] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x142cec0) 00:22:22.695 [2024-07-24 18:17:15.650277] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.695 [2024-07-24 18:17:15.650283] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:22.695 [2024-07-24 18:17:15.650286] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:22.695 [2024-07-24 18:17:15.650289] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x142cec0) 00:22:22.695 [2024-07-24 
18:17:15.650294] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:22.695 [2024-07-24 18:17:15.650307] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14b0440, cid 4, qid 0 00:22:22.695 [2024-07-24 18:17:15.650311] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14b05c0, cid 5, qid 0 00:22:22.695 [2024-07-24 18:17:15.650405] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:22.695 [2024-07-24 18:17:15.650410] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:22.695 [2024-07-24 18:17:15.650413] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:22.695 [2024-07-24 18:17:15.650416] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x142cec0): datao=0, datal=1024, cccid=4 00:22:22.695 [2024-07-24 18:17:15.650420] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14b0440) on tqpair(0x142cec0): expected_datao=0, payload_size=1024 00:22:22.695 [2024-07-24 18:17:15.650423] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:22.695 [2024-07-24 18:17:15.650428] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:22.695 [2024-07-24 18:17:15.650431] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:22.695 [2024-07-24 18:17:15.650436] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:22.695 [2024-07-24 18:17:15.650441] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:22.695 [2024-07-24 18:17:15.650443] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:22.695 [2024-07-24 18:17:15.650447] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14b05c0) on tqpair=0x142cec0 00:22:22.695 [2024-07-24 18:17:15.690630] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:22.695 [2024-07-24 18:17:15.690642] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:22.695 [2024-07-24 18:17:15.690645] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:22.695 [2024-07-24 18:17:15.690648] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14b0440) on tqpair=0x142cec0 00:22:22.695 [2024-07-24 18:17:15.690662] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:22.695 [2024-07-24 18:17:15.690666] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x142cec0) 00:22:22.696 [2024-07-24 18:17:15.690673] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.696 [2024-07-24 18:17:15.690692] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14b0440, cid 4, qid 0 00:22:22.696 [2024-07-24 18:17:15.690774] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:22.696 [2024-07-24 18:17:15.690779] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:22.696 [2024-07-24 18:17:15.690782] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:22.696 [2024-07-24 18:17:15.690786] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x142cec0): datao=0, datal=3072, cccid=4 00:22:22.696 [2024-07-24 18:17:15.690789] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14b0440) on tqpair(0x142cec0): expected_datao=0, payload_size=3072 00:22:22.696 
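The staged GET LOG PAGE commands above and just below implement the standard discovery-log read. In cdw10 the upper 16 bits carry NUMDL, the 0-based dword count, and the low byte carries the log page ID (0x70, the discovery log), so the three values seen here decode to a 1024-byte read of the header (generation counter plus record count), a 3072-byte follow-up covering the two 1024-byte entries, and a final 8-byte re-read of the generation counter to confirm the log did not change mid-read. A quick check of that decoding, assuming the standard NVMe cdw10 layout:

    # Decode the three GET LOG PAGE cdw10 values recorded in this exchange.
    for cdw10 in 0x00ff0070 0x02ff0070 0x00010070; do
        numdl=$(( (cdw10 >> 16) & 0xffff ))  # 0-based dword count
        lid=$((    cdw10        & 0xff   ))  # log page identifier
        printf 'cdw10=%s lid=0x%02x bytes=%u\n' "$cdw10" "$lid" $(( (numdl + 1) * 4 ))
    done
    # -> 1024, 3072 and 8 bytes of log page 0x70, matching the datal values
    #    in the surrounding c2h_data records.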
[2024-07-24 18:17:15.690793] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:22.696 [2024-07-24 18:17:15.690799] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:22.696 [2024-07-24 18:17:15.690802] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:22.696 [2024-07-24 18:17:15.690815] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:22.696 [2024-07-24 18:17:15.690820] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:22.696 [2024-07-24 18:17:15.690823] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:22.696 [2024-07-24 18:17:15.690826] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14b0440) on tqpair=0x142cec0 00:22:22.696 [2024-07-24 18:17:15.690833] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:22.696 [2024-07-24 18:17:15.690836] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x142cec0) 00:22:22.696 [2024-07-24 18:17:15.690842] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.696 [2024-07-24 18:17:15.690854] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14b0440, cid 4, qid 0 00:22:22.696 [2024-07-24 18:17:15.690933] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:22.696 [2024-07-24 18:17:15.690938] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:22.696 [2024-07-24 18:17:15.690941] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:22.696 [2024-07-24 18:17:15.690944] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x142cec0): datao=0, datal=8, cccid=4 00:22:22.696 [2024-07-24 18:17:15.690948] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14b0440) on tqpair(0x142cec0): expected_datao=0, payload_size=8 00:22:22.696 [2024-07-24 18:17:15.690951] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:22.696 [2024-07-24 18:17:15.690957] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:22.696 [2024-07-24 18:17:15.690960] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:22.696 [2024-07-24 18:17:15.731632] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:22.696 [2024-07-24 18:17:15.731643] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:22.696 [2024-07-24 18:17:15.731646] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:22.696 [2024-07-24 18:17:15.731650] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14b0440) on tqpair=0x142cec0 00:22:22.696 ===================================================== 00:22:22.696 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:22:22.696 ===================================================== 00:22:22.696 Controller Capabilities/Features 00:22:22.696 ================================ 00:22:22.696 Vendor ID: 0000 00:22:22.696 Subsystem Vendor ID: 0000 00:22:22.696 Serial Number: .................... 00:22:22.696 Model Number: ........................................ 
00:22:22.696 Firmware Version: 24.09 00:22:22.696 Recommended Arb Burst: 0 00:22:22.696 IEEE OUI Identifier: 00 00 00 00:22:22.696 Multi-path I/O 00:22:22.696 May have multiple subsystem ports: No 00:22:22.696 May have multiple controllers: No 00:22:22.696 Associated with SR-IOV VF: No 00:22:22.696 Max Data Transfer Size: 131072 00:22:22.696 Max Number of Namespaces: 0 00:22:22.696 Max Number of I/O Queues: 1024 00:22:22.696 NVMe Specification Version (VS): 1.3 00:22:22.696 NVMe Specification Version (Identify): 1.3 00:22:22.696 Maximum Queue Entries: 128 00:22:22.696 Contiguous Queues Required: Yes 00:22:22.696 Arbitration Mechanisms Supported 00:22:22.696 Weighted Round Robin: Not Supported 00:22:22.696 Vendor Specific: Not Supported 00:22:22.696 Reset Timeout: 15000 ms 00:22:22.696 Doorbell Stride: 4 bytes 00:22:22.696 NVM Subsystem Reset: Not Supported 00:22:22.696 Command Sets Supported 00:22:22.696 NVM Command Set: Supported 00:22:22.696 Boot Partition: Not Supported 00:22:22.696 Memory Page Size Minimum: 4096 bytes 00:22:22.696 Memory Page Size Maximum: 4096 bytes 00:22:22.696 Persistent Memory Region: Not Supported 00:22:22.696 Optional Asynchronous Events Supported 00:22:22.696 Namespace Attribute Notices: Not Supported 00:22:22.696 Firmware Activation Notices: Not Supported 00:22:22.696 ANA Change Notices: Not Supported 00:22:22.696 PLE Aggregate Log Change Notices: Not Supported 00:22:22.696 LBA Status Info Alert Notices: Not Supported 00:22:22.696 EGE Aggregate Log Change Notices: Not Supported 00:22:22.696 Normal NVM Subsystem Shutdown event: Not Supported 00:22:22.696 Zone Descriptor Change Notices: Not Supported 00:22:22.696 Discovery Log Change Notices: Supported 00:22:22.696 Controller Attributes 00:22:22.696 128-bit Host Identifier: Not Supported 00:22:22.696 Non-Operational Permissive Mode: Not Supported 00:22:22.696 NVM Sets: Not Supported 00:22:22.696 Read Recovery Levels: Not Supported 00:22:22.696 Endurance Groups: Not Supported 00:22:22.696 Predictable Latency Mode: Not Supported 00:22:22.696 Traffic Based Keep ALive: Not Supported 00:22:22.696 Namespace Granularity: Not Supported 00:22:22.696 SQ Associations: Not Supported 00:22:22.696 UUID List: Not Supported 00:22:22.696 Multi-Domain Subsystem: Not Supported 00:22:22.696 Fixed Capacity Management: Not Supported 00:22:22.696 Variable Capacity Management: Not Supported 00:22:22.696 Delete Endurance Group: Not Supported 00:22:22.696 Delete NVM Set: Not Supported 00:22:22.696 Extended LBA Formats Supported: Not Supported 00:22:22.696 Flexible Data Placement Supported: Not Supported 00:22:22.696 00:22:22.696 Controller Memory Buffer Support 00:22:22.696 ================================ 00:22:22.696 Supported: No 00:22:22.696 00:22:22.696 Persistent Memory Region Support 00:22:22.696 ================================ 00:22:22.696 Supported: No 00:22:22.696 00:22:22.696 Admin Command Set Attributes 00:22:22.696 ============================ 00:22:22.696 Security Send/Receive: Not Supported 00:22:22.696 Format NVM: Not Supported 00:22:22.696 Firmware Activate/Download: Not Supported 00:22:22.696 Namespace Management: Not Supported 00:22:22.696 Device Self-Test: Not Supported 00:22:22.696 Directives: Not Supported 00:22:22.696 NVMe-MI: Not Supported 00:22:22.696 Virtualization Management: Not Supported 00:22:22.696 Doorbell Buffer Config: Not Supported 00:22:22.696 Get LBA Status Capability: Not Supported 00:22:22.696 Command & Feature Lockdown Capability: Not Supported 00:22:22.696 Abort Command Limit: 1 00:22:22.696 Async 
Event Request Limit: 4 00:22:22.696 Number of Firmware Slots: N/A 00:22:22.696 Firmware Slot 1 Read-Only: N/A 00:22:22.696 Firmware Activation Without Reset: N/A 00:22:22.696 Multiple Update Detection Support: N/A 00:22:22.696 Firmware Update Granularity: No Information Provided 00:22:22.696 Per-Namespace SMART Log: No 00:22:22.696 Asymmetric Namespace Access Log Page: Not Supported 00:22:22.696 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:22:22.696 Command Effects Log Page: Not Supported 00:22:22.696 Get Log Page Extended Data: Supported 00:22:22.696 Telemetry Log Pages: Not Supported 00:22:22.696 Persistent Event Log Pages: Not Supported 00:22:22.696 Supported Log Pages Log Page: May Support 00:22:22.696 Commands Supported & Effects Log Page: Not Supported 00:22:22.696 Feature Identifiers & Effects Log Page:May Support 00:22:22.696 NVMe-MI Commands & Effects Log Page: May Support 00:22:22.696 Data Area 4 for Telemetry Log: Not Supported 00:22:22.696 Error Log Page Entries Supported: 128 00:22:22.696 Keep Alive: Not Supported 00:22:22.696 00:22:22.696 NVM Command Set Attributes 00:22:22.696 ========================== 00:22:22.696 Submission Queue Entry Size 00:22:22.696 Max: 1 00:22:22.696 Min: 1 00:22:22.696 Completion Queue Entry Size 00:22:22.696 Max: 1 00:22:22.696 Min: 1 00:22:22.696 Number of Namespaces: 0 00:22:22.696 Compare Command: Not Supported 00:22:22.696 Write Uncorrectable Command: Not Supported 00:22:22.696 Dataset Management Command: Not Supported 00:22:22.696 Write Zeroes Command: Not Supported 00:22:22.696 Set Features Save Field: Not Supported 00:22:22.696 Reservations: Not Supported 00:22:22.696 Timestamp: Not Supported 00:22:22.696 Copy: Not Supported 00:22:22.696 Volatile Write Cache: Not Present 00:22:22.696 Atomic Write Unit (Normal): 1 00:22:22.696 Atomic Write Unit (PFail): 1 00:22:22.696 Atomic Compare & Write Unit: 1 00:22:22.696 Fused Compare & Write: Supported 00:22:22.696 Scatter-Gather List 00:22:22.696 SGL Command Set: Supported 00:22:22.696 SGL Keyed: Supported 00:22:22.696 SGL Bit Bucket Descriptor: Not Supported 00:22:22.696 SGL Metadata Pointer: Not Supported 00:22:22.697 Oversized SGL: Not Supported 00:22:22.697 SGL Metadata Address: Not Supported 00:22:22.697 SGL Offset: Supported 00:22:22.697 Transport SGL Data Block: Not Supported 00:22:22.697 Replay Protected Memory Block: Not Supported 00:22:22.697 00:22:22.697 Firmware Slot Information 00:22:22.697 ========================= 00:22:22.697 Active slot: 0 00:22:22.697 00:22:22.697 00:22:22.697 Error Log 00:22:22.697 ========= 00:22:22.697 00:22:22.697 Active Namespaces 00:22:22.697 ================= 00:22:22.697 Discovery Log Page 00:22:22.697 ================== 00:22:22.697 Generation Counter: 2 00:22:22.697 Number of Records: 2 00:22:22.697 Record Format: 0 00:22:22.697 00:22:22.697 Discovery Log Entry 0 00:22:22.697 ---------------------- 00:22:22.697 Transport Type: 3 (TCP) 00:22:22.697 Address Family: 1 (IPv4) 00:22:22.697 Subsystem Type: 3 (Current Discovery Subsystem) 00:22:22.697 Entry Flags: 00:22:22.697 Duplicate Returned Information: 1 00:22:22.697 Explicit Persistent Connection Support for Discovery: 1 00:22:22.697 Transport Requirements: 00:22:22.697 Secure Channel: Not Required 00:22:22.697 Port ID: 0 (0x0000) 00:22:22.697 Controller ID: 65535 (0xffff) 00:22:22.697 Admin Max SQ Size: 128 00:22:22.697 Transport Service Identifier: 4420 00:22:22.697 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:22:22.697 Transport Address: 10.0.0.2 00:22:22.697 
Discovery Log Entry 1 00:22:22.697 ---------------------- 00:22:22.697 Transport Type: 3 (TCP) 00:22:22.697 Address Family: 1 (IPv4) 00:22:22.697 Subsystem Type: 2 (NVM Subsystem) 00:22:22.697 Entry Flags: 00:22:22.697 Duplicate Returned Information: 0 00:22:22.697 Explicit Persistent Connection Support for Discovery: 0 00:22:22.697 Transport Requirements: 00:22:22.697 Secure Channel: Not Required 00:22:22.697 Port ID: 0 (0x0000) 00:22:22.697 Controller ID: 65535 (0xffff) 00:22:22.697 Admin Max SQ Size: 128 00:22:22.697 Transport Service Identifier: 4420 00:22:22.697 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:22:22.697 Transport Address: 10.0.0.2 [2024-07-24 18:17:15.731724] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:22:22.697 [2024-07-24 18:17:15.731734] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14afe40) on tqpair=0x142cec0 00:22:22.697 [2024-07-24 18:17:15.731740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.697 [2024-07-24 18:17:15.731745] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14affc0) on tqpair=0x142cec0 00:22:22.697 [2024-07-24 18:17:15.731749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.697 [2024-07-24 18:17:15.731753] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14b0140) on tqpair=0x142cec0 00:22:22.697 [2024-07-24 18:17:15.731757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.697 [2024-07-24 18:17:15.731763] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14b02c0) on tqpair=0x142cec0 00:22:22.697 [2024-07-24 18:17:15.731767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.697 [2024-07-24 18:17:15.731777] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:22.697 [2024-07-24 18:17:15.731780] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:22.697 [2024-07-24 18:17:15.731783] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x142cec0) 00:22:22.697 [2024-07-24 18:17:15.731790] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.697 [2024-07-24 18:17:15.731803] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14b02c0, cid 3, qid 0 00:22:22.697 [2024-07-24 18:17:15.735500] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:22.697 [2024-07-24 18:17:15.735508] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:22.697 [2024-07-24 18:17:15.735511] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:22.697 [2024-07-24 18:17:15.735515] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14b02c0) on tqpair=0x142cec0 00:22:22.697 [2024-07-24 18:17:15.735521] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:22.697 [2024-07-24 18:17:15.735524] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:22.697 [2024-07-24 18:17:15.735527] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x142cec0) 00:22:22.697 [2024-07-24 
00:22:22.697 [2024-07-24 18:17:15.731734] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14afe40) on tqpair=0x142cec0
00:22:22.697 [2024-07-24 18:17:15.731740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:22.697 [2024-07-24 18:17:15.731745] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14affc0) on tqpair=0x142cec0
00:22:22.697 [2024-07-24 18:17:15.731749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:22.697 [2024-07-24 18:17:15.731753] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14b0140) on tqpair=0x142cec0
00:22:22.697 [2024-07-24 18:17:15.731757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:22.697 [2024-07-24 18:17:15.731763] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14b02c0) on tqpair=0x142cec0
00:22:22.697 [2024-07-24 18:17:15.731767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:22.697 [2024-07-24 18:17:15.731777] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:22.697 [2024-07-24 18:17:15.731780] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:22.697 [2024-07-24 18:17:15.731783] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x142cec0)
00:22:22.697 [2024-07-24 18:17:15.731790] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:22.697 [2024-07-24 18:17:15.731803] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14b02c0, cid 3, qid 0
00:22:22.697 [2024-07-24 18:17:15.735500] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:22.697 [2024-07-24 18:17:15.735508] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:22.697 [2024-07-24 18:17:15.735511] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:22.697 [2024-07-24 18:17:15.735515] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14b02c0) on tqpair=0x142cec0
00:22:22.697 [2024-07-24 18:17:15.735521] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:22.697 [2024-07-24 18:17:15.735524] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:22.697 [2024-07-24 18:17:15.735527] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x142cec0)
00:22:22.697 [2024-07-24 18:17:15.735533] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:22.697 [2024-07-24 18:17:15.735547] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14b02c0, cid 3, qid 0
00:22:22.697 [2024-07-24 18:17:15.735704] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:22.697 [2024-07-24 18:17:15.735710] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:22.697 [2024-07-24 18:17:15.735713] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:22.697 [2024-07-24 18:17:15.735716] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14b02c0) on tqpair=0x142cec0
00:22:22.697 [2024-07-24 18:17:15.735721] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us
00:22:22.697 [2024-07-24 18:17:15.735725] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms
00:22:22.697 [2024-07-24 18:17:15.735733] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:22.697 [2024-07-24 18:17:15.735736] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:22.697 [2024-07-24 18:17:15.735739] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x142cec0)
00:22:22.697 [2024-07-24 18:17:15.735745] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:22.697 [2024-07-24 18:17:15.735754] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14b02c0, cid 3, qid 0
00:22:22.697 [2024-07-24 18:17:15.735854] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:22.697 [2024-07-24 18:17:15.735860] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:22.697 [2024-07-24 18:17:15.735863] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:22.697 [2024-07-24 18:17:15.735866] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14b02c0) on tqpair=0x142cec0
00:22:22.697 [2024-07-24 18:17:15.735874] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:22.697 [2024-07-24 18:17:15.735878] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:22.697 [2024-07-24 18:17:15.735881] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x142cec0)
00:22:22.697 [2024-07-24 18:17:15.735887] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:22.697 [2024-07-24 18:17:15.735898] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14b02c0, cid 3, qid 0
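A note on the `pdu type = 5` / `pdu type = 7` / `pdu type = 1` values that dominate these traces: they are NVMe/TCP PDU opcodes from the PDU common header. The codes, as I recall them from the NVMe/TCP transport specification (the enum name is illustrative, not SPDK's):

    /* NVMe/TCP PDU types (values recalled from the NVMe/TCP transport spec). */
    enum nvme_tcp_pdu_type {
        PDU_ICREQ        = 0x00, /* host -> controller connection setup      */
        PDU_ICRESP       = 0x01, /* the "pdu type = 1" seen during connect   */
        PDU_H2C_TERM_REQ = 0x02,
        PDU_C2H_TERM_REQ = 0x03,
        PDU_CAPSULE_CMD  = 0x04, /* command capsule, host -> controller      */
        PDU_CAPSULE_RESP = 0x05, /* "pdu type = 5": the completions above    */
        PDU_H2C_DATA     = 0x06,
        PDU_C2H_DATA     = 0x07, /* "pdu type = 7": identify payloads later  */
        PDU_R2T          = 0x09,
    };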
00:22:22.697 [2024-07-24 18:17:15.736006] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:22.697 [2024-07-24 18:17:15.736012] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:22.697 [2024-07-24 18:17:15.736015] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:22.697 [2024-07-24 18:17:15.736018] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14b02c0) on tqpair=0x142cec0
00:22:22.697 [2024-07-24 18:17:15.736025] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:22.697 [2024-07-24 18:17:15.736029] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:22.697 [2024-07-24 18:17:15.736032] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x142cec0)
00:22:22.697 [2024-07-24 18:17:15.736037] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:22.697 [2024-07-24 18:17:15.736047] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14b02c0, cid 3, qid 0
00:22:22.697 [2024-07-24 18:17:15.736111] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:22.697 [2024-07-24 18:17:15.736117] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:22.697 [2024-07-24 18:17:15.736120] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:22.697 [2024-07-24 18:17:15.736123] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14b02c0) on tqpair=0x142cec0
00:22:22.697 [2024-07-24 18:17:15.736131] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:22.697 [2024-07-24 18:17:15.736134] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:22.697 [2024-07-24 18:17:15.736137] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x142cec0)
00:22:22.697 [2024-07-24 18:17:15.736142] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:22.697 [2024-07-24 18:17:15.736151] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14b02c0, cid 3, qid 0
00:22:22.697 [2024-07-24 18:17:15.736257] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:22.697 [2024-07-24 18:17:15.736262] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:22.697 [2024-07-24 18:17:15.736265] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:22.697 [2024-07-24 18:17:15.736268] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14b02c0) on tqpair=0x142cec0
00:22:22.697 [2024-07-24 18:17:15.736276] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:22.697 [2024-07-24 18:17:15.736279] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:22.697 [2024-07-24 18:17:15.736282] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x142cec0)
00:22:22.697 [2024-07-24 18:17:15.736288] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:22.697 [2024-07-24 18:17:15.736296] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14b02c0, cid 3, qid 0
00:22:22.697 [2024-07-24 18:17:15.736408] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:22.697 [2024-07-24 18:17:15.736414] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:22.697 [2024-07-24 18:17:15.736416] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:22.698 [2024-07-24 18:17:15.736419] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14b02c0) on tqpair=0x142cec0
00:22:22.698 [2024-07-24 18:17:15.736427] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:22.698 [2024-07-24 18:17:15.736431] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:22.698 [2024-07-24 18:17:15.736434] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x142cec0)
00:22:22.698 [2024-07-24 18:17:15.736439] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:22.698 [2024-07-24 18:17:15.736448] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14b02c0, cid 3, qid 0
00:22:22.698 [2024-07-24 18:17:15.736561] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:22.698 [2024-07-24 18:17:15.736567] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:22.698 [2024-07-24 18:17:15.736570] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:22.698 [2024-07-24 18:17:15.736573] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14b02c0) on tqpair=0x142cec0
00:22:22.698 [2024-07-24 18:17:15.736581] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:22.698 [2024-07-24 18:17:15.736584] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:22.698 [2024-07-24 18:17:15.736587] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x142cec0)
00:22:22.698 [2024-07-24 18:17:15.736593] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:22.698 [2024-07-24 18:17:15.736602] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14b02c0, cid 3, qid 0
00:22:22.698 [2024-07-24 18:17:15.736665] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:22.698 [2024-07-24 18:17:15.736670] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:22.698 [2024-07-24 18:17:15.736674] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:22.698 [2024-07-24 18:17:15.736677] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14b02c0) on tqpair=0x142cec0
00:22:22.698 [2024-07-24 18:17:15.736684] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:22.698 [2024-07-24 18:17:15.736688] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:22.698 [2024-07-24 18:17:15.736691] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x142cec0)
00:22:22.698 [2024-07-24 18:17:15.736696] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:22.698 [2024-07-24 18:17:15.736705] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14b02c0, cid 3, qid 0
00:22:22.698 [2024-07-24 18:17:15.736840] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:22.698 [2024-07-24 18:17:15.736845] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:22.698 [2024-07-24 18:17:15.736848] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:22.698 [2024-07-24 18:17:15.736851] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14b02c0) on tqpair=0x142cec0
00:22:22.698 [2024-07-24 18:17:15.736858] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:22.698 [2024-07-24 18:17:15.736862] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:22.698 [2024-07-24 18:17:15.736865] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x142cec0)
00:22:22.698 [2024-07-24 18:17:15.736870] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:22.698 [2024-07-24 18:17:15.736879] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14b02c0, cid 3, qid 0
00:22:22.698 [2024-07-24 18:17:15.736963] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:22.698 [2024-07-24 18:17:15.736968] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:22.698 [2024-07-24 18:17:15.736971] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:22.698 [2024-07-24 18:17:15.736974] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14b02c0) on tqpair=0x142cec0
00:22:22.698 [2024-07-24 18:17:15.736982] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:22.698 [2024-07-24 18:17:15.736985] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:22.698 [2024-07-24 18:17:15.736988] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x142cec0)
00:22:22.698 [2024-07-24 18:17:15.736994] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:22.698 [2024-07-24 18:17:15.737003] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14b02c0, cid 3, qid 0
00:22:22.698 [2024-07-24 18:17:15.737115] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:22.698 [2024-07-24 18:17:15.737120] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:22.698 [2024-07-24 18:17:15.737123] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:22.698 [2024-07-24 18:17:15.737126] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14b02c0) on tqpair=0x142cec0
00:22:22.698 [2024-07-24 18:17:15.737134] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:22.698 [2024-07-24 18:17:15.737137] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:22.698 [2024-07-24 18:17:15.737140] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x142cec0)
00:22:22.698 [2024-07-24 18:17:15.737146] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:22.698 [2024-07-24 18:17:15.737155] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14b02c0, cid 3, qid 0
00:22:22.698 [2024-07-24 18:17:15.737219] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:22.698 [2024-07-24 18:17:15.737225] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:22.698 [2024-07-24 18:17:15.737228] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:22.698 [2024-07-24 18:17:15.737231] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14b02c0) on tqpair=0x142cec0
00:22:22.698 [2024-07-24 18:17:15.737238] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:22.698 [2024-07-24 18:17:15.737242] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:22.698 [2024-07-24 18:17:15.737245] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x142cec0)
00:22:22.698 [2024-07-24 18:17:15.737250] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:22.698 [2024-07-24 18:17:15.737259] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14b02c0, cid 3, qid 0
00:22:22.698 [2024-07-24 18:17:15.737367] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:22.698 [2024-07-24 18:17:15.737372] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:22.698 [2024-07-24 18:17:15.737375] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:22.698 [2024-07-24 18:17:15.737378] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14b02c0) on tqpair=0x142cec0
00:22:22.698 [2024-07-24 18:17:15.737386] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:22.698 [2024-07-24 18:17:15.737389] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:22.698 [2024-07-24 18:17:15.737392] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x142cec0)
00:22:22.698 [2024-07-24 18:17:15.737397] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:22.698 [2024-07-24 18:17:15.737406] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14b02c0, cid 3, qid 0
00:22:22.698 [2024-07-24 18:17:15.737518] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:22.698 [2024-07-24 18:17:15.737524] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:22.698 [2024-07-24 18:17:15.737526] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:22.698 [2024-07-24 18:17:15.737530] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14b02c0) on tqpair=0x142cec0
00:22:22.698 [2024-07-24 18:17:15.737537] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:22.698 [2024-07-24 18:17:15.737541] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:22.698 [2024-07-24 18:17:15.737544] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x142cec0)
00:22:22.698 [2024-07-24 18:17:15.737549] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:22.698 [2024-07-24 18:17:15.737559] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14b02c0, cid 3, qid 0
00:22:22.698 [2024-07-24 18:17:15.737668] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:22.698 [2024-07-24 18:17:15.737674] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:22.698 [2024-07-24 18:17:15.737679] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:22.698 [2024-07-24 18:17:15.737682] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14b02c0) on tqpair=0x142cec0
00:22:22.698 [2024-07-24 18:17:15.737689] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:22.699 [2024-07-24 18:17:15.737693] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:22.699 [2024-07-24 18:17:15.737696] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x142cec0)
00:22:22.699 [2024-07-24 18:17:15.737701] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:22.699 [2024-07-24 18:17:15.737711] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14b02c0, cid 3, qid 0
00:22:22.699 [2024-07-24 18:17:15.737784] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:22.699 [2024-07-24 18:17:15.737790] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:22.699 [2024-07-24 18:17:15.737792] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:22.699 [2024-07-24 18:17:15.737795] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14b02c0) on tqpair=0x142cec0
00:22:22.699 [2024-07-24 18:17:15.737804] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:22.699 [2024-07-24 18:17:15.737807] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:22.699 [2024-07-24 18:17:15.737810] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x142cec0)
00:22:22.699 [2024-07-24 18:17:15.737815] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:22.699 [2024-07-24 18:17:15.737825] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14b02c0, cid 3, qid 0
00:22:22.699 [2024-07-24 18:17:15.737921] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:22.699 [2024-07-24 18:17:15.737926] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:22.699 [2024-07-24 18:17:15.737929] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:22.699 [2024-07-24 18:17:15.737932] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14b02c0) on tqpair=0x142cec0
00:22:22.699 [2024-07-24 18:17:15.737940] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:22.699 [2024-07-24 18:17:15.737943] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:22.699 [2024-07-24 18:17:15.737946] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x142cec0)
00:22:22.699 [2024-07-24 18:17:15.737952] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:22.699 [2024-07-24 18:17:15.737961] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14b02c0, cid 3, qid 0
00:22:22.699 [2024-07-24 18:17:15.738070] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:22.699 [2024-07-24 18:17:15.738076] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:22.699 [2024-07-24 18:17:15.738079] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:22.699 [2024-07-24 18:17:15.738082] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14b02c0) on tqpair=0x142cec0
00:22:22.699 [2024-07-24 18:17:15.738089] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:22.699 [2024-07-24 18:17:15.738093] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:22.699 [2024-07-24 18:17:15.738096] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x142cec0)
00:22:22.699 [2024-07-24 18:17:15.738101] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:22.699 [2024-07-24 18:17:15.738110] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14b02c0, cid 3, qid 0
00:22:22.699 [2024-07-24 18:17:15.738223] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:22.699 [2024-07-24 18:17:15.738229] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:22.699 [2024-07-24 18:17:15.738232] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:22.699 [2024-07-24 18:17:15.738239] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14b02c0) on tqpair=0x142cec0
00:22:22.699 [2024-07-24 18:17:15.738246] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:22.699 [2024-07-24 18:17:15.738250] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:22.699 [2024-07-24 18:17:15.738253] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x142cec0)
00:22:22.699 [2024-07-24 18:17:15.738259] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:22.699 [2024-07-24 18:17:15.738268] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14b02c0, cid 3, qid 0
00:22:22.699 [2024-07-24 18:17:15.738330] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:22.699 [2024-07-24 18:17:15.738336] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:22.699 [2024-07-24 18:17:15.738339] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:22.699 [2024-07-24 18:17:15.738342] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14b02c0) on tqpair=0x142cec0
00:22:22.699 [2024-07-24 18:17:15.738349] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:22.699 [2024-07-24 18:17:15.738353] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:22.699 [2024-07-24 18:17:15.738356] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x142cec0)
00:22:22.699 [2024-07-24 18:17:15.738361] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:22.699 [2024-07-24 18:17:15.738370] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14b02c0, cid 3, qid 0
00:22:22.699 [2024-07-24 18:17:15.738476] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:22.699 [2024-07-24 18:17:15.738481] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:22.699 [2024-07-24 18:17:15.738484] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:22.699 [2024-07-24 18:17:15.738487] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14b02c0) on tqpair=0x142cec0
00:22:22.699 [2024-07-24 18:17:15.738499] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:22.699 [2024-07-24 18:17:15.738503] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:22.699 [2024-07-24 18:17:15.738506] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x142cec0)
00:22:22.699 [2024-07-24 18:17:15.738511] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:22.699 [2024-07-24 18:17:15.738520] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14b02c0, cid 3, qid 0
00:22:22.699 [2024-07-24 18:17:15.738627] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:22.699 [2024-07-24 18:17:15.738633] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:22.699 [2024-07-24 18:17:15.738636] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:22.699 [2024-07-24 18:17:15.738639] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14b02c0) on tqpair=0x142cec0
00:22:22.699 [2024-07-24 18:17:15.738647] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:22.699 [2024-07-24 18:17:15.738650] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:22.699 [2024-07-24 18:17:15.738653] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x142cec0)
00:22:22.699 [2024-07-24 18:17:15.738659] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:22.699 [2024-07-24 18:17:15.738668] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14b02c0, cid 3, qid 0
00:22:22.699 [2024-07-24 18:17:15.738778] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:22.699 [2024-07-24 18:17:15.738784] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:22.699 [2024-07-24 18:17:15.738787] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:22.699 [2024-07-24 18:17:15.738790] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14b02c0) on tqpair=0x142cec0
00:22:22.699 [2024-07-24 18:17:15.738799] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:22.699 [2024-07-24 18:17:15.738803] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:22.699 [2024-07-24 18:17:15.738805] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x142cec0)
00:22:22.699 [2024-07-24 18:17:15.738811] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:22.699 [2024-07-24 18:17:15.738820] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14b02c0, cid 3, qid 0
00:22:22.699 [2024-07-24 18:17:15.738885] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:22.699 [2024-07-24 18:17:15.738890] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:22.699 [2024-07-24 18:17:15.738893] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:22.699 [2024-07-24 18:17:15.738896] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14b02c0) on tqpair=0x142cec0
00:22:22.699 [2024-07-24 18:17:15.738904] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:22.699 [2024-07-24 18:17:15.738907] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:22.699 [2024-07-24 18:17:15.738910] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x142cec0)
00:22:22.699 [2024-07-24 18:17:15.738916] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:22.699 [2024-07-24 18:17:15.738925] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14b02c0, cid 3, qid 0
00:22:22.699 [2024-07-24 18:17:15.739030] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:22.699 [2024-07-24 18:17:15.739035] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:22.699 [2024-07-24 18:17:15.739038] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:22.699 [2024-07-24 18:17:15.739041] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14b02c0) on tqpair=0x142cec0
00:22:22.699 [2024-07-24 18:17:15.739049] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:22.699 [2024-07-24 18:17:15.739052] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:22.699 [2024-07-24 18:17:15.739055] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x142cec0)
00:22:22.699 [2024-07-24 18:17:15.739061] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:22.699 [2024-07-24 18:17:15.739070] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14b02c0, cid 3, qid 0
00:22:22.699 [2024-07-24 18:17:15.739179] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:22.699 [2024-07-24 18:17:15.739185] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:22.699 [2024-07-24 18:17:15.739187] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:22.699 [2024-07-24 18:17:15.739190] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14b02c0) on tqpair=0x142cec0
00:22:22.699 [2024-07-24 18:17:15.739198] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:22.699 [2024-07-24 18:17:15.739202] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:22.699 [2024-07-24 18:17:15.739205] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x142cec0)
00:22:22.699 [2024-07-24 18:17:15.739210] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:22.699 [2024-07-24 18:17:15.739219] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14b02c0, cid 3, qid 0
00:22:22.699 [2024-07-24 18:17:15.739332] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:22.699 [2024-07-24 18:17:15.739338] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:22.699 [2024-07-24 18:17:15.739341] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:22.700 [2024-07-24 18:17:15.739344] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14b02c0) on tqpair=0x142cec0
00:22:22.700 [2024-07-24 18:17:15.739351] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:22.700 [2024-07-24 18:17:15.739356] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:22.700 [2024-07-24 18:17:15.739359] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x142cec0)
00:22:22.700 [2024-07-24 18:17:15.739365] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:22.700 [2024-07-24 18:17:15.739374] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14b02c0, cid 3, qid 0
00:22:22.700 [2024-07-24 18:17:15.739439] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:22.700 [2024-07-24 18:17:15.739445] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:22.700 [2024-07-24 18:17:15.739447] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:22.700 [2024-07-24 18:17:15.739450] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14b02c0) on tqpair=0x142cec0
00:22:22.700 [2024-07-24 18:17:15.739458] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:22.700 [2024-07-24 18:17:15.739462] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:22.700 [2024-07-24 18:17:15.739465] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x142cec0)
00:22:22.700 [2024-07-24 18:17:15.739470] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:22.700 [2024-07-24 18:17:15.739479] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14b02c0, cid 3, qid 0
00:22:22.700 [2024-07-24 18:17:15.743497] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:22.700 [2024-07-24 18:17:15.743506] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:22.700 [2024-07-24 18:17:15.743509] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:22.700 [2024-07-24 18:17:15.743512] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14b02c0) on tqpair=0x142cec0
00:22:22.700 [2024-07-24 18:17:15.743522] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:22.700 [2024-07-24 18:17:15.743526] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:22.700 [2024-07-24 18:17:15.743529] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x142cec0)
00:22:22.700 [2024-07-24 18:17:15.743535] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:22.700 [2024-07-24 18:17:15.743546] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14b02c0, cid 3, qid 0
00:22:22.700 [2024-07-24 18:17:15.743763] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:22.700 [2024-07-24 18:17:15.743768] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:22.700 [2024-07-24 18:17:15.743771] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:22.700 [2024-07-24 18:17:15.743774] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x14b02c0) on tqpair=0x142cec0
00:22:22.700 [2024-07-24 18:17:15.743782] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 8 milliseconds
00:22:22.700
00:22:22.700 18:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all
00:22:22.961 [2024-07-24 18:17:15.779499] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization...
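The "shutdown complete in 8 milliseconds" line a few entries above is the tail of the standard CC.SHN / CSTS.SHST shutdown handshake, carried out over the fabric as the one FABRIC PROPERTY SET followed by the repeated FABRIC PROPERTY GET polling visible in the preceding traces. A minimal sketch of that register-level logic, assuming hypothetical reg_read32()/reg_write32() accessors (over NVMe-oF these would map to Fabrics Property Get/Set commands); register offsets and bit positions are from the NVMe base spec:

    #include <stdbool.h>
    #include <stdint.h>

    #define NVME_REG_CC    0x14        /* controller configuration */
    #define NVME_REG_CSTS  0x1c        /* controller status        */
    #define CC_SHN_MASK    (3u << 14)
    #define CC_SHN_NORMAL  (1u << 14)  /* CC.SHN = 01b: normal shutdown          */
    #define CSTS_SHST_MASK (3u << 2)
    #define CSTS_SHST_DONE (2u << 2)   /* CSTS.SHST = 10b: shutdown complete     */

    /* Hypothetical property accessors, standing in for Property Get/Set. */
    extern uint32_t reg_read32(uint32_t off);
    extern void     reg_write32(uint32_t off, uint32_t val);
    extern void     sleep_ms(unsigned ms);

    static bool controller_shutdown(unsigned timeout_ms)
    {
        uint32_t cc = reg_read32(NVME_REG_CC);
        reg_write32(NVME_REG_CC, (cc & ~CC_SHN_MASK) | CC_SHN_NORMAL);

        /* Poll CSTS.SHST, as the repeated FABRIC PROPERTY GETs above do;
         * the log used a 10000 ms timeout and finished in ~8 ms. */
        for (unsigned ms = 0; ms < timeout_ms; ms++) {
            if ((reg_read32(NVME_REG_CSTS) & CSTS_SHST_MASK) == CSTS_SHST_DONE)
                return true;
            sleep_ms(1);
        }
        return false;
    }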
00:22:22.961 [2024-07-24 18:17:15.779532] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3484932 ]
00:22:22.961 EAL: No free 2048 kB hugepages reported on node 1
00:22:22.961 [2024-07-24 18:17:15.808505] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout)
00:22:22.961 [2024-07-24 18:17:15.808546] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2
00:22:22.961 [2024-07-24 18:17:15.808550] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420
00:22:22.961 [2024-07-24 18:17:15.808560] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null)
00:22:22.961 [2024-07-24 18:17:15.808567] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix
00:22:22.961 [2024-07-24 18:17:15.808839] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout)
00:22:22.961 [2024-07-24 18:17:15.808862] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xc47ec0 0
00:22:22.961 [2024-07-24 18:17:15.822496] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1
00:22:22.961 [2024-07-24 18:17:15.822511] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1
00:22:22.961 [2024-07-24 18:17:15.822515] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0
00:22:22.961 [2024-07-24 18:17:15.822518] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0
00:22:22.961 [2024-07-24 18:17:15.822549] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:22.961 [2024-07-24 18:17:15.822553] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:22.961 [2024-07-24 18:17:15.822557] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc47ec0)
00:22:22.961 [2024-07-24 18:17:15.822568] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:22:22.961 [2024-07-24 18:17:15.822583] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xccae40, cid 0, qid 0
00:22:22.961 [2024-07-24 18:17:15.829498] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:22.961 [2024-07-24 18:17:15.829506] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:22.961 [2024-07-24 18:17:15.829509] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:22.961 [2024-07-24 18:17:15.829513] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xccae40) on tqpair=0xc47ec0
00:22:22.961 [2024-07-24 18:17:15.829521] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001
00:22:22.961 [2024-07-24 18:17:15.829526] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout)
00:22:22.961 [2024-07-24 18:17:15.829530] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout)
00:22:22.961 [2024-07-24 18:17:15.829540] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:22.961 [2024-07-24 18:17:15.829544] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:22.961 [2024-07-24 18:17:15.829547] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc47ec0)
00:22:22.961 [2024-07-24 18:17:15.829553] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:22.961 [2024-07-24 18:17:15.829566] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xccae40, cid 0, qid 0
00:22:22.961 [2024-07-24 18:17:15.829724] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:22.961 [2024-07-24 18:17:15.829730] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:22.961 [2024-07-24 18:17:15.829733] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:22.961 [2024-07-24 18:17:15.829736] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xccae40) on tqpair=0xc47ec0
00:22:22.961 [2024-07-24 18:17:15.829742] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout)
00:22:22.961 [2024-07-24 18:17:15.829748] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout)
00:22:22.961 [2024-07-24 18:17:15.829754] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:22.961 [2024-07-24 18:17:15.829757] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:22.961 [2024-07-24 18:17:15.829763] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc47ec0)
00:22:22.961 [2024-07-24 18:17:15.829769] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:22.961 [2024-07-24 18:17:15.829779] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xccae40, cid 0, qid 0
00:22:22.961 [2024-07-24 18:17:15.829850] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:22.961 [2024-07-24 18:17:15.829855] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:22.961 [2024-07-24 18:17:15.829858] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:22.961 [2024-07-24 18:17:15.829861] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xccae40) on tqpair=0xc47ec0
00:22:22.961 [2024-07-24 18:17:15.829865] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout)
00:22:22.961 [2024-07-24 18:17:15.829871] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms)
00:22:22.961 [2024-07-24 18:17:15.829877] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:22.961 [2024-07-24 18:17:15.829880] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:22.961 [2024-07-24 18:17:15.829883] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc47ec0)
00:22:22.961 [2024-07-24 18:17:15.829888] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:22.961 [2024-07-24 18:17:15.829897] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xccae40, cid 0, qid 0
00:22:22.961 [2024-07-24 18:17:15.829968] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:22.961 [2024-07-24 18:17:15.829973] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:22.962 [2024-07-24 18:17:15.829976] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:22.962 [2024-07-24 18:17:15.829979] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xccae40) on tqpair=0xc47ec0
00:22:22.962 [2024-07-24 18:17:15.829983] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms)
00:22:22.962 [2024-07-24 18:17:15.829991] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:22.962 [2024-07-24 18:17:15.829994] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:22.962 [2024-07-24 18:17:15.829997] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc47ec0)
00:22:22.962 [2024-07-24 18:17:15.830003] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:22.962 [2024-07-24 18:17:15.830011] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xccae40, cid 0, qid 0
00:22:22.962 [2024-07-24 18:17:15.830073] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:22.962 [2024-07-24 18:17:15.830079] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:22.962 [2024-07-24 18:17:15.830082] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:22.962 [2024-07-24 18:17:15.830085] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xccae40) on tqpair=0xc47ec0
00:22:22.962 [2024-07-24 18:17:15.830088] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0
00:22:22.962 [2024-07-24 18:17:15.830092] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms)
00:22:22.962 [2024-07-24 18:17:15.830099] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms)
00:22:22.962 [2024-07-24 18:17:15.830204] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1
00:22:22.962 [2024-07-24 18:17:15.830207] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms)
00:22:22.962 [2024-07-24 18:17:15.830215] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:22.962 [2024-07-24 18:17:15.830218] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:22.962 [2024-07-24 18:17:15.830221] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc47ec0)
00:22:22.962 [2024-07-24 18:17:15.830227] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:22.962 [2024-07-24 18:17:15.830236] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xccae40, cid 0, qid 0
00:22:22.962 [2024-07-24 18:17:15.830302] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:22.962 [2024-07-24 18:17:15.830307] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:22.962 [2024-07-24 18:17:15.830310] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:22.962 [2024-07-24 18:17:15.830313] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xccae40) on tqpair=0xc47ec0
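The "Setting CC.EN = 1" line above, together with the "wait for CSTS.RDY = 1" state that follows, is the generic NVMe controller enable handshake, the inverse of the "disable and wait for CSTS.RDY = 0" step just before it. A sketch continuing the hypothetical accessors and register defines from the shutdown sketch earlier (note a real driver also programs CC.CSS/MPS/IOSQES/IOCQES before setting EN, which is omitted here):

    #define CC_EN    (1u << 0)   /* CC.EN,    bit 0 */
    #define CSTS_RDY (1u << 0)   /* CSTS.RDY, bit 0 */

    static bool controller_enable(unsigned timeout_ms)
    {
        /* "disable and wait for CSTS.RDY = 0" */
        reg_write32(NVME_REG_CC, reg_read32(NVME_REG_CC) & ~CC_EN);
        for (unsigned ms = 0; reg_read32(NVME_REG_CSTS) & CSTS_RDY; ms++) {
            if (ms >= timeout_ms)
                return false;
            sleep_ms(1);
        }

        /* "enable controller by writing CC.EN = 1", then wait for RDY = 1 */
        reg_write32(NVME_REG_CC, reg_read32(NVME_REG_CC) | CC_EN);
        for (unsigned ms = 0; !(reg_read32(NVME_REG_CSTS) & CSTS_RDY); ms++) {
            if (ms >= timeout_ms)
                return false;
            sleep_ms(1);
        }
        return true;             /* "CC.EN = 1 && CSTS.RDY = 1 - controller is ready" */
    }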
00:22:22.962 [2024-07-24 18:17:15.830317] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms)
00:22:22.962 [2024-07-24 18:17:15.830324] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:22.962 [2024-07-24 18:17:15.830328] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:22.962 [2024-07-24 18:17:15.830330] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc47ec0)
00:22:22.962 [2024-07-24 18:17:15.830336] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:22.962 [2024-07-24 18:17:15.830345] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xccae40, cid 0, qid 0
00:22:22.962 [2024-07-24 18:17:15.830420] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:22.962 [2024-07-24 18:17:15.830425] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:22.962 [2024-07-24 18:17:15.830428] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:22.962 [2024-07-24 18:17:15.830431] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xccae40) on tqpair=0xc47ec0
00:22:22.962 [2024-07-24 18:17:15.830435] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready
00:22:22.962 [2024-07-24 18:17:15.830438] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms)
00:22:22.962 [2024-07-24 18:17:15.830445] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout)
00:22:22.962 [2024-07-24 18:17:15.830455] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms)
00:22:22.962 [2024-07-24 18:17:15.830462] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:22.962 [2024-07-24 18:17:15.830465] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc47ec0)
00:22:22.962 [2024-07-24 18:17:15.830471] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:22.962 [2024-07-24 18:17:15.830480] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xccae40, cid 0, qid 0
00:22:22.962 [2024-07-24 18:17:15.830582] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:22:22.962 [2024-07-24 18:17:15.830588] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:22:22.962 [2024-07-24 18:17:15.830591] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:22:22.962 [2024-07-24 18:17:15.830594] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc47ec0): datao=0, datal=4096, cccid=0
00:22:22.962 [2024-07-24 18:17:15.830598] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xccae40) on tqpair(0xc47ec0): expected_datao=0, payload_size=4096
00:22:22.962 [2024-07-24 18:17:15.830601] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:22.962 [2024-07-24 18:17:15.830620] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:22:22.962 [2024-07-24 18:17:15.830624] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:22:22.962 [2024-07-24 18:17:15.871625] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:22.962 [2024-07-24 18:17:15.871635] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:22.962 [2024-07-24 18:17:15.871638] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:22.962 [2024-07-24 18:17:15.871642] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xccae40) on tqpair=0xc47ec0
00:22:22.962 [2024-07-24 18:17:15.871648] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295
00:22:22.962 [2024-07-24 18:17:15.871652] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072
00:22:22.962 [2024-07-24 18:17:15.871656] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001
00:22:22.962 [2024-07-24 18:17:15.871659] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16
00:22:22.962 [2024-07-24 18:17:15.871663] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1
00:22:22.962 [2024-07-24 18:17:15.871667] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms)
00:22:22.962 [2024-07-24 18:17:15.871675] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms)
00:22:22.962 [2024-07-24 18:17:15.871684] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:22.962 [2024-07-24 18:17:15.871688] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:22.962 [2024-07-24 18:17:15.871691] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc47ec0)
00:22:22.962 [2024-07-24 18:17:15.871697] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0
00:22:22.962 [2024-07-24 18:17:15.871709] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xccae40, cid 0, qid 0
00:22:22.962 [2024-07-24 18:17:15.871791] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:22.962 [2024-07-24 18:17:15.871796] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:22.962 [2024-07-24 18:17:15.871799] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:22.962 [2024-07-24 18:17:15.871802] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xccae40) on tqpair=0xc47ec0
00:22:22.962 [2024-07-24 18:17:15.871808] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:22.962 [2024-07-24 18:17:15.871811] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:22.962 [2024-07-24 18:17:15.871814] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc47ec0)
00:22:22.962 [2024-07-24 18:17:15.871819] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:22.962 [2024-07-24 18:17:15.871824] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:22.962 [2024-07-24 18:17:15.871827] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:22.962 [2024-07-24 18:17:15.871830] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xc47ec0)
00:22:22.962 [2024-07-24 18:17:15.871834] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:22:22.962 [2024-07-24 18:17:15.871839] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:22.962 [2024-07-24 18:17:15.871842] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:22.962 [2024-07-24 18:17:15.871845] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xc47ec0)
00:22:22.962 [2024-07-24 18:17:15.871850] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:22:22.962 [2024-07-24 18:17:15.871854] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:22.962 [2024-07-24 18:17:15.871860] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:22.962 [2024-07-24 18:17:15.871863] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc47ec0)
00:22:22.962 [2024-07-24 18:17:15.871867] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:22:22.962 [2024-07-24 18:17:15.871871] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms)
00:22:22.962 [2024-07-24 18:17:15.871880] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms)
00:22:22.962 [2024-07-24 18:17:15.871886] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:22.962 [2024-07-24 18:17:15.871889] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc47ec0)
00:22:22.962 [2024-07-24 18:17:15.871894] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:22.963 [2024-07-24 18:17:15.871905] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xccae40, cid 0, qid 0
00:22:22.963 [2024-07-24 18:17:15.871909] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xccafc0, cid 1, qid 0
00:22:22.963 [2024-07-24 18:17:15.871913] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xccb140, cid 2, qid 0
00:22:22.963 [2024-07-24 18:17:15.871917] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xccb2c0, cid 3, qid 0
00:22:22.963 [2024-07-24 18:17:15.871921] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xccb440, cid 4, qid 0
00:22:22.963 [2024-07-24 18:17:15.872020] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:22.963 [2024-07-24 18:17:15.872026] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:22.963 [2024-07-24 18:17:15.872029] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:22.963 [2024-07-24 18:17:15.872032] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xccb440) on tqpair=0xc47ec0
00:22:22.963 [2024-07-24 18:17:15.872035] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us
00:22:22.963 [2024-07-24 18:17:15.872039] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms)
00:22:22.963 [2024-07-24 18:17:15.872048] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms)
00:22:22.963 [2024-07-24 18:17:15.872053] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms)
00:22:22.963 [2024-07-24 18:17:15.872058] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:22.963 [2024-07-24 18:17:15.872062] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:22.963 [2024-07-24 18:17:15.872064] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc47ec0)
00:22:22.963 [2024-07-24 18:17:15.872070] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:22:22.963 [2024-07-24 18:17:15.872079] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xccb440, cid 4, qid 0
00:22:22.963 [2024-07-24 18:17:15.872147] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:22.963 [2024-07-24 18:17:15.872152] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:22.963 [2024-07-24 18:17:15.872155] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:22.963 [2024-07-24 18:17:15.872158] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xccb440) on tqpair=0xc47ec0
00:22:22.963 [2024-07-24 18:17:15.872211] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms)
00:22:22.963 [2024-07-24 18:17:15.872219] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms)
00:22:22.963 [2024-07-24 18:17:15.872227] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:22.963 [2024-07-24 18:17:15.872231] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc47ec0)
00:22:22.963 [2024-07-24 18:17:15.872236] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:22.963 [2024-07-24 18:17:15.872245] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xccb440, cid 4, qid 0
00:22:22.963 [2024-07-24 18:17:15.872324] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:22:22.963 [2024-07-24 18:17:15.872329] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:22:22.963 [2024-07-24 18:17:15.872332] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:22:22.963 [2024-07-24 18:17:15.872335] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc47ec0): datao=0, datal=4096, cccid=4
00:22:22.963 [2024-07-24 18:17:15.872339] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xccb440) on tqpair(0xc47ec0): expected_datao=0, payload_size=4096
00:22:22.963 [2024-07-24 18:17:15.872342] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:22.963 [2024-07-24 18:17:15.872348] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:22:22.963 [2024-07-24 18:17:15.872351] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:22:22.963 [2024-07-24 18:17:15.872370] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:22.963 [2024-07-24 18:17:15.872375] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:22.963 [2024-07-24 18:17:15.872378] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:22.963 [2024-07-24 18:17:15.872382] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xccb440) on tqpair=0xc47ec0
00:22:22.963 [2024-07-24 18:17:15.872389] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added
00:22:22.963 [2024-07-24 18:17:15.872396] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms)
00:22:22.963 [2024-07-24 18:17:15.872404] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms)
00:22:22.963 [2024-07-24 18:17:15.872410] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:22.963 [2024-07-24 18:17:15.872413] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc47ec0)
00:22:22.963 [2024-07-24 18:17:15.872418] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:22.963 [2024-07-24 18:17:15.872427] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xccb440, cid 4, qid 0
00:22:22.963 [2024-07-24 18:17:15.872511] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:22:22.963 [2024-07-24 18:17:15.872517] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:22:22.963 [2024-07-24 18:17:15.872520] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:22:22.963 [2024-07-24 18:17:15.872523] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc47ec0): datao=0, datal=4096, cccid=4
00:22:22.963 [2024-07-24 18:17:15.872527] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xccb440) on tqpair(0xc47ec0): expected_datao=0, payload_size=4096
00:22:22.963 [2024-07-24 18:17:15.872530] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:22.963 [2024-07-24 18:17:15.872536] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:22:22.963 [2024-07-24 18:17:15.872539] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:22:22.963 [2024-07-24 18:17:15.872552] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:22.963 [2024-07-24 18:17:15.872557] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:22.963 [2024-07-24 18:17:15.872560] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:22.963 [2024-07-24 18:17:15.872563] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xccb440) on tqpair=0xc47ec0
00:22:22.963 [2024-07-24 18:17:15.872576] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms)
00:22:22.963 [2024-07-24 18:17:15.872584] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms)
00:22:22.963 [2024-07-24 18:17:15.872591] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:22.963 [2024-07-24 18:17:15.872594] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc47ec0)
00:22:22.963 [2024-07-24 18:17:15.872599] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.963 [2024-07-24 18:17:15.872609] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xccb440, cid 4, qid 0 00:22:22.963 [2024-07-24 18:17:15.872688] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:22.963 [2024-07-24 18:17:15.872694] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:22.963 [2024-07-24 18:17:15.872696] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:22.963 [2024-07-24 18:17:15.872699] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc47ec0): datao=0, datal=4096, cccid=4 00:22:22.963 [2024-07-24 18:17:15.872703] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xccb440) on tqpair(0xc47ec0): expected_datao=0, payload_size=4096 00:22:22.963 [2024-07-24 18:17:15.872706] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:22.963 [2024-07-24 18:17:15.872712] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:22.963 [2024-07-24 18:17:15.872715] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:22.963 [2024-07-24 18:17:15.872733] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:22.963 [2024-07-24 18:17:15.872738] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:22.963 [2024-07-24 18:17:15.872741] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:22.963 [2024-07-24 18:17:15.872744] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xccb440) on tqpair=0xc47ec0 00:22:22.963 [2024-07-24 18:17:15.872750] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:22:22.963 [2024-07-24 18:17:15.872756] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:22:22.963 [2024-07-24 18:17:15.872764] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:22:22.963 [2024-07-24 18:17:15.872770] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:22:22.963 [2024-07-24 18:17:15.872775] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:22:22.963 [2024-07-24 18:17:15.872779] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:22:22.963 [2024-07-24 18:17:15.872783] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:22:22.963 [2024-07-24 18:17:15.872787] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:22:22.964 [2024-07-24 18:17:15.872791] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:22:22.964 [2024-07-24 18:17:15.872803] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:22.964 [2024-07-24 18:17:15.872807] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc47ec0) 00:22:22.964 [2024-07-24 18:17:15.872812] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES 
ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.964 [2024-07-24 18:17:15.872817] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:22.964 [2024-07-24 18:17:15.872822] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:22.964 [2024-07-24 18:17:15.872825] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xc47ec0) 00:22:22.964 [2024-07-24 18:17:15.872830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:22.964 [2024-07-24 18:17:15.872842] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xccb440, cid 4, qid 0 00:22:22.964 [2024-07-24 18:17:15.872846] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xccb5c0, cid 5, qid 0 00:22:22.964 [2024-07-24 18:17:15.872932] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:22.964 [2024-07-24 18:17:15.872938] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:22.964 [2024-07-24 18:17:15.872941] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:22.964 [2024-07-24 18:17:15.872944] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xccb440) on tqpair=0xc47ec0 00:22:22.964 [2024-07-24 18:17:15.872949] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:22.964 [2024-07-24 18:17:15.872953] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:22.964 [2024-07-24 18:17:15.872956] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:22.964 [2024-07-24 18:17:15.872959] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xccb5c0) on tqpair=0xc47ec0 00:22:22.964 [2024-07-24 18:17:15.872967] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:22.964 [2024-07-24 18:17:15.872970] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xc47ec0) 00:22:22.964 [2024-07-24 18:17:15.872975] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.964 [2024-07-24 18:17:15.872984] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xccb5c0, cid 5, qid 0 00:22:22.964 [2024-07-24 18:17:15.873052] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:22.964 [2024-07-24 18:17:15.873058] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:22.964 [2024-07-24 18:17:15.873061] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:22.964 [2024-07-24 18:17:15.873064] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xccb5c0) on tqpair=0xc47ec0 00:22:22.964 [2024-07-24 18:17:15.873071] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:22.964 [2024-07-24 18:17:15.873074] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xc47ec0) 00:22:22.964 [2024-07-24 18:17:15.873079] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.964 [2024-07-24 18:17:15.873088] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xccb5c0, cid 5, qid 0 00:22:22.964 [2024-07-24 18:17:15.873156] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:22.964 [2024-07-24 18:17:15.873162] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:22:22.964 [2024-07-24 18:17:15.873165] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:22.964 [2024-07-24 18:17:15.873168] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xccb5c0) on tqpair=0xc47ec0 00:22:22.964 [2024-07-24 18:17:15.873175] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:22.964 [2024-07-24 18:17:15.873179] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xc47ec0) 00:22:22.964 [2024-07-24 18:17:15.873184] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.964 [2024-07-24 18:17:15.873193] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xccb5c0, cid 5, qid 0 00:22:22.964 [2024-07-24 18:17:15.873254] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:22.964 [2024-07-24 18:17:15.873260] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:22.964 [2024-07-24 18:17:15.873263] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:22.964 [2024-07-24 18:17:15.873267] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xccb5c0) on tqpair=0xc47ec0 00:22:22.964 [2024-07-24 18:17:15.873279] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:22.964 [2024-07-24 18:17:15.873282] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xc47ec0) 00:22:22.964 [2024-07-24 18:17:15.873288] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.964 [2024-07-24 18:17:15.873293] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:22.964 [2024-07-24 18:17:15.873297] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc47ec0) 00:22:22.964 [2024-07-24 18:17:15.873302] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.964 [2024-07-24 18:17:15.873307] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:22.964 [2024-07-24 18:17:15.873310] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xc47ec0) 00:22:22.964 [2024-07-24 18:17:15.873315] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.964 [2024-07-24 18:17:15.873321] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:22.964 [2024-07-24 18:17:15.873324] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xc47ec0) 00:22:22.964 [2024-07-24 18:17:15.873329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.964 [2024-07-24 18:17:15.873339] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xccb5c0, cid 5, qid 0 00:22:22.964 [2024-07-24 18:17:15.873343] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xccb440, cid 4, qid 0 00:22:22.964 [2024-07-24 18:17:15.873347] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xccb740, cid 6, qid 0 00:22:22.964 [2024-07-24 
18:17:15.873351] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xccb8c0, cid 7, qid 0 00:22:22.964 [2024-07-24 18:17:15.873486] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:22.964 [2024-07-24 18:17:15.877496] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:22.964 [2024-07-24 18:17:15.877502] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:22.964 [2024-07-24 18:17:15.877504] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc47ec0): datao=0, datal=8192, cccid=5 00:22:22.964 [2024-07-24 18:17:15.877508] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xccb5c0) on tqpair(0xc47ec0): expected_datao=0, payload_size=8192 00:22:22.964 [2024-07-24 18:17:15.877512] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:22.964 [2024-07-24 18:17:15.877523] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:22.964 [2024-07-24 18:17:15.877527] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:22.964 [2024-07-24 18:17:15.877535] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:22.964 [2024-07-24 18:17:15.877539] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:22.964 [2024-07-24 18:17:15.877542] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:22.964 [2024-07-24 18:17:15.877545] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc47ec0): datao=0, datal=512, cccid=4 00:22:22.964 [2024-07-24 18:17:15.877549] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xccb440) on tqpair(0xc47ec0): expected_datao=0, payload_size=512 00:22:22.964 [2024-07-24 18:17:15.877552] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:22.964 [2024-07-24 18:17:15.877558] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:22.964 [2024-07-24 18:17:15.877561] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:22.964 [2024-07-24 18:17:15.877565] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:22.964 [2024-07-24 18:17:15.877572] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:22.964 [2024-07-24 18:17:15.877575] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:22.964 [2024-07-24 18:17:15.877578] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc47ec0): datao=0, datal=512, cccid=6 00:22:22.964 [2024-07-24 18:17:15.877581] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xccb740) on tqpair(0xc47ec0): expected_datao=0, payload_size=512 00:22:22.964 [2024-07-24 18:17:15.877585] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:22.964 [2024-07-24 18:17:15.877590] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:22.964 [2024-07-24 18:17:15.877593] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:22.964 [2024-07-24 18:17:15.877598] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:22.964 [2024-07-24 18:17:15.877602] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:22.964 [2024-07-24 18:17:15.877605] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:22.964 [2024-07-24 18:17:15.877608] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc47ec0): datao=0, datal=4096, cccid=7 00:22:22.964 [2024-07-24 18:17:15.877612] 
nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xccb8c0) on tqpair(0xc47ec0): expected_datao=0, payload_size=4096 00:22:22.964 [2024-07-24 18:17:15.877615] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:22.964 [2024-07-24 18:17:15.877620] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:22.964 [2024-07-24 18:17:15.877624] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:22.964 [2024-07-24 18:17:15.877628] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:22.964 [2024-07-24 18:17:15.877633] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:22.964 [2024-07-24 18:17:15.877635] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:22.964 [2024-07-24 18:17:15.877639] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xccb5c0) on tqpair=0xc47ec0 00:22:22.964 [2024-07-24 18:17:15.877649] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:22.964 [2024-07-24 18:17:15.877653] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:22.964 [2024-07-24 18:17:15.877656] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:22.964 [2024-07-24 18:17:15.877660] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xccb440) on tqpair=0xc47ec0 00:22:22.964 [2024-07-24 18:17:15.877667] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:22.964 [2024-07-24 18:17:15.877672] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:22.964 [2024-07-24 18:17:15.877675] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:22.964 [2024-07-24 18:17:15.877678] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xccb740) on tqpair=0xc47ec0 00:22:22.964 [2024-07-24 18:17:15.877684] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:22.964 [2024-07-24 18:17:15.877688] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:22.965 [2024-07-24 18:17:15.877691] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:22.965 [2024-07-24 18:17:15.877694] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xccb8c0) on tqpair=0xc47ec0 00:22:22.965 ===================================================== 00:22:22.965 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:22.965 ===================================================== 00:22:22.965 Controller Capabilities/Features 00:22:22.965 ================================ 00:22:22.965 Vendor ID: 8086 00:22:22.965 Subsystem Vendor ID: 8086 00:22:22.965 Serial Number: SPDK00000000000001 00:22:22.965 Model Number: SPDK bdev Controller 00:22:22.965 Firmware Version: 24.09 00:22:22.965 Recommended Arb Burst: 6 00:22:22.965 IEEE OUI Identifier: e4 d2 5c 00:22:22.965 Multi-path I/O 00:22:22.965 May have multiple subsystem ports: Yes 00:22:22.965 May have multiple controllers: Yes 00:22:22.965 Associated with SR-IOV VF: No 00:22:22.965 Max Data Transfer Size: 131072 00:22:22.965 Max Number of Namespaces: 32 00:22:22.965 Max Number of I/O Queues: 127 00:22:22.965 NVMe Specification Version (VS): 1.3 00:22:22.965 NVMe Specification Version (Identify): 1.3 00:22:22.965 Maximum Queue Entries: 128 00:22:22.965 Contiguous Queues Required: Yes 00:22:22.965 Arbitration Mechanisms Supported 00:22:22.965 Weighted Round Robin: Not Supported 00:22:22.965 Vendor Specific: Not Supported 00:22:22.965 Reset Timeout: 15000 ms 00:22:22.965 
Doorbell Stride: 4 bytes 00:22:22.965 NVM Subsystem Reset: Not Supported 00:22:22.965 Command Sets Supported 00:22:22.965 NVM Command Set: Supported 00:22:22.965 Boot Partition: Not Supported 00:22:22.965 Memory Page Size Minimum: 4096 bytes 00:22:22.965 Memory Page Size Maximum: 4096 bytes 00:22:22.965 Persistent Memory Region: Not Supported 00:22:22.965 Optional Asynchronous Events Supported 00:22:22.965 Namespace Attribute Notices: Supported 00:22:22.965 Firmware Activation Notices: Not Supported 00:22:22.965 ANA Change Notices: Not Supported 00:22:22.965 PLE Aggregate Log Change Notices: Not Supported 00:22:22.965 LBA Status Info Alert Notices: Not Supported 00:22:22.965 EGE Aggregate Log Change Notices: Not Supported 00:22:22.965 Normal NVM Subsystem Shutdown event: Not Supported 00:22:22.965 Zone Descriptor Change Notices: Not Supported 00:22:22.965 Discovery Log Change Notices: Not Supported 00:22:22.965 Controller Attributes 00:22:22.965 128-bit Host Identifier: Supported 00:22:22.965 Non-Operational Permissive Mode: Not Supported 00:22:22.965 NVM Sets: Not Supported 00:22:22.965 Read Recovery Levels: Not Supported 00:22:22.965 Endurance Groups: Not Supported 00:22:22.965 Predictable Latency Mode: Not Supported 00:22:22.965 Traffic Based Keep Alive: Not Supported 00:22:22.965 Namespace Granularity: Not Supported 00:22:22.965 SQ Associations: Not Supported 00:22:22.965 UUID List: Not Supported 00:22:22.965 Multi-Domain Subsystem: Not Supported 00:22:22.965 Fixed Capacity Management: Not Supported 00:22:22.965 Variable Capacity Management: Not Supported 00:22:22.965 Delete Endurance Group: Not Supported 00:22:22.965 Delete NVM Set: Not Supported 00:22:22.965 Extended LBA Formats Supported: Not Supported 00:22:22.965 Flexible Data Placement Supported: Not Supported 00:22:22.965 00:22:22.965 Controller Memory Buffer Support 00:22:22.965 ================================ 00:22:22.965 Supported: No 00:22:22.965 00:22:22.965 Persistent Memory Region Support 00:22:22.965 ================================ 00:22:22.965 Supported: No 00:22:22.965 00:22:22.965 Admin Command Set Attributes 00:22:22.965 ============================ 00:22:22.965 Security Send/Receive: Not Supported 00:22:22.965 Format NVM: Not Supported 00:22:22.965 Firmware Activate/Download: Not Supported 00:22:22.965 Namespace Management: Not Supported 00:22:22.965 Device Self-Test: Not Supported 00:22:22.965 Directives: Not Supported 00:22:22.965 NVMe-MI: Not Supported 00:22:22.965 Virtualization Management: Not Supported 00:22:22.965 Doorbell Buffer Config: Not Supported 00:22:22.965 Get LBA Status Capability: Not Supported 00:22:22.965 Command & Feature Lockdown Capability: Not Supported 00:22:22.965 Abort Command Limit: 4 00:22:22.965 Async Event Request Limit: 4 00:22:22.965 Number of Firmware Slots: N/A 00:22:22.965 Firmware Slot 1 Read-Only: N/A 00:22:22.965 Firmware Activation Without Reset: N/A 00:22:22.965 Multiple Update Detection Support: N/A 00:22:22.965 Firmware Update Granularity: No Information Provided 00:22:22.965 Per-Namespace SMART Log: No 00:22:22.965 Asymmetric Namespace Access Log Page: Not Supported 00:22:22.965 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:22:22.965 Command Effects Log Page: Supported 00:22:22.965 Get Log Page Extended Data: Supported 00:22:22.965 Telemetry Log Pages: Not Supported 00:22:22.965 Persistent Event Log Pages: Not Supported 00:22:22.965 Supported Log Pages Log Page: May Support 00:22:22.965 Commands Supported & Effects Log Page: Not Supported 00:22:22.965 Feature Identifiers &
Effects Log Page: May Support 00:22:22.965 NVMe-MI Commands & Effects Log Page: May Support 00:22:22.965 Data Area 4 for Telemetry Log: Not Supported 00:22:22.965 Error Log Page Entries Supported: 128 00:22:22.965 Keep Alive: Supported 00:22:22.965 Keep Alive Granularity: 10000 ms 00:22:22.965 00:22:22.965 NVM Command Set Attributes 00:22:22.965 ========================== 00:22:22.965 Submission Queue Entry Size 00:22:22.965 Max: 64 00:22:22.965 Min: 64 00:22:22.965 Completion Queue Entry Size 00:22:22.965 Max: 16 00:22:22.965 Min: 16 00:22:22.965 Number of Namespaces: 32 00:22:22.965 Compare Command: Supported 00:22:22.965 Write Uncorrectable Command: Not Supported 00:22:22.965 Dataset Management Command: Supported 00:22:22.965 Write Zeroes Command: Supported 00:22:22.965 Set Features Save Field: Not Supported 00:22:22.965 Reservations: Supported 00:22:22.965 Timestamp: Not Supported 00:22:22.965 Copy: Supported 00:22:22.965 Volatile Write Cache: Present 00:22:22.965 Atomic Write Unit (Normal): 1 00:22:22.965 Atomic Write Unit (PFail): 1 00:22:22.965 Atomic Compare & Write Unit: 1 00:22:22.965 Fused Compare & Write: Supported 00:22:22.965 Scatter-Gather List 00:22:22.965 SGL Command Set: Supported 00:22:22.965 SGL Keyed: Supported 00:22:22.965 SGL Bit Bucket Descriptor: Not Supported 00:22:22.965 SGL Metadata Pointer: Not Supported 00:22:22.965 Oversized SGL: Not Supported 00:22:22.965 SGL Metadata Address: Not Supported 00:22:22.965 SGL Offset: Supported 00:22:22.965 Transport SGL Data Block: Not Supported 00:22:22.965 Replay Protected Memory Block: Not Supported 00:22:22.965 00:22:22.965 Firmware Slot Information 00:22:22.965 ========================= 00:22:22.965 Active slot: 1 00:22:22.965 Slot 1 Firmware Revision: 24.09 00:22:22.965 00:22:22.965 00:22:22.965 Commands Supported and Effects 00:22:22.965 ============================== 00:22:22.965 Admin Commands 00:22:22.965 -------------- 00:22:22.965 Get Log Page (02h): Supported 00:22:22.965 Identify (06h): Supported 00:22:22.965 Abort (08h): Supported 00:22:22.965 Set Features (09h): Supported 00:22:22.965 Get Features (0Ah): Supported 00:22:22.965 Asynchronous Event Request (0Ch): Supported 00:22:22.965 Keep Alive (18h): Supported 00:22:22.965 I/O Commands 00:22:22.965 ------------ 00:22:22.965 Flush (00h): Supported LBA-Change 00:22:22.965 Write (01h): Supported LBA-Change 00:22:22.965 Read (02h): Supported 00:22:22.965 Compare (05h): Supported 00:22:22.965 Write Zeroes (08h): Supported LBA-Change 00:22:22.965 Dataset Management (09h): Supported LBA-Change 00:22:22.965 Copy (19h): Supported LBA-Change 00:22:22.965 00:22:22.965 Error Log 00:22:22.965 ========= 00:22:22.965 00:22:22.965 Arbitration 00:22:22.965 =========== 00:22:22.965 Arbitration Burst: 1 00:22:22.965 00:22:22.965 Power Management 00:22:22.965 ================ 00:22:22.965 Number of Power States: 1 00:22:22.965 Current Power State: Power State #0 00:22:22.965 Power State #0: 00:22:22.965 Max Power: 0.00 W 00:22:22.965 Non-Operational State: Operational 00:22:22.965 Entry Latency: Not Reported 00:22:22.965 Exit Latency: Not Reported 00:22:22.965 Relative Read Throughput: 0 00:22:22.965 Relative Read Latency: 0 00:22:22.965 Relative Write Throughput: 0 00:22:22.965 Relative Write Latency: 0 00:22:22.965 Idle Power: Not Reported 00:22:22.965 Active Power: Not Reported 00:22:22.965 Non-Operational Permissive Mode: Not Supported 00:22:22.965 00:22:22.966 Health Information 00:22:22.966 ================== 00:22:22.966 Critical Warnings: 00:22:22.966 Available Spare Space:
OK 00:22:22.966 Temperature: OK 00:22:22.966 Device Reliability: OK 00:22:22.966 Read Only: No 00:22:22.966 Volatile Memory Backup: OK 00:22:22.966 Current Temperature: 0 Kelvin (-273 Celsius) 00:22:22.966 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:22:22.966 Available Spare: 0% 00:22:22.966 Available Spare Threshold: 0% 00:22:22.966 Life Percentage Used: 0% 00:22:22.968 [2024-07-24 18:17:15.877774] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:22.966 [2024-07-24 18:17:15.877778] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xc47ec0) 00:22:22.966 [2024-07-24 18:17:15.877785] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.966 [2024-07-24 18:17:15.877797] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xccb8c0, cid 7, qid 0 00:22:22.966 [2024-07-24 18:17:15.877970] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:22.966 [2024-07-24 18:17:15.877976] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:22.966 [2024-07-24 18:17:15.877978] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:22.966 [2024-07-24 18:17:15.877982] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xccb8c0) on tqpair=0xc47ec0 00:22:22.966 [2024-07-24 18:17:15.878009] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:22:22.966 [2024-07-24 18:17:15.878018] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xccae40) on tqpair=0xc47ec0 00:22:22.966 [2024-07-24 18:17:15.878024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.966 [2024-07-24 18:17:15.878028] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xccafc0) on tqpair=0xc47ec0 00:22:22.966 [2024-07-24 18:17:15.878031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.966 [2024-07-24 18:17:15.878036] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xccb140) on tqpair=0xc47ec0 00:22:22.966 [2024-07-24 18:17:15.878039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.966 [2024-07-24 18:17:15.878043] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xccb2c0) on tqpair=0xc47ec0 00:22:22.966 [2024-07-24 18:17:15.878047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:22.966 [2024-07-24 18:17:15.878054] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:22.966 [2024-07-24 18:17:15.878057] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:22.966 [2024-07-24 18:17:15.878060] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc47ec0) 00:22:22.966 [2024-07-24 18:17:15.878066] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.966 [2024-07-24 18:17:15.878076] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xccb2c0, cid 3, qid 0 00:22:22.966 [2024-07-24 18:17:15.878169] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:22.966 [2024-07-24 18:17:15.878175]
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:22.966 [2024-07-24 18:17:15.878178] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:22.966 [2024-07-24 18:17:15.878181] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xccb2c0) on tqpair=0xc47ec0 00:22:22.966 [2024-07-24 18:17:15.878186] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:22.966 [2024-07-24 18:17:15.878189] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:22.966 [2024-07-24 18:17:15.878192] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc47ec0) 00:22:22.966 [2024-07-24 18:17:15.878197] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.966 [2024-07-24 18:17:15.878208] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xccb2c0, cid 3, qid 0 00:22:22.966 [2024-07-24 18:17:15.878319] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:22.966 [2024-07-24 18:17:15.878324] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:22.966 [2024-07-24 18:17:15.878327] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:22.966 [2024-07-24 18:17:15.878330] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xccb2c0) on tqpair=0xc47ec0 00:22:22.966 [2024-07-24 18:17:15.878334] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:22:22.966 [2024-07-24 18:17:15.878337] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:22:22.966 [2024-07-24 18:17:15.878345] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:22.966 [2024-07-24 18:17:15.878348] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:22.966 [2024-07-24 18:17:15.878351] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc47ec0) 00:22:22.966 [2024-07-24 18:17:15.878357] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.966 [2024-07-24 18:17:15.878365] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xccb2c0, cid 3, qid 0 00:22:22.966 [2024-07-24 18:17:15.878429] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:22.966 [2024-07-24 18:17:15.878434] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:22.966 [2024-07-24 18:17:15.878437] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:22.966 [2024-07-24 18:17:15.878440] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xccb2c0) on tqpair=0xc47ec0 00:22:22.966 [2024-07-24 18:17:15.878448] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:22.966 [2024-07-24 18:17:15.878451] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:22.966 [2024-07-24 18:17:15.878455] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc47ec0) 00:22:22.966 [2024-07-24 18:17:15.878460] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.966 [2024-07-24 18:17:15.878469] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xccb2c0, cid 3, qid 0 00:22:22.966 [2024-07-24 18:17:15.878571] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:22.966 [2024-07-24 18:17:15.878577] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:22.966 [2024-07-24 18:17:15.878580] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:22.966 [2024-07-24 18:17:15.878583] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xccb2c0) on tqpair=0xc47ec0 00:22:22.966 [2024-07-24 18:17:15.878591] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:22.966 [2024-07-24 18:17:15.878594] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:22.966 [2024-07-24 18:17:15.878597] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc47ec0) 00:22:22.966 [2024-07-24 18:17:15.878602] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.966 [2024-07-24 18:17:15.878611] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xccb2c0, cid 3, qid 0 00:22:22.966 [2024-07-24 18:17:15.878724] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:22.966 [2024-07-24 18:17:15.878729] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:22.966 [2024-07-24 18:17:15.878732] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:22.966 [2024-07-24 18:17:15.878735] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xccb2c0) on tqpair=0xc47ec0 00:22:22.966 [2024-07-24 18:17:15.878742] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:22.966 [2024-07-24 18:17:15.878746] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:22.966 [2024-07-24 18:17:15.878748] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc47ec0) 00:22:22.966 [2024-07-24 18:17:15.878754] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.966 [2024-07-24 18:17:15.878763] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xccb2c0, cid 3, qid 0 00:22:22.966 [2024-07-24 18:17:15.878873] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:22.966 [2024-07-24 18:17:15.878879] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:22.966 [2024-07-24 18:17:15.878881] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:22.966 [2024-07-24 18:17:15.878884] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xccb2c0) on tqpair=0xc47ec0 00:22:22.966 [2024-07-24 18:17:15.878892] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:22.966 [2024-07-24 18:17:15.878895] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:22.966 [2024-07-24 18:17:15.878898] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc47ec0) 00:22:22.966 [2024-07-24 18:17:15.878903] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.966 [2024-07-24 18:17:15.878912] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xccb2c0, cid 3, qid 0 00:22:22.966 [2024-07-24 18:17:15.878974] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:22.967 [2024-07-24 18:17:15.878981] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:22.967 [2024-07-24 18:17:15.878984] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:22.967 [2024-07-24 18:17:15.878987] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xccb2c0) on tqpair=0xc47ec0 00:22:22.967 [2024-07-24 18:17:15.878994] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:22.967 [2024-07-24 18:17:15.878998] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:22.967 [2024-07-24 18:17:15.879001] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc47ec0) 00:22:22.967 [2024-07-24 18:17:15.879006] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.967 [2024-07-24 18:17:15.879015] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xccb2c0, cid 3, qid 0 00:22:22.967 [2024-07-24 18:17:15.879127] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:22.967 [2024-07-24 18:17:15.879132] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:22.967 [2024-07-24 18:17:15.879135] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:22.967 [2024-07-24 18:17:15.879138] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xccb2c0) on tqpair=0xc47ec0 00:22:22.967 [2024-07-24 18:17:15.879146] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:22.967 [2024-07-24 18:17:15.879149] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:22.967 [2024-07-24 18:17:15.879152] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc47ec0) 00:22:22.967 [2024-07-24 18:17:15.879157] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.967 [2024-07-24 18:17:15.879166] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xccb2c0, cid 3, qid 0 00:22:22.967 [2024-07-24 18:17:15.879278] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:22.967 [2024-07-24 18:17:15.879283] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:22.967 [2024-07-24 18:17:15.879286] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:22.967 [2024-07-24 18:17:15.879289] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xccb2c0) on tqpair=0xc47ec0 00:22:22.967 [2024-07-24 18:17:15.879297] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:22.967 [2024-07-24 18:17:15.879300] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:22.967 [2024-07-24 18:17:15.879303] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc47ec0) 00:22:22.967 [2024-07-24 18:17:15.879308] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.967 [2024-07-24 18:17:15.879317] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xccb2c0, cid 3, qid 0 00:22:22.967 [2024-07-24 18:17:15.879429] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:22.967 [2024-07-24 18:17:15.879434] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:22.967 [2024-07-24 18:17:15.879437] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:22.967 [2024-07-24 18:17:15.879440] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xccb2c0) on tqpair=0xc47ec0 00:22:22.967 
[2024-07-24 18:17:15.879448] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:22.967 [2024-07-24 18:17:15.879451] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:22.967 [2024-07-24 18:17:15.879454] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc47ec0) 00:22:22.967 [2024-07-24 18:17:15.879460] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.967 [2024-07-24 18:17:15.879468] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xccb2c0, cid 3, qid 0 00:22:22.967 [2024-07-24 18:17:15.879536] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:22.967 [2024-07-24 18:17:15.879542] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:22.967 [2024-07-24 18:17:15.879546] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:22.967 [2024-07-24 18:17:15.879549] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xccb2c0) on tqpair=0xc47ec0 00:22:22.967 [2024-07-24 18:17:15.879557] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:22.967 [2024-07-24 18:17:15.879560] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:22.967 [2024-07-24 18:17:15.879563] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc47ec0) 00:22:22.967 [2024-07-24 18:17:15.879568] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.967 [2024-07-24 18:17:15.879578] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xccb2c0, cid 3, qid 0 00:22:22.967 [2024-07-24 18:17:15.879680] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:22.967 [2024-07-24 18:17:15.879685] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:22.967 [2024-07-24 18:17:15.879688] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:22.967 [2024-07-24 18:17:15.879691] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xccb2c0) on tqpair=0xc47ec0 00:22:22.967 [2024-07-24 18:17:15.879699] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:22.967 [2024-07-24 18:17:15.879702] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:22.967 [2024-07-24 18:17:15.879705] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc47ec0) 00:22:22.967 [2024-07-24 18:17:15.879710] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.967 [2024-07-24 18:17:15.879719] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xccb2c0, cid 3, qid 0 00:22:22.967 [2024-07-24 18:17:15.879831] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:22.967 [2024-07-24 18:17:15.879836] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:22.967 [2024-07-24 18:17:15.879839] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:22.967 [2024-07-24 18:17:15.879842] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xccb2c0) on tqpair=0xc47ec0 00:22:22.967 [2024-07-24 18:17:15.879849] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:22.967 [2024-07-24 18:17:15.879853] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:22.967 [2024-07-24 
18:17:15.879856] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc47ec0) 00:22:22.967 [2024-07-24 18:17:15.879861] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.967 [2024-07-24 18:17:15.879870] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xccb2c0, cid 3, qid 0 00:22:22.967 [2024-07-24 18:17:15.879982] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:22.967 [2024-07-24 18:17:15.879987] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:22.967 [2024-07-24 18:17:15.879990] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:22.967 [2024-07-24 18:17:15.879993] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xccb2c0) on tqpair=0xc47ec0 00:22:22.967 [2024-07-24 18:17:15.880000] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:22.967 [2024-07-24 18:17:15.880004] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:22.967 [2024-07-24 18:17:15.880006] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc47ec0) 00:22:22.967 [2024-07-24 18:17:15.880012] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.967 [2024-07-24 18:17:15.880021] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xccb2c0, cid 3, qid 0 00:22:22.967 [2024-07-24 18:17:15.880087] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:22.967 [2024-07-24 18:17:15.880092] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:22.967 [2024-07-24 18:17:15.880095] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:22.967 [2024-07-24 18:17:15.880100] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xccb2c0) on tqpair=0xc47ec0 00:22:22.967 [2024-07-24 18:17:15.880107] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:22.967 [2024-07-24 18:17:15.880111] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:22.967 [2024-07-24 18:17:15.880114] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc47ec0) 00:22:22.967 [2024-07-24 18:17:15.880119] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.967 [2024-07-24 18:17:15.880128] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xccb2c0, cid 3, qid 0 00:22:22.967 [2024-07-24 18:17:15.880235] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:22.967 [2024-07-24 18:17:15.880240] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:22.967 [2024-07-24 18:17:15.880243] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:22.967 [2024-07-24 18:17:15.880246] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xccb2c0) on tqpair=0xc47ec0 00:22:22.967 [2024-07-24 18:17:15.880254] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:22.967 [2024-07-24 18:17:15.880257] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:22.967 [2024-07-24 18:17:15.880260] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc47ec0) 00:22:22.967 [2024-07-24 18:17:15.880265] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.967 [2024-07-24 18:17:15.880274] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xccb2c0, cid 3, qid 0 00:22:22.967 [2024-07-24 18:17:15.880386] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:22.967 [2024-07-24 18:17:15.880391] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:22.967 [2024-07-24 18:17:15.880394] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:22.967 [2024-07-24 18:17:15.880397] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xccb2c0) on tqpair=0xc47ec0 00:22:22.967 [2024-07-24 18:17:15.880405] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:22.967 [2024-07-24 18:17:15.880408] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:22.967 [2024-07-24 18:17:15.880411] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc47ec0) 00:22:22.967 [2024-07-24 18:17:15.880416] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.967 [2024-07-24 18:17:15.880425] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xccb2c0, cid 3, qid 0 00:22:22.967 [2024-07-24 18:17:15.880537] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:22.967 [2024-07-24 18:17:15.880543] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:22.967 [2024-07-24 18:17:15.880546] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:22.967 [2024-07-24 18:17:15.880549] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xccb2c0) on tqpair=0xc47ec0 00:22:22.967 [2024-07-24 18:17:15.880556] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:22.967 [2024-07-24 18:17:15.880560] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:22.967 [2024-07-24 18:17:15.880563] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc47ec0) 00:22:22.967 [2024-07-24 18:17:15.880568] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.968 [2024-07-24 18:17:15.880577] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xccb2c0, cid 3, qid 0 00:22:22.968 [2024-07-24 18:17:15.880642] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:22.968 [2024-07-24 18:17:15.880647] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:22.968 [2024-07-24 18:17:15.880650] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:22.968 [2024-07-24 18:17:15.880653] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xccb2c0) on tqpair=0xc47ec0 00:22:22.968 [2024-07-24 18:17:15.880663] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:22.968 [2024-07-24 18:17:15.880666] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:22.968 [2024-07-24 18:17:15.880669] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc47ec0) 00:22:22.968 [2024-07-24 18:17:15.880674] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.968 [2024-07-24 18:17:15.880683] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xccb2c0, cid 3, qid 0 00:22:22.968 [2024-07-24 
18:17:15.884496] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:22.968 [2024-07-24 18:17:15.884504] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:22.968 [2024-07-24 18:17:15.884507] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:22.968 [2024-07-24 18:17:15.884510] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xccb2c0) on tqpair=0xc47ec0 00:22:22.968 [2024-07-24 18:17:15.884519] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:22.968 [2024-07-24 18:17:15.884522] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:22.968 [2024-07-24 18:17:15.884525] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc47ec0) 00:22:22.968 [2024-07-24 18:17:15.884531] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:22.968 [2024-07-24 18:17:15.884542] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xccb2c0, cid 3, qid 0 00:22:22.968 [2024-07-24 18:17:15.884697] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:22.968 [2024-07-24 18:17:15.884703] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:22.968 [2024-07-24 18:17:15.884705] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:22.968 [2024-07-24 18:17:15.884709] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xccb2c0) on tqpair=0xc47ec0 00:22:22.968 [2024-07-24 18:17:15.884715] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:22:22.968 Data Units Read: 0 00:22:22.968 Data Units Written: 0 00:22:22.968 Host Read Commands: 0 00:22:22.968 Host Write Commands: 0 00:22:22.968 Controller Busy Time: 0 minutes 00:22:22.968 Power Cycles: 0 00:22:22.968 Power On Hours: 0 hours 00:22:22.968 Unsafe Shutdowns: 0 00:22:22.968 Unrecoverable Media Errors: 0 00:22:22.968 Lifetime Error Log Entries: 0 00:22:22.968 Warning Temperature Time: 0 minutes 00:22:22.968 Critical Temperature Time: 0 minutes 00:22:22.968 00:22:22.968 Number of Queues 00:22:22.968 ================ 00:22:22.968 Number of I/O Submission Queues: 127 00:22:22.968 Number of I/O Completion Queues: 127 00:22:22.968 00:22:22.968 Active Namespaces 00:22:22.968 ================= 00:22:22.968 Namespace ID:1 00:22:22.968 Error Recovery Timeout: Unlimited 00:22:22.968 Command Set Identifier: NVM (00h) 00:22:22.968 Deallocate: Supported 00:22:22.968 Deallocated/Unwritten Error: Not Supported 00:22:22.968 Deallocated Read Value: Unknown 00:22:22.968 Deallocate in Write Zeroes: Not Supported 00:22:22.968 Deallocated Guard Field: 0xFFFF 00:22:22.968 Flush: Supported 00:22:22.968 Reservation: Supported 00:22:22.968 Namespace Sharing Capabilities: Multiple Controllers 00:22:22.968 Size (in LBAs): 131072 (0GiB) 00:22:22.968 Capacity (in LBAs): 131072 (0GiB) 00:22:22.968 Utilization (in LBAs): 131072 (0GiB) 00:22:22.968 NGUID: ABCDEF0123456789ABCDEF0123456789 00:22:22.968 EUI64: ABCDEF0123456789 00:22:22.968 UUID: d9e616b7-28c7-4761-89d6-48253af49446 00:22:22.968 Thin Provisioning: Not Supported 00:22:22.968 Per-NS Atomic Units: Yes 00:22:22.968 Atomic Boundary Size (Normal): 0 00:22:22.968 Atomic Boundary Size (PFail): 0 00:22:22.968 Atomic Boundary Offset: 0 00:22:22.968 Maximum Single Source Range Length: 65535 00:22:22.968 Maximum Copy Length: 65535 00:22:22.968 Maximum Source
Range Count: 1 00:22:22.968 NGUID/EUI64 Never Reused: No 00:22:22.968 Namespace Write Protected: No 00:22:22.968 Number of LBA Formats: 1 00:22:22.968 Current LBA Format: LBA Format #00 00:22:22.968 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:22.968 00:22:22.968 18:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:22:22.968 18:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:22.968 18:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:22.968 18:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:22.968 18:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:22.968 18:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:22:22.968 18:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:22:22.968 18:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:22.968 18:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:22:22.968 18:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:22.968 18:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:22:22.968 18:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:22.968 18:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:22.968 rmmod nvme_tcp 00:22:22.968 rmmod nvme_fabrics 00:22:22.968 rmmod nvme_keyring 00:22:22.968 18:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:22.968 18:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:22:22.968 18:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:22:22.968 18:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 3484683 ']' 00:22:22.968 18:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 3484683 00:22:22.968 18:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 3484683 ']' 00:22:22.968 18:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 3484683 00:22:22.968 18:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:22:22.968 18:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:22.968 18:17:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3484683 00:22:22.968 18:17:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:22.968 18:17:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:22.968 18:17:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3484683' 00:22:22.968 killing process with pid 3484683 00:22:22.968 18:17:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 3484683 00:22:22.968 18:17:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 3484683 00:22:23.226 18:17:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:23.226 18:17:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p 
]] 00:22:23.226 18:17:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:23.226 18:17:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:23.226 18:17:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:23.226 18:17:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:23.226 18:17:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:23.226 18:17:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:25.751 18:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:25.751 00:22:25.751 real 0m9.132s 00:22:25.751 user 0m7.259s 00:22:25.751 sys 0m4.336s 00:22:25.751 18:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:25.751 18:17:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:25.751 ************************************ 00:22:25.751 END TEST nvmf_identify 00:22:25.751 ************************************ 00:22:25.751 18:17:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:25.751 18:17:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:25.751 18:17:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:25.751 18:17:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:25.751 ************************************ 00:22:25.751 START TEST nvmf_perf 00:22:25.751 ************************************ 00:22:25.751 18:17:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:25.751 * Looking for test storage... 
00:22:25.751 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:25.751 18:17:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:25.751 18:17:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:22:25.751 18:17:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:25.751 18:17:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:25.751 18:17:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:25.751 18:17:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:25.751 18:17:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:25.751 18:17:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:25.751 18:17:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:25.751 18:17:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:25.751 18:17:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:25.751 18:17:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:25.751 18:17:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:22:25.751 18:17:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:22:25.751 18:17:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:25.751 18:17:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:25.751 18:17:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:25.751 18:17:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:25.752 18:17:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:25.752 18:17:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:25.752 18:17:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:25.752 18:17:18 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:25.752 18:17:18 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.752 18:17:18 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.752 18:17:18 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.752 18:17:18 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:22:25.752 18:17:18 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:25.752 18:17:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:22:25.752 18:17:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:25.752 18:17:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:25.752 18:17:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:25.752 18:17:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:25.752 18:17:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:25.752 18:17:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:25.752 18:17:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:25.752 18:17:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:25.752 18:17:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:25.752 18:17:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:25.752 18:17:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:25.752 18:17:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:22:25.752 18:17:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:25.752 18:17:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
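Aside: the connection parameters established by nvmf/common.sh above (port 4420, the NVME_HOSTNQN/NVME_HOSTID pair from nvme gen-hostnqn) are the same values a kernel initiator would hand to nvme-cli. A minimal illustrative sketch, assuming the 10.0.0.2 listener and the nqn.2016-06.io.spdk:cnode1 subsystem that this run creates further down; note this particular test drives I/O with SPDK's userspace initiator (spdk_nvme_perf) rather than a kernel connect:

  nvme connect -t tcp -a 10.0.0.2 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 \
      --hostid=803833e2-2ada-e911-906e-0017a4403562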
00:22:25.752 18:17:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:25.752 18:17:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:25.752 18:17:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:25.752 18:17:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:25.752 18:17:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:25.752 18:17:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:25.752 18:17:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:25.752 18:17:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:25.752 18:17:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:22:25.752 18:17:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:31.011 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:31.011 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:22:31.011 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:31.011 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:31.011 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:31.011 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:31.011 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:31.011 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:22:31.011 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:31.011 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:22:31.011 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:22:31.011 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:22:31.011 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:22:31.011 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:22:31.011 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:22:31.011 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:31.011 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:31.011 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:31.011 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:31.011 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:31.011 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:31.011 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:31.011 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:31.011 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:31.011 
18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:31.011 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:31.011 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:31.011 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:31.011 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:31.011 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:31.011 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:31.011 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:31.011 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:31.011 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:31.011 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:31.011 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:31.011 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:31.011 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:31.011 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:31.011 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:31.011 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:31.011 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:31.011 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:31.011 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:31.011 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:31.011 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:31.011 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:31.011 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:31.011 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:31.011 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:31.011 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:31.011 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:31.011 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:31.011 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:31.011 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:31.011 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:31.011 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:31.011 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:31.011 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 
00:22:31.011 Found net devices under 0000:86:00.0: cvl_0_0 00:22:31.011 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:31.011 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:31.011 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:31.011 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:31.011 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:31.011 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:31.011 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:31.011 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:31.012 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:31.012 Found net devices under 0000:86:00.1: cvl_0_1 00:22:31.012 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:31.012 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:31.012 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:22:31.012 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:31.012 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:31.012 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:31.012 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:31.012 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:31.012 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:31.012 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:31.012 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:31.012 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:31.012 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:31.012 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:31.012 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:31.012 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:31.012 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:31.012 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:31.012 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:31.012 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:31.012 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:31.012 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:31.012 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:31.012 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:31.012 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:31.012 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:31.012 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:31.012 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.169 ms 00:22:31.012 00:22:31.012 --- 10.0.0.2 ping statistics --- 00:22:31.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:31.012 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:22:31.012 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:31.012 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:31.012 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:22:31.012 00:22:31.012 --- 10.0.0.1 ping statistics --- 00:22:31.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:31.012 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:22:31.012 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:31.012 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:22:31.012 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:31.012 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:31.012 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:31.012 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:31.012 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:31.012 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:31.012 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:31.012 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:22:31.012 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:31.012 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:31.012 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:31.012 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=3488390 00:22:31.012 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 3488390 00:22:31.012 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:31.012 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 3488390 ']' 00:22:31.012 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:31.012 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:31.012 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:31.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
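Condensed from the nvmf_tcp_init trace above (nvmf/common.sh@229-268): the first E810 port, cvl_0_0, is moved into a network namespace and serves as the target side at 10.0.0.2, while cvl_0_1 stays in the default namespace as the initiator at 10.0.0.1. A sketch of the equivalent manual setup, using the interface names enumerated in this run:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # reachability check, initiator side
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # reachability check, target side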
00:22:31.012 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:31.012 18:17:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:31.012 [2024-07-24 18:17:23.969740] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 00:22:31.012 [2024-07-24 18:17:23.969782] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:31.012 EAL: No free 2048 kB hugepages reported on node 1 00:22:31.012 [2024-07-24 18:17:24.028452] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:31.270 [2024-07-24 18:17:24.109651] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:31.270 [2024-07-24 18:17:24.109684] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:31.270 [2024-07-24 18:17:24.109691] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:31.270 [2024-07-24 18:17:24.109700] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:31.270 [2024-07-24 18:17:24.109705] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:31.270 [2024-07-24 18:17:24.109740] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:31.270 [2024-07-24 18:17:24.109837] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:31.270 [2024-07-24 18:17:24.109929] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:31.270 [2024-07-24 18:17:24.109930] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:31.834 18:17:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:31.834 18:17:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:22:31.834 18:17:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:31.834 18:17:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:31.834 18:17:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:31.834 18:17:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:31.834 18:17:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:22:31.834 18:17:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:22:35.170 18:17:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:22:35.171 18:17:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:22:35.171 18:17:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5f:00.0 00:22:35.171 18:17:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:35.171 18:17:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:22:35.171 18:17:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 
-- # '[' -n 0000:5f:00.0 ']' 00:22:35.171 18:17:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:22:35.171 18:17:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:22:35.171 18:17:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:35.427 [2024-07-24 18:17:28.388268] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:35.427 18:17:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:35.684 18:17:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:35.684 18:17:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:35.941 18:17:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:35.941 18:17:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:22:35.941 18:17:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:36.198 [2024-07-24 18:17:29.101009] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:36.198 18:17:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:36.455 18:17:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5f:00.0 ']' 00:22:36.455 18:17:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5f:00.0' 00:22:36.455 18:17:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:22:36.455 18:17:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5f:00.0' 00:22:37.827 Initializing NVMe Controllers 00:22:37.827 Attached to NVMe Controller at 0000:5f:00.0 [8086:0a54] 00:22:37.827 Associating PCIE (0000:5f:00.0) NSID 1 with lcore 0 00:22:37.827 Initialization complete. Launching workers. 
00:22:37.827 ========================================================
00:22:37.827 Latency(us)
00:22:37.827 Device Information : IOPS MiB/s Average min max
00:22:37.827 PCIE (0000:5f:00.0) NSID 1 from core 0: 99742.74 389.62 320.35 34.67 4395.76
00:22:37.827 ========================================================
00:22:37.827 Total : 99742.74 389.62 320.35 34.67 4395.76
00:22:37.828
00:22:37.828 18:17:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:22:37.828 EAL: No free 2048 kB hugepages reported on node 1
00:22:38.760 Initializing NVMe Controllers
00:22:38.760 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:22:38.760 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:22:38.760 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:22:38.760 Initialization complete. Launching workers.
00:22:38.760 ========================================================
00:22:38.760 Latency(us)
00:22:38.760 Device Information : IOPS MiB/s Average min max
00:22:38.760 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 105.63 0.41 9704.43 118.25 44700.00
00:22:38.760 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 44.84 0.18 23009.41 7961.20 47894.50
00:22:38.760 ========================================================
00:22:38.760 Total : 150.47 0.59 13669.49 118.25 47894.50
00:22:38.760
00:22:38.760 18:17:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:22:38.760 EAL: No free 2048 kB hugepages reported on node 1
00:22:40.132 Initializing NVMe Controllers
00:22:40.132 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:22:40.132 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:22:40.132 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:22:40.132 Initialization complete. Launching workers.
00:22:40.132 ========================================================
00:22:40.132 Latency(us)
00:22:40.132 Device Information : IOPS MiB/s Average min max
00:22:40.132 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11354.00 44.35 2820.53 446.07 6190.40
00:22:40.132 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3884.00 15.17 8274.82 6693.52 15820.70
00:22:40.132 ========================================================
00:22:40.132 Total : 15238.00 59.52 4210.77 446.07 15820.70
00:22:40.132
00:22:40.132 18:17:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]]
00:22:40.132 18:17:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]]
00:22:40.132 18:17:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:22:40.132 EAL: No free 2048 kB hugepages reported on node 1
00:22:42.659 Initializing NVMe Controllers
00:22:42.659 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:22:42.659 Controller IO queue size 128, less than required.
00:22:42.659 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:42.659 Controller IO queue size 128, less than required.
00:22:42.659 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:42.659 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:22:42.659 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:22:42.659 Initialization complete. Launching workers.
00:22:42.659 ========================================================
00:22:42.659 Latency(us)
00:22:42.659 Device Information : IOPS MiB/s Average min max
00:22:42.659 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1746.98 436.74 74375.17 47085.80 119152.08
00:22:42.659 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 582.49 145.62 224330.15 56334.71 358588.82
00:22:42.659 ========================================================
00:22:42.659 Total : 2329.47 582.37 111871.96 47085.80 358588.82
00:22:42.659
00:22:42.659 18:17:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4
00:22:42.659 EAL: No free 2048 kB hugepages reported on node 1
00:22:42.659 No valid NVMe controllers or AIO or URING devices found
00:22:42.659 Initializing NVMe Controllers
00:22:42.659 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:22:42.659 Controller IO queue size 128, less than required.
00:22:42.659 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:42.659 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test
00:22:42.659 Controller IO queue size 128, less than required.
00:22:42.659 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:42.660 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test
00:22:42.660 WARNING: Some requested NVMe devices were skipped
00:22:42.660 18:17:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat
00:22:42.917 EAL: No free 2048 kB hugepages reported on node 1
00:22:45.445 Initializing NVMe Controllers
00:22:45.445 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:22:45.445 Controller IO queue size 128, less than required.
00:22:45.445 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:45.445 Controller IO queue size 128, less than required.
00:22:45.445 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:22:45.445 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:22:45.445 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:22:45.445 Initialization complete. Launching workers.
00:22:45.445
00:22:45.445 ====================
00:22:45.445 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics:
00:22:45.445 TCP transport:
00:22:45.445 polls: 18382
00:22:45.445 idle_polls: 13289
00:22:45.445 sock_completions: 5093
00:22:45.445 nvme_completions: 6413
00:22:45.445 submitted_requests: 9696
00:22:45.445 queued_requests: 1
00:22:45.445
00:22:45.445 ====================
00:22:45.445 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics:
00:22:45.445 TCP transport:
00:22:45.445 polls: 14611
00:22:45.445 idle_polls: 7900
00:22:45.445 sock_completions: 6711
00:22:45.445 nvme_completions: 7155
00:22:45.445 submitted_requests: 10628
00:22:45.445 queued_requests: 1
00:22:45.445 ========================================================
00:22:45.445 Latency(us)
00:22:45.445 Device Information : IOPS MiB/s Average min max
00:22:45.445 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1602.97 400.74 81919.88 48081.87 128383.00
00:22:45.445 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1788.46 447.12 72585.39 37048.06 104377.64
00:22:45.445 ========================================================
00:22:45.445 Total : 3391.43 847.86 76997.36 37048.06 128383.00
00:22:45.445
00:22:45.445 18:17:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync
00:22:45.703 18:17:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:22:45.703 18:17:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']'
00:22:45.703 18:17:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT
00:22:45.703 18:17:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini
00:22:45.703 18:17:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup
00:22:45.703 18:17:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # sync
00:22:45.703 18:17:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:22:45.703 18:17:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@120 -- # set +e
00:22:45.703 18:17:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20}
00:22:45.703 18:17:38 nvmf_tcp.nvmf_host.nvmf_perf --
nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:45.703 rmmod nvme_tcp 00:22:45.703 rmmod nvme_fabrics 00:22:45.703 rmmod nvme_keyring 00:22:45.703 18:17:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:45.703 18:17:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:22:45.703 18:17:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:22:45.703 18:17:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 3488390 ']' 00:22:45.703 18:17:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 3488390 00:22:45.703 18:17:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 3488390 ']' 00:22:45.703 18:17:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 3488390 00:22:45.703 18:17:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:22:45.703 18:17:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:45.703 18:17:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3488390 00:22:45.703 18:17:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:45.703 18:17:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:45.703 18:17:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3488390' 00:22:45.703 killing process with pid 3488390 00:22:45.703 18:17:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 3488390 00:22:45.703 18:17:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 3488390 00:22:47.603 18:17:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:47.603 18:17:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:47.603 18:17:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:47.603 18:17:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:47.603 18:17:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:47.603 18:17:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:47.603 18:17:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:47.603 18:17:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:50.135 18:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:50.135 00:22:50.135 real 0m24.385s 00:22:50.135 user 1m5.945s 00:22:50.135 sys 0m7.449s 00:22:50.135 18:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:50.135 18:17:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:50.135 ************************************ 00:22:50.135 END TEST nvmf_perf 00:22:50.135 ************************************ 00:22:50.135 18:17:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:50.135 18:17:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:50.135 18:17:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:50.135 18:17:42 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@10 -- # set +x 00:22:50.135 ************************************ 00:22:50.135 START TEST nvmf_fio_host 00:22:50.135 ************************************ 00:22:50.135 18:17:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:50.135 * Looking for test storage... 00:22:50.135 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:50.135 18:17:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:50.135 18:17:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:50.135 18:17:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:50.135 18:17:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:50.135 18:17:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.135 18:17:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.135 18:17:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.135 18:17:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:50.135 18:17:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.135 18:17:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:50.135 18:17:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:22:50.135 18:17:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:50.135 18:17:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:50.135 18:17:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:50.135 18:17:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:50.135 18:17:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:50.135 18:17:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:50.135 18:17:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:50.135 18:17:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:50.135 18:17:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:50.135 18:17:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:50.135 18:17:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:22:50.135 18:17:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:22:50.135 18:17:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:50.135 18:17:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:50.135 18:17:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:50.135 18:17:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:50.135 18:17:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:50.135 18:17:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:50.135 18:17:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:50.135 18:17:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:50.135 18:17:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.135 18:17:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.135 18:17:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.135 18:17:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:50.135 18:17:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.135 18:17:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:22:50.135 18:17:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:50.135 18:17:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:50.135 18:17:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:50.135 18:17:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:50.135 18:17:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:22:50.135 18:17:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:50.135 18:17:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:50.135 18:17:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:50.135 18:17:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:50.135 18:17:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:22:50.135 18:17:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:50.135 18:17:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:50.135 18:17:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:50.135 18:17:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:50.135 18:17:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:50.135 18:17:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:50.135 18:17:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:50.135 18:17:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:50.135 18:17:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:50.135 18:17:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:50.135 18:17:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:22:50.135 18:17:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:55.417 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:55.417 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:55.417 18:17:48 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:55.417 Found net devices under 0000:86:00.0: cvl_0_0 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:55.417 Found net devices under 0000:86:00.1: cvl_0_1 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:55.417 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:55.417 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.149 ms 00:22:55.417 00:22:55.417 --- 10.0.0.2 ping statistics --- 00:22:55.417 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:55.417 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:55.417 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:55.417 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.052 ms 00:22:55.417 00:22:55.417 --- 10.0.0.1 ping statistics --- 00:22:55.417 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:55.417 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=3494542 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 3494542 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 3494542 ']' 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:55.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:55.417 18:17:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:55.417 [2024-07-24 18:17:48.439330] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 
00:22:55.417 [2024-07-24 18:17:48.439371] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:55.417 EAL: No free 2048 kB hugepages reported on node 1 00:22:55.417 [2024-07-24 18:17:48.495616] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:55.675 [2024-07-24 18:17:48.574475] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:55.675 [2024-07-24 18:17:48.574518] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:55.675 [2024-07-24 18:17:48.574524] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:55.675 [2024-07-24 18:17:48.574530] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:55.675 [2024-07-24 18:17:48.574534] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:55.675 [2024-07-24 18:17:48.574602] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:55.675 [2024-07-24 18:17:48.574695] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:55.675 [2024-07-24 18:17:48.574784] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:55.675 [2024-07-24 18:17:48.574785] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:56.238 18:17:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:56.238 18:17:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:22:56.238 18:17:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:56.496 [2024-07-24 18:17:49.404098] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:56.496 18:17:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:22:56.496 18:17:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:56.496 18:17:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:56.496 18:17:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:22:56.753 Malloc1 00:22:56.753 18:17:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:57.011 18:17:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:57.011 18:17:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:57.268 [2024-07-24 18:17:50.170171] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:57.268 18:17:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:57.526 
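For reference: stripped of the xtrace prefixes, the target bring-up traced above comes down to this rpc.py sequence (a condensed sketch; the full workspace path to rpc.py is shortened, all values as in this run):

    # TCP transport with the harness options (-u 8192 = in-capsule data size)
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    # 64 MB malloc bdev with 512-byte blocks to back the namespace
    rpc.py bdev_malloc_create 64 512 -b Malloc1
    # Subsystem (-a allow any host, -s serial number), namespace, listeners
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

fio is then pointed at that listener through the SPDK NVMe plugin, with the transport address encoded in --filename (again with the workspace paths shortened):

    LD_PRELOAD=spdk/build/fio/spdk_nvme /usr/src/fio/fio \
        spdk/app/fio/nvme/example_config.fio \
        '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096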
18:17:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:22:57.526 18:17:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:57.526 18:17:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:57.526 18:17:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:22:57.526 18:17:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:57.526 18:17:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:22:57.526 18:17:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:57.526 18:17:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:22:57.526 18:17:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:22:57.526 18:17:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:57.526 18:17:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:57.526 18:17:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:22:57.526 18:17:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:57.526 18:17:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:57.526 18:17:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:57.526 18:17:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:22:57.526 18:17:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:57.526 18:17:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:22:57.526 18:17:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:22:57.526 18:17:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:22:57.526 18:17:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:22:57.526 18:17:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:57.526 18:17:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:57.795 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:22:57.795 fio-3.35 00:22:57.795 Starting 
1 thread 00:22:57.795 EAL: No free 2048 kB hugepages reported on node 1 00:23:00.322 00:23:00.322 test: (groupid=0, jobs=1): err= 0: pid=3495059: Wed Jul 24 18:17:53 2024 00:23:00.322 read: IOPS=12.1k, BW=47.4MiB/s (49.7MB/s)(94.9MiB/2005msec) 00:23:00.322 slat (nsec): min=1564, max=274216, avg=1754.07, stdev=2373.10 00:23:00.322 clat (usec): min=3162, max=10633, avg=5830.73, stdev=434.11 00:23:00.322 lat (usec): min=3197, max=10635, avg=5832.49, stdev=434.01 00:23:00.322 clat percentiles (usec): 00:23:00.322 | 1.00th=[ 4752], 5.00th=[ 5145], 10.00th=[ 5276], 20.00th=[ 5473], 00:23:00.322 | 30.00th=[ 5604], 40.00th=[ 5735], 50.00th=[ 5866], 60.00th=[ 5932], 00:23:00.322 | 70.00th=[ 6063], 80.00th=[ 6194], 90.00th=[ 6325], 95.00th=[ 6521], 00:23:00.322 | 99.00th=[ 6783], 99.50th=[ 6915], 99.90th=[ 7963], 99.95th=[ 8717], 00:23:00.322 | 99.99th=[10159] 00:23:00.322 bw ( KiB/s): min=47392, max=49120, per=99.95%, avg=48468.00, stdev=807.37, samples=4 00:23:00.322 iops : min=11848, max=12280, avg=12117.00, stdev=201.84, samples=4 00:23:00.322 write: IOPS=12.1k, BW=47.2MiB/s (49.5MB/s)(94.6MiB/2005msec); 0 zone resets 00:23:00.322 slat (nsec): min=1623, max=254348, avg=1841.05, stdev=1813.06 00:23:00.322 clat (usec): min=2539, max=8642, avg=4704.99, stdev=359.30 00:23:00.322 lat (usec): min=2555, max=8644, avg=4706.83, stdev=359.31 00:23:00.322 clat percentiles (usec): 00:23:00.322 | 1.00th=[ 3851], 5.00th=[ 4146], 10.00th=[ 4293], 20.00th=[ 4424], 00:23:00.322 | 30.00th=[ 4555], 40.00th=[ 4621], 50.00th=[ 4686], 60.00th=[ 4817], 00:23:00.322 | 70.00th=[ 4883], 80.00th=[ 5014], 90.00th=[ 5145], 95.00th=[ 5276], 00:23:00.322 | 99.00th=[ 5538], 99.50th=[ 5604], 99.90th=[ 7111], 99.95th=[ 7832], 00:23:00.322 | 99.99th=[ 8586] 00:23:00.322 bw ( KiB/s): min=48064, max=48896, per=100.00%, avg=48304.00, stdev=395.82, samples=4 00:23:00.322 iops : min=12016, max=12224, avg=12076.00, stdev=98.95, samples=4 00:23:00.322 lat (msec) : 4=1.16%, 10=98.83%, 20=0.01% 00:23:00.322 cpu : usr=72.70%, sys=25.60%, ctx=157, majf=0, minf=5 00:23:00.322 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:23:00.322 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:00.322 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:00.322 issued rwts: total=24306,24207,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:00.322 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:00.322 00:23:00.322 Run status group 0 (all jobs): 00:23:00.322 READ: bw=47.4MiB/s (49.7MB/s), 47.4MiB/s-47.4MiB/s (49.7MB/s-49.7MB/s), io=94.9MiB (99.6MB), run=2005-2005msec 00:23:00.322 WRITE: bw=47.2MiB/s (49.5MB/s), 47.2MiB/s-47.2MiB/s (49.5MB/s-49.5MB/s), io=94.6MiB (99.2MB), run=2005-2005msec 00:23:00.322 18:17:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:00.322 18:17:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:00.322 18:17:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:00.322 18:17:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:23:00.322 18:17:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:00.322 18:17:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:00.322 18:17:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:23:00.322 18:17:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:00.322 18:17:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:00.322 18:17:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:00.322 18:17:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:23:00.322 18:17:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:00.322 18:17:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:00.322 18:17:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:00.322 18:17:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:00.322 18:17:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:00.322 18:17:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:23:00.322 18:17:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:00.322 18:17:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:00.322 18:17:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:00.322 18:17:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:23:00.322 18:17:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:00.322 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:23:00.322 fio-3.35 00:23:00.322 Starting 1 thread 00:23:00.322 EAL: No free 2048 kB hugepages reported on node 1 00:23:02.854 00:23:02.854 test: (groupid=0, jobs=1): err= 0: pid=3495521: Wed Jul 24 18:17:55 2024 00:23:02.854 read: IOPS=11.0k, BW=171MiB/s (180MB/s)(344MiB/2008msec) 00:23:02.854 slat (nsec): min=2558, max=92123, avg=2870.20, stdev=1284.84 00:23:02.854 clat (usec): min=1607, max=12632, avg=6677.08, stdev=1653.03 00:23:02.854 lat (usec): min=1610, max=12635, avg=6679.95, stdev=1653.13 00:23:02.854 clat percentiles (usec): 00:23:02.854 | 1.00th=[ 3556], 5.00th=[ 4228], 10.00th=[ 4686], 20.00th=[ 5276], 00:23:02.854 | 30.00th=[ 5669], 40.00th=[ 6063], 50.00th=[ 6521], 60.00th=[ 7046], 00:23:02.854 | 70.00th=[ 7504], 80.00th=[ 7963], 90.00th=[ 8848], 95.00th=[ 9634], 00:23:02.854 | 99.00th=[11076], 99.50th=[11469], 99.90th=[11994], 99.95th=[12256], 00:23:02.854 | 99.99th=[12518] 00:23:02.854 bw ( KiB/s): min=84576, max=96864, per=52.14%, avg=91496.00, 
stdev=5756.82, samples=4 00:23:02.854 iops : min= 5286, max= 6054, avg=5718.50, stdev=359.80, samples=4 00:23:02.854 write: IOPS=6433, BW=101MiB/s (105MB/s)(186MiB/1851msec); 0 zone resets 00:23:02.854 slat (usec): min=27, max=384, avg=31.86, stdev= 7.06 00:23:02.854 clat (usec): min=4170, max=14245, avg=8436.30, stdev=1435.10 00:23:02.854 lat (usec): min=4204, max=14357, avg=8468.17, stdev=1436.36 00:23:02.854 clat percentiles (usec): 00:23:02.854 | 1.00th=[ 5800], 5.00th=[ 6325], 10.00th=[ 6718], 20.00th=[ 7242], 00:23:02.854 | 30.00th=[ 7570], 40.00th=[ 7963], 50.00th=[ 8291], 60.00th=[ 8717], 00:23:02.854 | 70.00th=[ 8979], 80.00th=[ 9503], 90.00th=[10421], 95.00th=[11076], 00:23:02.854 | 99.00th=[12518], 99.50th=[12911], 99.90th=[13829], 99.95th=[13960], 00:23:02.854 | 99.99th=[14222] 00:23:02.854 bw ( KiB/s): min=88384, max=100032, per=92.21%, avg=94920.00, stdev=5691.71, samples=4 00:23:02.854 iops : min= 5524, max= 6252, avg=5932.50, stdev=355.73, samples=4 00:23:02.854 lat (msec) : 2=0.04%, 4=1.87%, 10=90.55%, 20=7.54% 00:23:02.854 cpu : usr=85.20%, sys=13.60%, ctx=114, majf=0, minf=2 00:23:02.854 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:23:02.854 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:02.854 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:02.854 issued rwts: total=22021,11909,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:02.854 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:02.854 00:23:02.854 Run status group 0 (all jobs): 00:23:02.854 READ: bw=171MiB/s (180MB/s), 171MiB/s-171MiB/s (180MB/s-180MB/s), io=344MiB (361MB), run=2008-2008msec 00:23:02.854 WRITE: bw=101MiB/s (105MB/s), 101MiB/s-101MiB/s (105MB/s-105MB/s), io=186MiB (195MB), run=1851-1851msec 00:23:02.854 18:17:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:02.854 18:17:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:23:02.854 18:17:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:23:02.854 18:17:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:23:02.854 18:17:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:23:02.854 18:17:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:02.854 18:17:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:23:02.854 18:17:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:02.854 18:17:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:23:02.854 18:17:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:02.854 18:17:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:02.854 rmmod nvme_tcp 00:23:02.854 rmmod nvme_fabrics 00:23:02.854 rmmod nvme_keyring 00:23:02.854 18:17:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:02.854 18:17:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:23:02.854 18:17:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:23:02.854 18:17:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 3494542 ']' 00:23:02.854 18:17:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # 
killprocess 3494542 00:23:02.854 18:17:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 3494542 ']' 00:23:02.854 18:17:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 3494542 00:23:03.113 18:17:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:23:03.113 18:17:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:03.113 18:17:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3494542 00:23:03.113 18:17:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:03.113 18:17:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:03.113 18:17:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3494542' 00:23:03.113 killing process with pid 3494542 00:23:03.113 18:17:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 3494542 00:23:03.113 18:17:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 3494542 00:23:03.113 18:17:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:03.113 18:17:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:03.113 18:17:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:03.113 18:17:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:03.113 18:17:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:03.113 18:17:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:03.113 18:17:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:03.113 18:17:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:05.646 18:17:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:05.646 00:23:05.646 real 0m15.448s 00:23:05.646 user 0m46.620s 00:23:05.646 sys 0m5.969s 00:23:05.646 18:17:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:05.646 18:17:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.647 ************************************ 00:23:05.647 END TEST nvmf_fio_host 00:23:05.647 ************************************ 00:23:05.647 18:17:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:23:05.647 18:17:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:05.647 18:17:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:05.647 18:17:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.647 ************************************ 00:23:05.647 START TEST nvmf_failover 00:23:05.647 ************************************ 00:23:05.647 18:17:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:23:05.647 * Looking for test storage... 
00:23:05.647 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:05.647 18:17:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:05.647 18:17:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:23:05.647 18:17:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:05.647 18:17:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:05.647 18:17:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:05.647 18:17:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:05.647 18:17:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:05.647 18:17:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:05.647 18:17:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:05.647 18:17:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:05.647 18:17:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:05.647 18:17:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:05.647 18:17:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:23:05.647 18:17:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:23:05.647 18:17:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:05.647 18:17:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:05.647 18:17:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:05.647 18:17:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:05.647 18:17:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:05.647 18:17:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:05.647 18:17:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:05.647 18:17:58 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:05.647 18:17:58 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.647 18:17:58 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.647 18:17:58 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.647 18:17:58 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:23:05.647 18:17:58 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:05.647 18:17:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:23:05.647 18:17:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:05.647 18:17:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:05.647 18:17:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:05.647 18:17:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:05.647 18:17:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:05.647 18:17:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:05.647 18:17:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:05.647 18:17:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:05.647 18:17:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:05.647 18:17:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:05.647 18:17:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:05.647 18:17:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:05.647 18:17:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 
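In outline, the failover exercise this script drives (as traced below) sets up one subsystem with three TCP listeners, points bdevperf at two of the paths, then pulls listeners out from under the running I/O. A condensed sketch of the rpc.py/bdevperf calls that follow, with workspace paths shortened:

    # Target: transport, malloc-backed namespace, listeners on 4420-4422
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    for port in 4420 4421 4422; do
        rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
    done

    # Host: bdevperf on its own RPC socket, two paths to the same NQN
    bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &

    # Failover: drop the active listener, add the spare path, drop the next
    rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421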
00:23:05.647 18:17:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:05.647 18:17:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:05.647 18:17:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:05.647 18:17:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:05.647 18:17:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:05.647 18:17:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:05.647 18:17:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:05.647 18:17:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:05.647 18:17:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:05.647 18:17:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:05.647 18:17:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:23:05.647 18:17:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:10.972 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:10.972 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:23:10.972 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:10.972 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:10.972 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:10.972 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:10.972 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:10.972 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:23:10.972 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:10.972 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:23:10.972 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:23:10.972 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:23:10.972 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:23:10.973 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:23:10.973 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:23:10.973 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:10.973 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:10.973 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:10.973 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:10.973 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:10.973 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:10.973 18:18:03 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:10.973 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:10.973 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:10.973 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:10.973 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:10.973 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:10.973 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:10.973 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:10.973 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:10.973 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:10.973 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:10.973 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:10.973 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:10.973 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:10.973 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:10.973 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:10.973 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:10.973 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:10.973 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:10.973 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:10.973 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:10.973 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:10.973 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:10.973 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:10.973 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:10.973 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:10.973 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:10.973 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:10.973 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:10.973 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:10.973 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:10.973 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:10.973 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:10.973 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:10.973 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:10.973 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:10.973 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:10.973 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:10.973 Found net devices under 0000:86:00.0: cvl_0_0 00:23:10.973 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:10.973 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:10.973 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:10.973 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:10.973 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:10.973 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:10.973 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:10.973 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:10.973 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:10.973 Found net devices under 0000:86:00.1: cvl_0_1 00:23:10.973 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:10.973 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:10.973 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:23:10.973 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:10.973 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:10.973 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:10.973 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:10.973 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:10.973 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:10.974 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:10.974 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:10.974 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:10.974 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:10.974 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:10.974 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:10.974 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:10.974 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:10.974 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:10.974 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:10.974 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:10.974 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:10.974 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:10.974 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:10.974 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:10.974 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:10.974 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:10.974 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:10.974 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:23:10.974 00:23:10.974 --- 10.0.0.2 ping statistics --- 00:23:10.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:10.974 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:23:10.974 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:10.974 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:10.974 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:23:10.974 00:23:10.974 --- 10.0.0.1 ping statistics --- 00:23:10.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:10.974 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:23:10.974 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:10.974 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:23:10.974 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:10.974 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:10.974 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:10.974 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:10.974 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:10.974 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:10.974 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:10.974 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:23:10.974 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:10.974 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:10.974 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:10.974 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:10.974 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=3499565 00:23:10.974 18:18:03 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 3499565 00:23:10.974 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 3499565 ']' 00:23:10.974 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:10.974 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:10.974 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:10.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:10.974 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:10.974 18:18:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:10.974 [2024-07-24 18:18:03.880083] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 00:23:10.974 [2024-07-24 18:18:03.880129] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:10.974 EAL: No free 2048 kB hugepages reported on node 1 00:23:10.974 [2024-07-24 18:18:03.938017] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:10.974 [2024-07-24 18:18:04.019055] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:10.974 [2024-07-24 18:18:04.019090] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:10.974 [2024-07-24 18:18:04.019097] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:10.974 [2024-07-24 18:18:04.019103] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:10.974 [2024-07-24 18:18:04.019108] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
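A note on the -m 0xE passed to nvmf_tgt here: it is a hex CPU-core bitmask, which is why the reactor notices below cover cores 1-3 only, while the earlier fio_host target (started with -m 0xF) ran reactors on cores 0-3. A small bash sketch to decode such a mask:

    mask=0xE                 # 0b1110 -> bits 1,2,3 set; core 0 left free
    for core in {0..3}; do
        (( (mask >> core) & 1 )) && echo "reactor on core $core"
    done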
00:23:10.974 [2024-07-24 18:18:04.019143] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:10.974 [2024-07-24 18:18:04.019163] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:10.974 [2024-07-24 18:18:04.019164] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:11.912 18:18:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:11.912 18:18:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:23:11.912 18:18:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:11.912 18:18:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:11.912 18:18:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:11.912 18:18:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:11.912 18:18:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:11.912 [2024-07-24 18:18:04.898352] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:11.912 18:18:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:12.172 Malloc0 00:23:12.172 18:18:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:12.431 18:18:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:12.431 18:18:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:12.691 [2024-07-24 18:18:05.645906] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:12.691 18:18:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:12.950 [2024-07-24 18:18:05.814330] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:12.950 18:18:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:12.950 [2024-07-24 18:18:05.994932] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:23:12.950 18:18:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:23:12.950 18:18:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3499855 00:23:12.950 18:18:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; 
nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:12.950 18:18:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3499855 /var/tmp/bdevperf.sock 00:23:12.950 18:18:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 3499855 ']' 00:23:12.950 18:18:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:12.950 18:18:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:12.950 18:18:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:12.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:12.951 18:18:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:12.951 18:18:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:13.887 18:18:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:13.887 18:18:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:23:13.887 18:18:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:14.145 NVMe0n1 00:23:14.145 18:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:14.403 00:23:14.403 18:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3500088 00:23:14.403 18:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:14.403 18:18:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:23:15.781 18:18:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:15.781 [2024-07-24 18:18:08.628895] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a1ef50 is same with the state(5) to be set 00:23:15.781 18:18:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:23:19.074 18:18:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:19.074 00:23:19.074 18:18:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:19.334 [2024-07-24 18:18:12.235146] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a1fd70 is same with the state(5) to be set 00:23:19.334 [2024-07-24 18:18:12.235184] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a1fd70 is same with the state(5) 
to be set 00:23:19.334 [2024-07-24 18:18:12.235191] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a1fd70 is same with the state(5) to be set [... identical recv-state notice repeated several more times for tqpair=0x1a1fd70 ...] 00:23:19.334 18:18:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:23:22.627 18:18:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:22.627 [2024-07-24 18:18:15.432441] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:22.627 18:18:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:23:23.564 18:18:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:23.564 [2024-07-24 18:18:16.633740] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd9b40 is same with the state(5) to be set [... identical recv-state notice repeated several more times for tqpair=0x1bd9b40 ...] 00:23:23.824 18:18:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 3500088 00:23:30.397 0 00:23:30.397 18:18:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 3499855 00:23:30.397 18:18:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950
-- # '[' -z 3499855 ']' 00:23:30.397 18:18:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 3499855 00:23:30.397 18:18:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:23:30.398 18:18:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:30.398 18:18:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3499855 00:23:30.398 18:18:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:30.398 18:18:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:30.398 18:18:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3499855' 00:23:30.398 killing process with pid 3499855 00:23:30.398 18:18:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 3499855 00:23:30.398 18:18:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 3499855 00:23:30.398 18:18:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:30.398 [2024-07-24 18:18:06.054004] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 00:23:30.398 [2024-07-24 18:18:06.054055] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3499855 ] 00:23:30.398 EAL: No free 2048 kB hugepages reported on node 1 00:23:30.398 [2024-07-24 18:18:06.108839] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:30.398 [2024-07-24 18:18:06.183686] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:30.398 Running I/O for 15 seconds... 
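Before the I/O run above starts, the trace has already built the whole topology over two RPC sockets. Collapsed into a replayable sketch (commands, addresses, and the NQN taken from the log; error handling and the surrounding test harness omitted):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

# Target side (default socket /var/tmp/spdk.sock): transport, backing bdev,
# subsystem with one namespace, and three TCP listeners on the same address.
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem $nqn -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns $nqn Malloc0
for port in 4420 4421 4422; do
    $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s $port
done

# Initiator side: bdevperf was launched with
#   bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f
# (-z = idle until driven over the RPC socket). Attaching two paths under the
# same bdev name gives bdev_nvme an alternate trid to fail over to.
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $nqn
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n $nqn
# The 15-second verify run is then kicked off with:
#   bdevperf.py -s /var/tmp/bdevperf.sock perform_tests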
00:23:30.398 [2024-07-24 18:18:08.629272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:97736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.398 [2024-07-24 18:18:08.629306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.398 [2024-07-24 18:18:08.629338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:98448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.398 [2024-07-24 18:18:08.629345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [... the same print_command/print_completion pairing repeats for every other command queued on the dropped 4420 path (READ lba 97744-98432, WRITE lba 98456-98752), each completing ABORTED - SQ DELETION (00/08) ...] 00:23:30.401 [2024-07-24 18:18:08.631187] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:30.401 [2024-07-24 18:18:08.631193] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:30.401 [2024-07-24 18:18:08.631201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98440 len:8 PRP1 0x0 PRP2 0x0 00:23:30.401 [2024-07-24 18:18:08.631208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.401 [2024-07-24 18:18:08.631248] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x13ff420 was disconnected and freed. reset controller.
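Abort floods like the one condensed above are easier to digest in aggregate. An illustrative one-liner (not part of the test suite) that summarizes the captured try.txt per opcode:

# Count aborted commands per opcode and report the LBA span they covered
grep -E '\*NOTICE\*: (READ|WRITE) sqid:' try.txt |
    sed -E 's/.*\*NOTICE\*: ([A-Z]+) .*lba:([0-9]+).*/\1 \2/' |
    awk '{ n[$1]++
           if (!($1 in lo) || $2 < lo[$1]) lo[$1] = $2
           if ($2 > hi[$1]) hi[$1] = $2 }
         END { for (op in n) printf "%s: %d aborted, lba %s-%s\n", op, n[op], lo[op], hi[op] }'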
00:23:30.401 [2024-07-24 18:18:08.631257] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:23:30.401 [2024-07-24 18:18:08.631277] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:30.401 [2024-07-24 18:18:08.631284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.401 [2024-07-24 18:18:08.631291] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:30.401 [2024-07-24 18:18:08.631297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.401 [2024-07-24 18:18:08.631304] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:30.401 [2024-07-24 18:18:08.631310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.401 [2024-07-24 18:18:08.631317] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:30.401 [2024-07-24 18:18:08.631323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.401 [2024-07-24 18:18:08.631330] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:30.401 [2024-07-24 18:18:08.634109] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:30.401 [2024-07-24 18:18:08.634137] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x140c540 (9): Bad file descriptor 00:23:30.401 [2024-07-24 18:18:08.662340] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
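The failover above was provoked purely from the target side: because both trids were registered at attach time, dropping the active listener is enough to make bdev_nvme reset onto the surviving path. The full sequence the script drove, reduced to its RPCs (reusing $rpc and $nqn from the earlier sketch; every command appears verbatim in the trace):

# 1. Drop the active path: queued I/O is aborted (SQ DELETION) and bdev_nvme
#    fails over from 10.0.0.2:4420 to the already-attached 10.0.0.2:4421.
$rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4420
sleep 3
# 2. Attach a third path, then drop the second one.
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n $nqn
$rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4421
sleep 3
# 3. Re-add the original listener and drop the third, forcing one last
#    failover before the 15-second run finishes.
$rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420
sleep 1
$rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4422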
00:23:30.401 [2024-07-24 18:18:12.235310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:38360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:30.401 [2024-07-24 18:18:12.235344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... ~126 similar NOTICE pairs (2024-07-24 18:18:12.235357 - .237130) elided: WRITE sqid:1 nsid:1 lba:38368-38616 len:8 (SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ sqid:1 nsid:1 lba:37608-38352 len:8 (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), varying cid, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:23:30.405 [2024-07-24 18:18:12.237148] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:30.405 [2024-07-24 18:18:12.237154] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:30.405 [2024-07-24 18:18:12.237161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38624 len:8 PRP1 0x0 PRP2 0x0
00:23:30.405 [2024-07-24 18:18:12.237170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:30.405 [2024-07-24 18:18:12.237210] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x14303f0 was disconnected and freed. reset controller.
00:23:30.405 [2024-07-24 18:18:12.237218] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:23:30.405 [2024-07-24 18:18:12.237236] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:23:30.405 [2024-07-24 18:18:12.237243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:30.405 [2024-07-24 18:18:12.237250] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:23:30.405 [2024-07-24 18:18:12.237256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:30.405 [2024-07-24 18:18:12.237263] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:23:30.405 [2024-07-24 18:18:12.237269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:30.405 [2024-07-24 18:18:12.237275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:30.405 [2024-07-24 18:18:12.237282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:30.405 [2024-07-24 18:18:12.237288] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:30.405 [2024-07-24 18:18:12.240065] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:30.405 [2024-07-24 18:18:12.240097] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x140c540 (9): Bad file descriptor
00:23:30.405 [2024-07-24 18:18:12.404282] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
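Note: every queued command in this cycle is completed with the same status, printed as ABORTED - SQ DELETION (00/08). The parenthesized pair is the NVMe status code type / status code: SCT 0x00 (generic command status) and SC 0x08 (command aborted due to SQ deletion), which is the expected completion when a queue pair is torn down with I/O in flight. A minimal shell decode of that printed pair:

  status="00/08"                              # as printed by spdk_nvme_print_completion
  sct=${status%/*}; sc=${status#*/}           # split "SCT/SC"
  printf 'sct=0x%s sc=0x%s\n' "$sct" "$sc"    # -> sct=0x00 sc=0x08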
00:23:30.405 [2024-07-24 18:18:16.633913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:91080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:30.405 [2024-07-24 18:18:16.633948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... ~49 similar NOTICE pairs (2024-07-24 18:18:16.633963 - .634655) elided: WRITE sqid:1 nsid:1 lba:91088-91320 len:8 (SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ sqid:1 nsid:1 lba:90320-90464 len:8 (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), varying cid, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:23:30.406 [2024-07-24 18:18:16.634662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:90472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:30.406 [2024-07-24 18:18:16.634668]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.406 [2024-07-24 18:18:16.634676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:90480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.406 [2024-07-24 18:18:16.634682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.406 [2024-07-24 18:18:16.634690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:90488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.406 [2024-07-24 18:18:16.634696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.406 [2024-07-24 18:18:16.634703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:90496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.406 [2024-07-24 18:18:16.634709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.406 [2024-07-24 18:18:16.634717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:90504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.406 [2024-07-24 18:18:16.634723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.406 [2024-07-24 18:18:16.634731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:90512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.406 [2024-07-24 18:18:16.634737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.406 [2024-07-24 18:18:16.634746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:90520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.406 [2024-07-24 18:18:16.634752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.406 [2024-07-24 18:18:16.634760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:90528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.406 [2024-07-24 18:18:16.634766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.406 [2024-07-24 18:18:16.634773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:90536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.406 [2024-07-24 18:18:16.634779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.406 [2024-07-24 18:18:16.634787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:90544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.406 [2024-07-24 18:18:16.634793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.406 [2024-07-24 18:18:16.634800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:90552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.406 [2024-07-24 18:18:16.634806] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.406 [2024-07-24 18:18:16.634814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:90560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.406 [2024-07-24 18:18:16.634820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.406 [2024-07-24 18:18:16.634827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.406 [2024-07-24 18:18:16.634833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.406 [2024-07-24 18:18:16.634841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:90576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.406 [2024-07-24 18:18:16.634847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.406 [2024-07-24 18:18:16.634855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:90584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.406 [2024-07-24 18:18:16.634861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.406 [2024-07-24 18:18:16.634869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:90592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.406 [2024-07-24 18:18:16.634875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.406 [2024-07-24 18:18:16.634882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:90600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.407 [2024-07-24 18:18:16.634889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.407 [2024-07-24 18:18:16.634896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:90608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.407 [2024-07-24 18:18:16.634903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.407 [2024-07-24 18:18:16.634910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:90616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.407 [2024-07-24 18:18:16.634917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.407 [2024-07-24 18:18:16.634925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:90624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.407 [2024-07-24 18:18:16.634931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.407 [2024-07-24 18:18:16.634940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:90632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.407 [2024-07-24 18:18:16.634946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.407 [2024-07-24 18:18:16.634953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:90640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.407 [2024-07-24 18:18:16.634959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.407 [2024-07-24 18:18:16.634967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:90648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.407 [2024-07-24 18:18:16.634973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.407 [2024-07-24 18:18:16.634980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:90656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.407 [2024-07-24 18:18:16.634987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.407 [2024-07-24 18:18:16.634994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:90664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.407 [2024-07-24 18:18:16.635001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.407 [2024-07-24 18:18:16.635008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:90672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.407 [2024-07-24 18:18:16.635014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.407 [2024-07-24 18:18:16.635022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:90680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.407 [2024-07-24 18:18:16.635028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.407 [2024-07-24 18:18:16.635036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:90688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.407 [2024-07-24 18:18:16.635042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.407 [2024-07-24 18:18:16.635050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:90696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.407 [2024-07-24 18:18:16.635056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.407 [2024-07-24 18:18:16.635065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:90704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.407 [2024-07-24 18:18:16.635071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.407 [2024-07-24 18:18:16.635080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:90712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.407 [2024-07-24 18:18:16.635086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.407 [2024-07-24 18:18:16.635095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:90720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.407 [2024-07-24 18:18:16.635101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.407 [2024-07-24 18:18:16.635109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:90728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.407 [2024-07-24 18:18:16.635115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.407 [2024-07-24 18:18:16.635123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:90736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.407 [2024-07-24 18:18:16.635129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.407 [2024-07-24 18:18:16.635137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:90744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.407 [2024-07-24 18:18:16.635143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.407 [2024-07-24 18:18:16.635151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:90752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.407 [2024-07-24 18:18:16.635157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.407 [2024-07-24 18:18:16.635165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:90760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.407 [2024-07-24 18:18:16.635171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.407 [2024-07-24 18:18:16.635179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:90768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.407 [2024-07-24 18:18:16.635185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.407 [2024-07-24 18:18:16.635193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:90776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.407 [2024-07-24 18:18:16.635199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.407 [2024-07-24 18:18:16.635206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:90784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.407 [2024-07-24 18:18:16.635213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.407 [2024-07-24 18:18:16.635220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:90792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.407 [2024-07-24 18:18:16.635226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.407 
[2024-07-24 18:18:16.635234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:90800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.407 [2024-07-24 18:18:16.635240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.407 [2024-07-24 18:18:16.635248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:90808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.407 [2024-07-24 18:18:16.635254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.407 [2024-07-24 18:18:16.635261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:90816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.407 [2024-07-24 18:18:16.635268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.407 [2024-07-24 18:18:16.635277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:90824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.407 [2024-07-24 18:18:16.635283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.407 [2024-07-24 18:18:16.635291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:90832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.407 [2024-07-24 18:18:16.635297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.407 [2024-07-24 18:18:16.635307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:90840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.407 [2024-07-24 18:18:16.635313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.407 [2024-07-24 18:18:16.635321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:90848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.407 [2024-07-24 18:18:16.635328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.407 [2024-07-24 18:18:16.635335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:90856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.407 [2024-07-24 18:18:16.635342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.407 [2024-07-24 18:18:16.635350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:90864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.408 [2024-07-24 18:18:16.635356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.408 [2024-07-24 18:18:16.635363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:90872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.408 [2024-07-24 18:18:16.635370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.408 [2024-07-24 18:18:16.635378] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:90880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.408 [2024-07-24 18:18:16.635384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.408 [2024-07-24 18:18:16.635392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:90888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.408 [2024-07-24 18:18:16.635398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.408 [2024-07-24 18:18:16.635406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:90896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.408 [2024-07-24 18:18:16.635412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.408 [2024-07-24 18:18:16.635419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:90904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.408 [2024-07-24 18:18:16.635425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.408 [2024-07-24 18:18:16.635433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:90912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.408 [2024-07-24 18:18:16.635440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.408 [2024-07-24 18:18:16.635447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:90920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.408 [2024-07-24 18:18:16.635455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.408 [2024-07-24 18:18:16.635462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:90928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.408 [2024-07-24 18:18:16.635468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.408 [2024-07-24 18:18:16.635477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:90936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.408 [2024-07-24 18:18:16.635484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.408 [2024-07-24 18:18:16.635496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:90944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.408 [2024-07-24 18:18:16.635502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.408 [2024-07-24 18:18:16.635510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:90952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.408 [2024-07-24 18:18:16.635516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.408 [2024-07-24 18:18:16.635524] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:90960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.408 [2024-07-24 18:18:16.635530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.408 [2024-07-24 18:18:16.635539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:90968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.408 [2024-07-24 18:18:16.635545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.408 [2024-07-24 18:18:16.635553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:90976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.408 [2024-07-24 18:18:16.635559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.408 [2024-07-24 18:18:16.635567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:90984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.408 [2024-07-24 18:18:16.635573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.408 [2024-07-24 18:18:16.635581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:90992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.408 [2024-07-24 18:18:16.635587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.408 [2024-07-24 18:18:16.635594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:91000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.408 [2024-07-24 18:18:16.635601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.408 [2024-07-24 18:18:16.635608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:91008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.408 [2024-07-24 18:18:16.635615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.408 [2024-07-24 18:18:16.635623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:91328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.408 [2024-07-24 18:18:16.635629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.408 [2024-07-24 18:18:16.635638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:91336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:30.408 [2024-07-24 18:18:16.635644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.408 [2024-07-24 18:18:16.635652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:91016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.408 [2024-07-24 18:18:16.635658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.408 [2024-07-24 18:18:16.635666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:5 nsid:1 lba:91024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.408 [2024-07-24 18:18:16.635672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.408 [2024-07-24 18:18:16.635680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:91032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.408 [2024-07-24 18:18:16.635686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.408 [2024-07-24 18:18:16.635694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:91040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.408 [2024-07-24 18:18:16.635700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.408 [2024-07-24 18:18:16.635708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:91048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.408 [2024-07-24 18:18:16.635714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.408 [2024-07-24 18:18:16.635722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:91056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.408 [2024-07-24 18:18:16.635728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.408 [2024-07-24 18:18:16.635735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:91064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:30.408 [2024-07-24 18:18:16.635741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.408 [2024-07-24 18:18:16.635748] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1430250 is same with the state(5) to be set 00:23:30.408 [2024-07-24 18:18:16.635756] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:30.408 [2024-07-24 18:18:16.635763] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:30.408 [2024-07-24 18:18:16.635771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:91072 len:8 PRP1 0x0 PRP2 0x0 00:23:30.408 [2024-07-24 18:18:16.635777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:30.408 [2024-07-24 18:18:16.635817] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1430250 was disconnected and freed. reset controller. 
00:23:30.408 [2024-07-24 18:18:16.635825] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:23:30.408 [2024-07-24 18:18:16.635846] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:30.408 [2024-07-24 18:18:16.635853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:30.408 [2024-07-24 18:18:16.635860] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:23:30.408 [2024-07-24 18:18:16.635867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:30.408 [2024-07-24 18:18:16.635875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:23:30.408 [2024-07-24 18:18:16.635881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:30.408 [2024-07-24 18:18:16.635888] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:23:30.408 [2024-07-24 18:18:16.635894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:30.408 [2024-07-24 18:18:16.635900] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:30.408 [2024-07-24 18:18:16.638673] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:30.408 [2024-07-24 18:18:16.638701] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x140c540 (9): Bad file descriptor
00:23:30.408 [2024-07-24 18:18:16.672136] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
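Reset notices like the one above are the pass criterion for the 15-second bdevperf run: failover.sh (line 65 in the trace below) counts 'Resetting controller successful' occurrences in the captured output and requires exactly one per failover hop, three in total. A minimal sketch of that check, assuming the capture file is the try.txt that is cat'ed and removed later in this trace (a condensation, not the verbatim script):

    # one successful controller reset is expected per failover hop, three in total
    count=$(grep -c 'Resetting controller successful' try.txt)
    (( count != 3 )) && exit 1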
00:23:30.408
00:23:30.408 Latency(us)
00:23:30.408 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:30.408 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:23:30.408 Verification LBA range: start 0x0 length 0x4000
00:23:30.408 NVMe0n1 : 15.01 11153.58 43.57 705.10 0.00 10772.33 413.50 15915.89
00:23:30.408 ===================================================================================================================
00:23:30.408 Total : 11153.58 43.57 705.10 0.00 10772.33 413.50 15915.89
00:23:30.408 Received shutdown signal, test time was about 15.000000 seconds
00:23:30.408
00:23:30.409 Latency(us)
00:23:30.409 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:30.409 ===================================================================================================================
00:23:30.409 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
18:18:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:23:30.409 18:18:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:23:30.409 18:18:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:23:30.409 18:18:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3502994
00:23:30.409 18:18:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:23:30.409 18:18:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3502994 /var/tmp/bdevperf.sock
00:23:30.409 18:18:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 3502994 ']'
00:23:30.409 18:18:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:23:30.409 18:18:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100
00:23:30.409 18:18:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
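From here the trace restarts bdevperf in standby mode (-z) on /var/tmp/bdevperf.sock and drives the next failover by hand: listeners are added on ports 4421 and 4422, the controller is attached through all three ports, and the active path is detached so bdev_nvme fails over to a surviving trid. Condensed to its essentials (every command below appears verbatim in the trace that follows; waits, pid handling, and full script paths are elided, so treat this as a sketch rather than the script itself):

    # expose the additional failover targets on the subsystem
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
    # register all three paths with the NVMe bdev inside bdevperf
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # drop the active path, then run I/O over the remaining ones
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    bdevperf.py -s /var/tmp/bdevperf.sock perform_tests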
00:23:30.409 18:18:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:30.409 18:18:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:30.674 18:18:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:30.674 18:18:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:23:30.674 18:18:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:30.934 [2024-07-24 18:18:23.847253] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:30.934 18:18:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:31.192 [2024-07-24 18:18:24.031738] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:23:31.192 18:18:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:31.452 NVMe0n1 00:23:31.452 18:18:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:31.712 00:23:31.712 18:18:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:32.279 00:23:32.279 18:18:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:32.279 18:18:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:23:32.279 18:18:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:32.538 18:18:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:23:35.824 18:18:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:35.824 18:18:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:23:35.824 18:18:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:35.824 18:18:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3503924 00:23:35.824 18:18:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 3503924 00:23:36.758 0 00:23:36.758 18:18:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:36.758 [2024-07-24 18:18:22.887294] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 00:23:36.758 [2024-07-24 18:18:22.887345] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3502994 ] 00:23:36.758 EAL: No free 2048 kB hugepages reported on node 1 00:23:36.758 [2024-07-24 18:18:22.942442] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:36.758 [2024-07-24 18:18:23.010997] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:36.758 [2024-07-24 18:18:25.450589] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:23:36.758 [2024-07-24 18:18:25.450634] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.758 [2024-07-24 18:18:25.450646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.758 [2024-07-24 18:18:25.450654] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.758 [2024-07-24 18:18:25.450661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.758 [2024-07-24 18:18:25.450668] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.758 [2024-07-24 18:18:25.450676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.758 [2024-07-24 18:18:25.450682] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:36.758 [2024-07-24 18:18:25.450689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:36.758 [2024-07-24 18:18:25.450695] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:36.758 [2024-07-24 18:18:25.450718] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:36.758 [2024-07-24 18:18:25.450732] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b87540 (9): Bad file descriptor 00:23:36.758 [2024-07-24 18:18:25.470966] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:23:36.758 Running I/O for 1 seconds... 
00:23:36.758 00:23:36.758 Latency(us) 00:23:36.758 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:36.758 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:36.758 Verification LBA range: start 0x0 length 0x4000 00:23:36.758 NVMe0n1 : 1.05 10852.96 42.39 0.00 0.00 11307.86 2075.31 42192.70 00:23:36.758 =================================================================================================================== 00:23:36.758 Total : 10852.96 42.39 0.00 0.00 11307.86 2075.31 42192.70 00:23:36.758 18:18:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:36.758 18:18:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:23:37.016 18:18:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:37.274 18:18:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:37.274 18:18:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:23:37.274 18:18:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:37.534 18:18:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:23:40.862 18:18:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:40.862 18:18:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:23:40.862 18:18:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 3502994 00:23:40.862 18:18:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 3502994 ']' 00:23:40.862 18:18:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 3502994 00:23:40.862 18:18:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:23:40.862 18:18:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:40.862 18:18:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3502994 00:23:40.862 18:18:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:40.862 18:18:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:40.862 18:18:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3502994' 00:23:40.862 killing process with pid 3502994 00:23:40.862 18:18:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 3502994 00:23:40.862 18:18:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 3502994 00:23:40.862 18:18:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:23:41.146 18:18:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:41.146 18:18:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:23:41.146 18:18:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:41.146 18:18:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:23:41.146 18:18:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:41.146 18:18:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:23:41.146 18:18:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:41.146 18:18:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:23:41.146 18:18:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:41.146 18:18:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:41.146 rmmod nvme_tcp 00:23:41.146 rmmod nvme_fabrics 00:23:41.146 rmmod nvme_keyring 00:23:41.146 18:18:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:41.146 18:18:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:23:41.146 18:18:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:23:41.146 18:18:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 3499565 ']' 00:23:41.146 18:18:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 3499565 00:23:41.146 18:18:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 3499565 ']' 00:23:41.146 18:18:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 3499565 00:23:41.146 18:18:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:23:41.146 18:18:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:41.146 18:18:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3499565 00:23:41.404 18:18:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:41.404 18:18:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:41.404 18:18:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3499565' 00:23:41.404 killing process with pid 3499565 00:23:41.404 18:18:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 3499565 00:23:41.404 18:18:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 3499565 00:23:41.404 18:18:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:41.404 18:18:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:41.404 18:18:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:41.404 18:18:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:41.404 18:18:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:41.404 18:18:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:41.404 18:18:34 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:41.404 18:18:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:43.934 18:18:36 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:43.934 00:23:43.934 real 0m38.189s 00:23:43.934 user 2m3.444s 00:23:43.934 sys 0m7.272s 00:23:43.934 18:18:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:43.934 18:18:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:43.934 ************************************ 00:23:43.934 END TEST nvmf_failover 00:23:43.934 ************************************ 00:23:43.934 18:18:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:43.934 18:18:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:43.934 18:18:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:43.934 18:18:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.934 ************************************ 00:23:43.934 START TEST nvmf_host_discovery 00:23:43.934 ************************************ 00:23:43.934 18:18:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:43.934 * Looking for test storage... 00:23:43.934 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:43.934 18:18:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:43.934 18:18:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:23:43.934 18:18:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:43.934 18:18:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:43.934 18:18:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:43.934 18:18:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:43.934 18:18:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:43.934 18:18:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:43.934 18:18:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:43.934 18:18:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:43.934 18:18:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:43.934 18:18:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:43.934 18:18:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:23:43.934 18:18:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:23:43.934 18:18:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:43.934 18:18:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:43.934 18:18:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:43.934 18:18:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:43.934 18:18:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:43.934 18:18:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:43.934 18:18:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:43.934 18:18:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:43.934 18:18:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.934 18:18:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.934 18:18:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.934 18:18:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:23:43.934 18:18:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.934 18:18:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:23:43.934 18:18:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:43.934 18:18:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:43.934 18:18:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:43.934 18:18:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:43.934 18:18:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:43.934 18:18:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:43.934 18:18:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:43.934 18:18:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:43.934 18:18:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:23:43.934 18:18:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:23:43.934 18:18:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:23:43.934 18:18:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:23:43.934 18:18:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:23:43.934 18:18:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:23:43.934 18:18:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:23:43.934 18:18:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:43.934 18:18:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:43.934 18:18:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:43.934 18:18:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:43.934 18:18:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:43.934 18:18:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:43.934 18:18:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:43.934 18:18:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:43.934 18:18:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:43.934 18:18:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:43.934 18:18:36 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:23:43.934 18:18:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:49.207 18:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:49.207 18:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:23:49.207 18:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:49.207 18:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:49.207 18:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:49.207 18:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:49.207 18:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:49.207 18:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:23:49.207 18:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:49.207 18:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:23:49.207 18:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:23:49.207 18:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:23:49.207 18:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:23:49.207 18:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:23:49.207 18:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:23:49.207 18:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:49.207 18:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:49.207 18:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:49.207 18:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:49.207 18:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:49.207 18:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:49.207 18:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:49.207 18:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:49.207 18:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:49.207 18:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:49.207 18:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:49.207 18:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:49.207 18:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:49.207 18:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:49.207 18:18:41 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:49.207 18:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:49.207 18:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:49.207 18:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:49.207 18:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:49.207 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:49.207 18:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:49.207 18:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:49.207 18:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:49.207 18:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:49.207 18:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:49.207 18:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:49.207 18:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:49.207 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:49.207 18:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:49.207 18:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:49.207 18:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:49.207 18:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:49.207 18:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:49.207 18:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:49.207 18:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:49.207 18:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:49.207 18:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:49.207 18:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:49.207 18:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:49.207 18:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:49.207 18:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:49.207 18:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:49.207 18:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:49.207 18:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:49.207 Found net devices under 0000:86:00.0: cvl_0_0 00:23:49.207 18:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:49.207 18:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci 
in "${pci_devs[@]}" 00:23:49.207 18:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:49.207 18:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:49.207 18:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:49.207 18:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:49.207 18:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:49.207 18:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:49.207 18:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:49.207 Found net devices under 0000:86:00.1: cvl_0_1 00:23:49.207 18:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:49.207 18:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:49.207 18:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:23:49.207 18:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:49.207 18:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:49.207 18:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:49.207 18:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:49.207 18:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:49.207 18:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:49.207 18:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:49.207 18:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:49.207 18:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:49.207 18:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:49.207 18:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:49.207 18:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:49.207 18:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:49.207 18:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:49.207 18:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:49.207 18:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:49.207 18:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:49.207 18:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:49.207 18:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:49.207 18:18:41 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:49.207 18:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:49.208 18:18:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:49.208 18:18:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:49.208 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:49.208 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:23:49.208 00:23:49.208 --- 10.0.0.2 ping statistics --- 00:23:49.208 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:49.208 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:23:49.208 18:18:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:49.208 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:49.208 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:23:49.208 00:23:49.208 --- 10.0.0.1 ping statistics --- 00:23:49.208 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:49.208 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:23:49.208 18:18:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:49.208 18:18:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:23:49.208 18:18:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:49.208 18:18:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:49.208 18:18:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:49.208 18:18:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:49.208 18:18:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:49.208 18:18:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:49.208 18:18:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:49.208 18:18:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:23:49.208 18:18:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:49.208 18:18:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:49.208 18:18:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:49.208 18:18:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=3508150 00:23:49.208 18:18:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 3508150 00:23:49.208 18:18:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:49.208 18:18:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 3508150 ']' 00:23:49.208 18:18:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:49.208 18:18:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 
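The nvmf_tcp_init sequence traced above reduces to a small amount of iproute2 plumbing: the target-side port is isolated in its own network namespace while the initiator-side port stays in the default namespace, and each side pings the other to prove the link. A condensed sketch using this run's names (cvl_0_0/cvl_0_1 are this host's E810 ports; nvmf/common.sh remains the authoritative version):

    ip netns add cvl_0_0_ns_spdk                        # target gets its own netns
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator IP, default netns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

Every command aimed at the target is then wrapped in ip netns exec cvl_0_0_ns_spdk, which is why the nvmf_tgt invocation below carries that prefix.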
00:23:49.208 18:18:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:49.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:49.208 18:18:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:49.208 18:18:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:49.208 [2024-07-24 18:18:42.096275] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 00:23:49.208 [2024-07-24 18:18:42.096317] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:49.208 EAL: No free 2048 kB hugepages reported on node 1 00:23:49.208 [2024-07-24 18:18:42.152317] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:49.208 [2024-07-24 18:18:42.229879] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:49.208 [2024-07-24 18:18:42.229914] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:49.208 [2024-07-24 18:18:42.229922] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:49.208 [2024-07-24 18:18:42.229928] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:49.208 [2024-07-24 18:18:42.229933] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:49.208 [2024-07-24 18:18:42.229952] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:50.141 18:18:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:50.141 18:18:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:23:50.141 18:18:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:50.142 18:18:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:50.142 18:18:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:50.142 18:18:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:50.142 18:18:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:50.142 18:18:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.142 18:18:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:50.142 [2024-07-24 18:18:42.931934] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:50.142 18:18:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:50.142 18:18:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:23:50.142 18:18:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.142 18:18:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # 
set +x 00:23:50.142 [2024-07-24 18:18:42.944075] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:50.142 18:18:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:50.142 18:18:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:23:50.142 18:18:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.142 18:18:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:50.142 null0 00:23:50.142 18:18:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:50.142 18:18:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:23:50.142 18:18:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.142 18:18:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:50.142 null1 00:23:50.142 18:18:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:50.142 18:18:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:23:50.142 18:18:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.142 18:18:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:50.142 18:18:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:50.142 18:18:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=3508393 00:23:50.142 18:18:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:23:50.142 18:18:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 3508393 /tmp/host.sock 00:23:50.142 18:18:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 3508393 ']' 00:23:50.142 18:18:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:23:50.142 18:18:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:50.142 18:18:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:50.142 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:50.142 18:18:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:50.142 18:18:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:50.142 [2024-07-24 18:18:43.016532] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 
00:23:50.142 [2024-07-24 18:18:43.016570] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3508393 ] 00:23:50.142 EAL: No free 2048 kB hugepages reported on node 1 00:23:50.142 [2024-07-24 18:18:43.070238] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:50.142 [2024-07-24 18:18:43.142514] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:51.073 18:18:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:51.073 18:18:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:23:51.073 18:18:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:51.073 18:18:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:23:51.073 18:18:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.073 18:18:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:51.073 18:18:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.073 18:18:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:23:51.073 18:18:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.073 18:18:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:51.073 18:18:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.073 18:18:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:23:51.073 18:18:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:23:51.073 18:18:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:51.073 18:18:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:51.073 18:18:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.073 18:18:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:51.073 18:18:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:51.073 18:18:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:51.073 18:18:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.073 18:18:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:23:51.073 18:18:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:23:51.073 18:18:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:51.073 18:18:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.073 18:18:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:51.073 18:18:43 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:51.073 18:18:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:51.073 18:18:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:51.073 18:18:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.073 18:18:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:23:51.073 18:18:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:23:51.073 18:18:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.073 18:18:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:51.073 18:18:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.073 18:18:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:23:51.073 18:18:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:51.073 18:18:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.073 18:18:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:51.073 18:18:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:51.073 18:18:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:51.073 18:18:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:51.073 18:18:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.073 18:18:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:23:51.073 18:18:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:23:51.073 18:18:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:51.073 18:18:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:51.073 18:18:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.073 18:18:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:51.073 18:18:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:51.073 18:18:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:51.073 18:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.073 18:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:23:51.073 18:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:23:51.073 18:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.073 18:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:51.073 18:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.073 18:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:23:51.073 18:18:44 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:51.073 18:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:51.073 18:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:51.073 18:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.074 18:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:51.074 18:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:51.074 18:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.074 18:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:23:51.074 18:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:23:51.074 18:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:51.074 18:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:51.074 18:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:51.074 18:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.074 18:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:51.074 18:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:51.074 18:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.074 18:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:23:51.074 18:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:51.074 18:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.074 18:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:51.074 [2024-07-24 18:18:44.151264] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:51.074 18:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.331 18:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:23:51.331 18:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:51.332 18:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:51.332 18:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:51.332 18:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.332 18:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:51.332 18:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:51.332 18:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.332 18:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:23:51.332 18:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # 
get_bdev_list 00:23:51.332 18:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:51.332 18:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:51.332 18:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:51.332 18:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.332 18:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:51.332 18:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:51.332 18:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.332 18:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:23:51.332 18:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:23:51.332 18:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:51.332 18:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:51.332 18:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:51.332 18:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:51.332 18:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:51.332 18:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:51.332 18:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:23:51.332 18:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:51.332 18:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:51.332 18:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.332 18:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:51.332 18:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.332 18:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:51.332 18:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:23:51.332 18:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:23:51.332 18:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:51.332 18:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:23:51.332 18:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.332 18:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:51.332 18:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.332 18:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:51.332 18:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:51.332 18:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:51.332 18:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:51.332 18:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:51.332 18:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:23:51.332 18:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:51.332 18:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:51.332 18:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.332 18:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:51.332 18:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:51.332 18:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:51.332 18:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.332 18:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:23:51.332 18:18:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:23:51.897 [2024-07-24 18:18:44.831002] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:51.897 [2024-07-24 18:18:44.831021] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:51.897 [2024-07-24 18:18:44.831032] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:51.897 
[2024-07-24 18:18:44.918287] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:23:52.154 [2024-07-24 18:18:44.983172] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:52.154 [2024-07-24 18:18:44.983191] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:52.411 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:52.411 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:52.411 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:23:52.411 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:52.411 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:52.411 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.411 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:52.411 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:52.411 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:52.411 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.411 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:52.411 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:52.411 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:52.411 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:52.411 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:52.411 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:52.411 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:23:52.411 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:23:52.411 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:52.411 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:52.411 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.411 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:52.411 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:52.411 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:52.411 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.411 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 
00:23:52.411 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:52.411 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:52.411 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:52.411 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:52.411 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:52.411 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:23:52.411 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:23:52.411 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:52.411 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:52.411 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.411 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:52.411 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:52.411 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:52.411 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.669 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:23:52.669 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:52.669 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:23:52.669 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:52.669 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:52.669 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:52.669 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:52.669 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:52.669 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:52.669 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:23:52.669 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:52.669 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:52.669 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.669 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:52.669 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.669 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:52.669 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:23:52.669 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:23:52.669 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:52.669 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:23:52.669 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.669 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:52.669 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.669 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:52.669 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:52.669 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:52.669 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:52.669 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:52.669 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:23:52.669 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:52.669 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:52.669 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.669 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:52.669 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:52.669 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:52.926 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.926 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:52.926 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:52.926 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:23:52.926 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:52.926 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:52.926 18:18:45 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:52.926 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:52.926 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:52.926 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:52.926 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:23:52.926 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:23:52.926 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:52.926 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.926 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:52.926 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.926 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:52.926 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:52.926 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:23:52.926 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:52.926 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:23:52.926 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.926 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:52.926 [2024-07-24 18:18:45.867995] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:52.926 [2024-07-24 18:18:45.868435] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:52.926 [2024-07-24 18:18:45.868455] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:52.926 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.926 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:52.926 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:52.926 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:52.926 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:52.926 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:52.926 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:23:52.926 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s 
/tmp/host.sock bdev_nvme_get_controllers 00:23:52.926 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:52.926 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.926 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:52.926 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:52.926 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:52.926 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.926 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:52.926 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:52.926 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:52.926 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:52.926 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:52.926 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:52.926 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:52.926 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:23:52.926 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:52.926 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:52.926 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:52.926 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.926 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:52.926 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:52.926 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:52.926 [2024-07-24 18:18:45.956031] bdev_nvme.c:6935:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:23:52.926 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:52.926 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:52.926 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:52.926 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:52.926 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:52.926 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:52.926 18:18:45 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:52.926 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:23:52.926 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:52.926 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:52.926 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:52.926 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:52.926 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:52.926 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:52.926 18:18:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.183 18:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:23:53.183 18:18:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:23:53.183 [2024-07-24 18:18:46.058680] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:53.183 [2024-07-24 18:18:46.058696] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:53.183 [2024-07-24 18:18:46.058701] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:54.112 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:54.112 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:54.112 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:23:54.112 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:54.112 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:54.112 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:54.112 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:54.112 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:54.112 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:54.112 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:54.112 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:54.112 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:54.112 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:23:54.112 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:54.112 18:18:47 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:54.112 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:54.112 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:54.112 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:54.112 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:54.112 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:23:54.112 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:54.112 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:54.112 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:54.112 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:54.112 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:54.112 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:54.112 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:54.112 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:23:54.112 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:54.112 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:54.112 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:54.112 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:54.112 [2024-07-24 18:18:47.123762] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:54.112 [2024-07-24 18:18:47.123782] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:54.112 [2024-07-24 18:18:47.126479] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.112 [2024-07-24 18:18:47.126497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.112 [2024-07-24 18:18:47.126510] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.112 [2024-07-24 18:18:47.126516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.112 [2024-07-24 18:18:47.126523] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.112 [2024-07-24 18:18:47.126529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.112 [2024-07-24 18:18:47.126535] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.112 [2024-07-24 18:18:47.126541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.112 [2024-07-24 18:18:47.126548] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe80f30 is same with the state(5) to be set 00:23:54.112 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:54.112 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:54.112 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:54.112 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:54.112 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:54.112 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:54.112 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:23:54.112 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:54.112 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:54.112 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:54.112 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:54.112 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:54.112 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:54.112 [2024-07-24 18:18:47.136494] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe80f30 (9): Bad file descriptor 00:23:54.112 [2024-07-24 18:18:47.146529] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:54.112 [2024-07-24 18:18:47.146749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.112 [2024-07-24 18:18:47.146763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe80f30 with addr=10.0.0.2, port=4420 00:23:54.112 [2024-07-24 18:18:47.146770] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe80f30 is same with the state(5) to be set 00:23:54.112 [2024-07-24 18:18:47.146781] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe80f30 (9): Bad file descriptor 00:23:54.112 [2024-07-24 18:18:47.146790] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:54.112 [2024-07-24 18:18:47.146796] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:54.112 [2024-07-24 18:18:47.146803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:23:54.112 [2024-07-24 18:18:47.146813] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:54.112 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:54.112 [2024-07-24 18:18:47.156581] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:54.112 [2024-07-24 18:18:47.156852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.112 [2024-07-24 18:18:47.156863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe80f30 with addr=10.0.0.2, port=4420 00:23:54.112 [2024-07-24 18:18:47.156869] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe80f30 is same with the state(5) to be set 00:23:54.112 [2024-07-24 18:18:47.156879] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe80f30 (9): Bad file descriptor 00:23:54.112 [2024-07-24 18:18:47.156893] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:54.112 [2024-07-24 18:18:47.156899] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:54.112 [2024-07-24 18:18:47.156905] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:54.112 [2024-07-24 18:18:47.156913] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:54.112 [2024-07-24 18:18:47.166630] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:54.112 [2024-07-24 18:18:47.166893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.112 [2024-07-24 18:18:47.166904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe80f30 with addr=10.0.0.2, port=4420 00:23:54.112 [2024-07-24 18:18:47.166910] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe80f30 is same with the state(5) to be set 00:23:54.112 [2024-07-24 18:18:47.166919] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe80f30 (9): Bad file descriptor 00:23:54.112 [2024-07-24 18:18:47.166958] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:54.112 [2024-07-24 18:18:47.166966] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:54.112 [2024-07-24 18:18:47.166972] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:54.112 [2024-07-24 18:18:47.166981] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
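
The repeating NOTICE/ERROR blocks here are the expected noise for this step: the test has just removed the 4420 listener, so every reconnect attempt to 10.0.0.2:4420 is refused (errno 111 is ECONNREFUSED on Linux) and bdev_nvme keeps resetting the controller until the discovery service prunes the stale path a few entries below ("4420 not found", "4421 found again"). While that happens, the harness polls its assertion through the waitforcondition helper whose internals are visible in the autotest_common.sh@914-920 xtrace. A minimal reconstruction from that trace (the shipped autotest_common.sh may differ in detail):

    # Sketch pieced together from the @914/@915/@916/@917/@918/@920 trace lines above.
    waitforcondition() {
        local cond=$1   # e.g. '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]'
        local max=10    # poll budget of roughly ten seconds
        while (( max-- )); do
            eval "$cond" && return 0   # condition met
            sleep 1                    # retry once per second
        done
        return 1                       # condition never became true; caller fails the test
    }
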
00:23:54.112 [2024-07-24 18:18:47.176678] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:54.112 [2024-07-24 18:18:47.176939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.112 [2024-07-24 18:18:47.176951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe80f30 with addr=10.0.0.2, port=4420 00:23:54.112 [2024-07-24 18:18:47.176958] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe80f30 is same with the state(5) to be set 00:23:54.112 [2024-07-24 18:18:47.176967] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe80f30 (9): Bad file descriptor 00:23:54.112 [2024-07-24 18:18:47.176982] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:54.112 [2024-07-24 18:18:47.176988] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:54.112 [2024-07-24 18:18:47.176994] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:54.112 [2024-07-24 18:18:47.177002] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:54.112 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:54.112 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:54.113 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:54.113 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:54.113 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:54.113 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:54.113 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:54.113 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:23:54.113 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:54.113 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:54.113 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:54.113 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:54.113 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:54.113 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:54.113 [2024-07-24 18:18:47.186729] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:54.113 [2024-07-24 18:18:47.186976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.113 [2024-07-24 18:18:47.186988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe80f30 with addr=10.0.0.2, port=4420 00:23:54.113 [2024-07-24 18:18:47.186997] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0xe80f30 is same with the state(5) to be set 00:23:54.113 [2024-07-24 18:18:47.187008] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe80f30 (9): Bad file descriptor 00:23:54.113 [2024-07-24 18:18:47.187030] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:54.113 [2024-07-24 18:18:47.187037] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:54.113 [2024-07-24 18:18:47.187043] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:54.113 [2024-07-24 18:18:47.187053] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:54.380 [2024-07-24 18:18:47.196782] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:54.380 [2024-07-24 18:18:47.196967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.380 [2024-07-24 18:18:47.196979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe80f30 with addr=10.0.0.2, port=4420 00:23:54.380 [2024-07-24 18:18:47.196985] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe80f30 is same with the state(5) to be set 00:23:54.380 [2024-07-24 18:18:47.196995] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe80f30 (9): Bad file descriptor 00:23:54.380 [2024-07-24 18:18:47.197004] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:54.380 [2024-07-24 18:18:47.197009] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:54.380 [2024-07-24 18:18:47.197016] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:54.380 [2024-07-24 18:18:47.197024] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:54.380 [2024-07-24 18:18:47.206833] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:54.380 [2024-07-24 18:18:47.207091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:54.380 [2024-07-24 18:18:47.207102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe80f30 with addr=10.0.0.2, port=4420 00:23:54.380 [2024-07-24 18:18:47.207109] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe80f30 is same with the state(5) to be set 00:23:54.380 [2024-07-24 18:18:47.207122] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe80f30 (9): Bad file descriptor 00:23:54.380 [2024-07-24 18:18:47.207135] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:54.380 [2024-07-24 18:18:47.207142] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:54.380 [2024-07-24 18:18:47.207148] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:54.380 [2024-07-24 18:18:47.207156] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
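
In between the reconnect noise, each [[ ... == ... ]] assertion compares a freshly collected list against an expected literal, and the collectors are short RPC-plus-jq pipelines whose shape can be read straight off the host/discovery.sh@55, @63 and @74 trace lines. Hedged reconstructions (the actual discovery.sh may differ slightly; the notify_id update in particular is inferred from the notification_count/notify_id values printed in the trace):

    # Inferred from the xtrace above; socket path and RPC names as seen in the log.
    get_bdev_list() {        # -> "nvme0n1 nvme0n2"
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    get_subsystem_paths() {  # -> "4420 4421" while both listeners are attached
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" |
            jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }
    get_notification_count() {  # counts bdev notifications newer than $notify_id
        notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }

The trailing xargs flattens the sorted output onto one space-separated line, which is what makes the plain string comparisons against "nvme0n1 nvme0n2" and "4420 4421" possible.
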
00:23:54.380 [2024-07-24 18:18:47.210734] bdev_nvme.c:6798:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:23:54.380 [2024-07-24 18:18:47.210748] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:54.380 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:54.380 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:54.380 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:54.380 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:54.380 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:54.380 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:54.380 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:54.380 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:23:54.380 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:23:54.380 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:54.380 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:54.380 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:54.380 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:54.380 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:54.380 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:54.380 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:54.380 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:23:54.380 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:54.380 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:23:54.380 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:54.380 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:54.380 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:54.380 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:54.380 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:54.380 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count 
'&&' '((notification_count' == 'expected_count))' 00:23:54.380 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:23:54.380 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:54.380 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:54.380 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:54.381 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:54.381 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:54.381 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:54.381 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:54.381 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:23:54.381 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:54.381 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:23:54.381 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:54.381 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:54.381 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:54.381 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:23:54.381 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:23:54.381 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:54.381 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:54.381 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:23:54.381 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:23:54.381 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:54.381 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:54.381 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:54.381 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:54.381 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:54.381 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:54.381 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:54.381 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:23:54.381 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:54.381 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:23:54.381 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:23:54.381 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:54.381 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:54.381 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:23:54.381 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:23:54.381 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:54.381 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:54.381 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:54.381 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:54.381 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:54.381 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:54.381 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:54.381 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:23:54.381 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:54.381 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:23:54.381 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:23:54.381 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:54.381 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:54.381 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:23:54.381 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:23:54.381 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:54.381 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:23:54.381 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:54.381 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:54.381 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:54.381 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:54.637 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:54.637 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:23:54.637 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:23:54.637 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:23:54.637 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:23:54.637 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:54.637 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:54.637 18:18:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:55.566 [2024-07-24 18:18:48.510375] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:55.566 [2024-07-24 18:18:48.510391] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:55.566 [2024-07-24 18:18:48.510402] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:55.566 [2024-07-24 18:18:48.597681] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:23:55.823 [2024-07-24 18:18:48.860329] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:55.823 [2024-07-24 18:18:48.860355] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:55.823 18:18:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:55.823 18:18:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:55.823 18:18:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:23:55.823 18:18:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:55.823 18:18:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:55.823 18:18:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:55.823 18:18:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:55.823 18:18:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:55.823 18:18:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:55.823 18:18:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:55.823 18:18:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # 
set +x
00:23:55.823 request:
00:23:55.823 {
00:23:55.823 "name": "nvme",
00:23:55.823 "trtype": "tcp",
00:23:55.823 "traddr": "10.0.0.2",
00:23:55.823 "adrfam": "ipv4",
00:23:55.823 "trsvcid": "8009",
00:23:55.823 "hostnqn": "nqn.2021-12.io.spdk:test",
00:23:55.823 "wait_for_attach": true,
00:23:55.823 "method": "bdev_nvme_start_discovery",
00:23:55.823 "req_id": 1
00:23:55.823 }
00:23:55.823 Got JSON-RPC error response
00:23:55.823 response:
00:23:55.823 {
00:23:55.823 "code": -17,
00:23:55.823 "message": "File exists"
00:23:55.823 }
00:23:55.823 18:18:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:55.823 18:18:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:23:55.823 18:18:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:55.823 18:18:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:55.823 18:18:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:55.823 18:18:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:23:55.823 18:18:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:55.823 18:18:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:55.823 18:18:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:55.823 18:18:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:55.823 18:18:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:55.823 18:18:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:55.823 18:18:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:56.080 18:18:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:23:56.080 18:18:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:23:56.080 18:18:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:56.080 18:18:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:56.080 18:18:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:56.080 18:18:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:56.080 18:18:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:56.080 18:18:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:56.080 18:18:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:56.080 18:18:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:56.080 18:18:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:56.080 18:18:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:23:56.080 18:18:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # 
valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:56.080 18:18:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:56.080 18:18:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:56.080 18:18:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:56.080 18:18:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:56.080 18:18:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:56.080 18:18:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:56.080 18:18:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:56.080 request: 00:23:56.080 { 00:23:56.080 "name": "nvme_second", 00:23:56.080 "trtype": "tcp", 00:23:56.080 "traddr": "10.0.0.2", 00:23:56.080 "adrfam": "ipv4", 00:23:56.080 "trsvcid": "8009", 00:23:56.080 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:56.080 "wait_for_attach": true, 00:23:56.080 "method": "bdev_nvme_start_discovery", 00:23:56.080 "req_id": 1 00:23:56.080 } 00:23:56.080 Got JSON-RPC error response 00:23:56.080 response: 00:23:56.080 { 00:23:56.080 "code": -17, 00:23:56.080 "message": "File exists" 00:23:56.080 } 00:23:56.080 18:18:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:56.080 18:18:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:23:56.080 18:18:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:56.080 18:18:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:56.080 18:18:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:56.080 18:18:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:23:56.080 18:18:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:56.080 18:18:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:56.080 18:18:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:56.080 18:18:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:56.080 18:18:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:56.080 18:18:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:56.080 18:18:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:56.080 18:18:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:23:56.080 18:18:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:23:56.080 18:18:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:56.080 18:18:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:56.080 18:18:49 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:56.080 18:18:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:56.080 18:18:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:56.080 18:18:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:56.080 18:18:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:56.080 18:18:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:56.080 18:18:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:56.080 18:18:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:23:56.080 18:18:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:56.080 18:18:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:56.080 18:18:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:56.080 18:18:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:56.080 18:18:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:56.081 18:18:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:56.081 18:18:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:56.081 18:18:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:57.451 [2024-07-24 18:18:50.117742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:57.451 [2024-07-24 18:18:50.117779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeb22a0 with addr=10.0.0.2, port=8010 00:23:57.451 [2024-07-24 18:18:50.117795] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:57.451 [2024-07-24 18:18:50.117801] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:57.451 [2024-07-24 18:18:50.117807] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:58.380 [2024-07-24 18:18:51.120218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:58.381 [2024-07-24 18:18:51.120244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xeb22a0 with addr=10.0.0.2, port=8010 00:23:58.381 [2024-07-24 18:18:51.120255] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:58.381 [2024-07-24 18:18:51.120261] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:58.381 [2024-07-24 18:18:51.120266] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:23:59.310 [2024-07-24 18:18:52.122388] 
bdev_nvme.c:7054:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:23:59.310 request: 00:23:59.310 { 00:23:59.310 "name": "nvme_second", 00:23:59.310 "trtype": "tcp", 00:23:59.310 "traddr": "10.0.0.2", 00:23:59.310 "adrfam": "ipv4", 00:23:59.310 "trsvcid": "8010", 00:23:59.310 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:59.310 "wait_for_attach": false, 00:23:59.310 "attach_timeout_ms": 3000, 00:23:59.310 "method": "bdev_nvme_start_discovery", 00:23:59.310 "req_id": 1 00:23:59.310 } 00:23:59.310 Got JSON-RPC error response 00:23:59.310 response: 00:23:59.310 { 00:23:59.310 "code": -110, 00:23:59.310 "message": "Connection timed out" 00:23:59.310 } 00:23:59.310 18:18:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:59.310 18:18:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:23:59.310 18:18:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:59.310 18:18:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:59.310 18:18:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:59.310 18:18:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:23:59.310 18:18:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:59.311 18:18:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:59.311 18:18:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.311 18:18:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:59.311 18:18:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:59.311 18:18:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:59.311 18:18:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.311 18:18:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:23:59.311 18:18:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:23:59.311 18:18:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 3508393 00:23:59.311 18:18:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:23:59.311 18:18:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:59.311 18:18:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:23:59.311 18:18:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:59.311 18:18:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:23:59.311 18:18:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:59.311 18:18:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:59.311 rmmod nvme_tcp 00:23:59.311 rmmod nvme_fabrics 00:23:59.311 rmmod nvme_keyring 00:23:59.311 18:18:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:59.311 18:18:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:23:59.311 18:18:52 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:23:59.311 18:18:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 3508150 ']' 00:23:59.311 18:18:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 3508150 00:23:59.311 18:18:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 3508150 ']' 00:23:59.311 18:18:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 3508150 00:23:59.311 18:18:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:23:59.311 18:18:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:59.311 18:18:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3508150 00:23:59.311 18:18:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:59.311 18:18:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:59.311 18:18:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3508150' 00:23:59.311 killing process with pid 3508150 00:23:59.311 18:18:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 3508150 00:23:59.311 18:18:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 3508150 00:23:59.568 18:18:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:59.568 18:18:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:59.568 18:18:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:59.568 18:18:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:59.568 18:18:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:59.568 18:18:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:59.568 18:18:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:59.568 18:18:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:01.467 18:18:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:01.467 00:24:01.467 real 0m17.960s 00:24:01.467 user 0m22.848s 00:24:01.467 sys 0m5.385s 00:24:01.467 18:18:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:01.467 18:18:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:01.467 ************************************ 00:24:01.467 END TEST nvmf_host_discovery 00:24:01.467 ************************************ 00:24:01.725 18:18:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:24:01.725 18:18:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:01.725 18:18:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:01.725 18:18:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.725 
************************************ 00:24:01.725 START TEST nvmf_host_multipath_status 00:24:01.725 ************************************ 00:24:01.725 18:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:24:01.725 * Looking for test storage... 00:24:01.725 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:01.725 18:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:01.725 18:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:24:01.725 18:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:01.725 18:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:01.725 18:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:01.725 18:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:01.725 18:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:01.725 18:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:01.725 18:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:01.725 18:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:01.725 18:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:01.725 18:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:01.725 18:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:24:01.725 18:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:24:01.725 18:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:01.725 18:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:01.725 18:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:01.725 18:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:01.725 18:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:01.725 18:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:01.725 18:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:01.725 18:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:01.725 18:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.725 18:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.725 18:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.725 18:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:24:01.725 18:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.725 18:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:24:01.725 18:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:01.725 18:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:01.725 18:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:01.725 18:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:01.725 18:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:01.725 18:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:24:01.725 18:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:01.725 18:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:01.725 18:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:01.725 18:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:01.725 18:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:01.725 18:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:24:01.725 18:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:01.725 18:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:24:01.725 18:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:24:01.725 18:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:01.725 18:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:01.725 18:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:01.725 18:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:01.725 18:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:01.725 18:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:01.725 18:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:01.725 18:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:01.725 18:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:01.725 18:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:01.725 18:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:24:01.725 18:18:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:06.988 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:06.988 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:24:06.988 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:06.988 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:06.988 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:06.988 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:06.988 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:06.988 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@295 -- # net_devs=() 00:24:06.988 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:06.988 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:24:06.988 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:24:06.988 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:24:06.988 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:24:06.988 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:24:06.988 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:24:06.988 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:06.988 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:06.988 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:06.988 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:06.988 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:06.988 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:06.988 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:06.988 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:06.988 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:06.988 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:06.988 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:06.988 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:06.988 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:06.988 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:06.988 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:06.988 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:06.988 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:06.988 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:06.988 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:06.988 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:06.988 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:06.988 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:06.988 
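Note: the device-discovery step above builds per-family PCI ID lists (e810: 0x1592/0x159b, x722: 0x37d2, mlx: 0xa2dc/0x1021/0xa2d6/0x101d/0x1017/0x1019/0x1015/0x1013, under intel=0x8086 / mellanox=0x15b3) and, with the NIC family fixed to e810 (nvmf/common.sh@329 [[ e810 == e810 ]]), keeps only the E810 functions as pci_devs. A minimal sketch of the same lookup, assuming lspci is available on the node (lspci itself is not part of this trace):

  # list E810 functions by vendor:device ID, the pairs matched at nvmf/common.sh@301-302
  lspci -d 8086:1592
  lspci -d 8086:159b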
18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:06.988 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:06.988 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:06.988 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:06.988 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:06.988 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:06.988 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:06.988 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:06.988 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:06.988 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:06.988 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:06.988 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:06.988 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:06.988 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:06.988 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:06.988 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:06.988 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:06.988 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:06.988 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:06.988 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:06.988 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:06.988 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:06.988 Found net devices under 0000:86:00.0: cvl_0_0 00:24:06.988 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:06.988 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:06.988 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:06.988 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:06.988 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:06.988 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:06.988 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:06.988 18:18:59 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:06.988 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:06.988 Found net devices under 0000:86:00.1: cvl_0_1 00:24:06.988 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:06.988 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:06.988 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:24:06.988 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:06.988 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:06.988 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:06.988 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:06.988 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:06.988 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:06.988 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:06.988 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:06.988 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:06.988 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:06.988 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:06.988 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:06.988 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:06.988 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:06.988 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:06.988 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:06.988 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:06.988 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:06.988 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:06.988 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:06.988 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:06.988 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:06.988 18:18:59 
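Note: nvmf_tcp_init above wires the two discovered E810 ports (cvl_0_0, cvl_0_1) into a back-to-back test bed: the target port moves into the cvl_0_0_ns_spdk namespace with 10.0.0.2, the initiator port stays in the root namespace with 10.0.0.1, and TCP 4420 is opened toward the target. Condensed from the trace:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT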
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:06.988 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:06.989 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:24:06.989 00:24:06.989 --- 10.0.0.2 ping statistics --- 00:24:06.989 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:06.989 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:24:06.989 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:06.989 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:06.989 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:24:06.989 00:24:06.989 --- 10.0.0.1 ping statistics --- 00:24:06.989 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:06.989 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:24:06.989 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:06.989 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:24:06.989 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:06.989 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:06.989 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:06.989 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:06.989 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:06.989 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:06.989 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:06.989 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:24:06.989 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:06.989 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:06.989 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:06.989 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=3513463 00:24:06.989 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 3513463 00:24:06.989 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:24:06.989 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 3513463 ']' 00:24:06.989 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:06.989 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:06.989 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:24:06.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:06.989 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:06.989 18:18:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:06.989 [2024-07-24 18:18:59.974763] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 00:24:06.989 [2024-07-24 18:18:59.974803] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:06.989 EAL: No free 2048 kB hugepages reported on node 1 00:24:06.989 [2024-07-24 18:19:00.032744] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:07.244 [2024-07-24 18:19:00.115143] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:07.244 [2024-07-24 18:19:00.115181] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:07.244 [2024-07-24 18:19:00.115188] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:07.244 [2024-07-24 18:19:00.115193] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:07.244 [2024-07-24 18:19:00.115198] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:07.244 [2024-07-24 18:19:00.115268] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:07.244 [2024-07-24 18:19:00.115271] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:07.807 18:19:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:07.807 18:19:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:24:07.807 18:19:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:07.807 18:19:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:07.807 18:19:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:07.807 18:19:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:07.807 18:19:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3513463 00:24:07.807 18:19:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:08.064 [2024-07-24 18:19:00.962302] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:08.064 18:19:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:08.321 Malloc0 00:24:08.321 18:19:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:24:08.321 18:19:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
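Note: the target is now up inside the namespace (ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3, pid 3513463, two reactors on cores 0-1), and multipath_status.sh provisions it over /var/tmp/spdk.sock. Condensed from the trace, with rpc.py standing in for the full scripts/rpc.py path:

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0     # 64 MB backing bdev, 512-byte blocks
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
      -a -s SPDK00000000000001 -r -m 2            # -r: ANA reporting, needed for the state flips below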
host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:08.578 18:19:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:08.578 [2024-07-24 18:19:01.638531] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:08.578 18:19:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:08.835 [2024-07-24 18:19:01.802912] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:08.835 18:19:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:24:08.835 18:19:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3513724 00:24:08.835 18:19:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:08.835 18:19:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3513724 /var/tmp/bdevperf.sock 00:24:08.835 18:19:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 3513724 ']' 00:24:08.835 18:19:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:08.835 18:19:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:08.835 18:19:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:08.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
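Note: exporting the namespace through two listeners on the same address is what gives the host two distinguishable paths (trsvcid 4420 and 4421); bdevperf is then started in passive RPC mode (-z) so the test can configure and drive it over /var/tmp/bdevperf.sock. Condensed, with paths abbreviated:

  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 &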
00:24:08.835 18:19:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:08.835 18:19:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:09.093 18:19:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:09.094 18:19:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:24:09.094 18:19:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:24:09.351 18:19:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:24:09.609 Nvme0n1 00:24:09.609 18:19:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:24:09.867 Nvme0n1 00:24:09.867 18:19:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:24:09.867 18:19:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:24:12.482 18:19:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:24:12.482 18:19:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:24:12.482 18:19:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:12.482 18:19:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:24:13.415 18:19:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:24:13.415 18:19:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:13.415 18:19:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:13.415 18:19:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:13.674 18:19:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:13.674 18:19:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:13.674 18:19:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
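Note: both attach calls use the same controller name Nvme0, and the second adds -x multipath, so bdev_nvme merges the 4420 and 4421 connections into a single Nvme0n1 with two I/O paths; perform_tests then keeps verify I/O running for the whole ANA exercise. Condensed from the trace:

  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 \
      -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 \
      -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
  examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests &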
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:13.674 18:19:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:13.674 18:19:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:13.674 18:19:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:13.674 18:19:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:13.674 18:19:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:13.932 18:19:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:13.932 18:19:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:13.932 18:19:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:13.932 18:19:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:14.190 18:19:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:14.190 18:19:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:14.190 18:19:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:14.190 18:19:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:14.448 18:19:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:14.448 18:19:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:14.448 18:19:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:14.448 18:19:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:14.448 18:19:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:14.448 18:19:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:24:14.448 18:19:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:14.706 18:19:07 
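Note: every check from here on is the same probe. port_status <trsvcid> <field> <expected> runs bdev_nvme_get_io_paths against the bdevperf socket and extracts one field of one path with jq; check_status takes six booleans in the fixed order current(4420), current(4421), connected(4420), connected(4421), accessible(4420), accessible(4421). The probe, condensed:

  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
    | jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'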
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:14.964 18:19:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:24:15.898 18:19:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:24:15.898 18:19:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:15.898 18:19:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:15.898 18:19:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:16.156 18:19:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:16.156 18:19:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:16.156 18:19:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:16.156 18:19:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:16.156 18:19:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:16.156 18:19:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:16.156 18:19:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:16.156 18:19:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:16.414 18:19:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:16.414 18:19:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:16.414 18:19:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:16.414 18:19:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:16.672 18:19:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:16.672 18:19:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:16.672 18:19:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:16.672 18:19:09 
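Note: the @94/@96 round is the first failover: demoting 4420 to non_optimized while 4421 is optimized makes the host move the current path from 4420 to 4421 (check_status false true true true true true) without dropping either connection; under the default active_passive policy exactly one path is current at a time.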
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:16.931 18:19:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:16.931 18:19:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:16.931 18:19:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:16.931 18:19:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:16.931 18:19:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:16.931 18:19:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:24:16.931 18:19:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:17.189 18:19:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:17.447 18:19:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:24:18.380 18:19:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:24:18.380 18:19:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:18.380 18:19:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:18.380 18:19:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:18.638 18:19:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:18.638 18:19:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:18.638 18:19:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:18.638 18:19:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:18.896 18:19:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:18.896 18:19:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:18.896 18:19:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
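Note: @100 then levels both listeners at non_optimized. With no optimized path on offer, @102 shows the selection settling back on the first attached path, 4420 (true false true true true true), while both connections stay up.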
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:18.896 18:19:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:18.896 18:19:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:18.896 18:19:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:18.896 18:19:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:18.896 18:19:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:19.153 18:19:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:19.153 18:19:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:19.153 18:19:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:19.153 18:19:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:19.411 18:19:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:19.411 18:19:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:19.411 18:19:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:19.411 18:19:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:19.411 18:19:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:19.411 18:19:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:24:19.411 18:19:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:19.669 18:19:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:19.927 18:19:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:24:20.866 18:19:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:24:20.866 18:19:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:20.866 18:19:13 
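Note: @104 keeps 4420 non_optimized but makes 4421 ANA-inaccessible. @106's expectation, true false true true true false, separates the two path attributes this test distinguishes: connected (the TCP connection to 4421 is still established) versus accessible (the controller no longer offers that path for I/O).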
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:20.866 18:19:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:21.124 18:19:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:21.124 18:19:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:21.124 18:19:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:21.124 18:19:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:21.382 18:19:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:21.382 18:19:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:21.382 18:19:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:21.382 18:19:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:21.382 18:19:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:21.382 18:19:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:21.382 18:19:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:21.382 18:19:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:21.640 18:19:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:21.640 18:19:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:21.640 18:19:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:21.641 18:19:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:21.898 18:19:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:21.898 18:19:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:21.898 18:19:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:21.898 18:19:14 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:21.898 18:19:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:21.899 18:19:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:24:21.899 18:19:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:22.157 18:19:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:22.415 18:19:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:24:23.349 18:19:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:24:23.349 18:19:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:23.349 18:19:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:23.349 18:19:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:23.607 18:19:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:23.607 18:19:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:23.607 18:19:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:23.607 18:19:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:23.865 18:19:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:23.865 18:19:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:23.865 18:19:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:23.865 18:19:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:23.865 18:19:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:23.865 18:19:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:23.865 18:19:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
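Note: @108 makes both listeners inaccessible, and @110 expects false false true true false false - no current path and nothing accessible, yet both connections remain established, so I/O can only resume once @112 restores a usable path by flipping 4421 back to optimized.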
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:23.865 18:19:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:24.123 18:19:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:24.124 18:19:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:24.124 18:19:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:24.124 18:19:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:24.124 18:19:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:24.381 18:19:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:24.381 18:19:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:24.381 18:19:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:24.381 18:19:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:24.381 18:19:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:24:24.381 18:19:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:24.639 18:19:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:24.897 18:19:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:24:25.830 18:19:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:24:25.830 18:19:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:25.830 18:19:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:25.830 18:19:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:26.088 18:19:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:26.088 18:19:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:26.088 18:19:18 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:26.088 18:19:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:26.088 18:19:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:26.088 18:19:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:26.088 18:19:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:26.088 18:19:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:26.346 18:19:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:26.346 18:19:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:26.346 18:19:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:26.346 18:19:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:26.604 18:19:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:26.604 18:19:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:26.604 18:19:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:26.604 18:19:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:26.604 18:19:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:26.604 18:19:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:26.604 18:19:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:26.604 18:19:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:26.863 18:19:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:26.863 18:19:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:24:27.121 18:19:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:24:27.121 18:19:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:24:27.378 18:19:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:27.378 18:19:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:24:28.753 18:19:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:24:28.753 18:19:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:28.753 18:19:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:28.753 18:19:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:28.753 18:19:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:28.753 18:19:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:28.753 18:19:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:28.753 18:19:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:28.753 18:19:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:28.753 18:19:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:28.753 18:19:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:28.753 18:19:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:29.011 18:19:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:29.011 18:19:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:29.011 18:19:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:29.011 18:19:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:29.269 18:19:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:29.269 18:19:22 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:29.269 18:19:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:29.269 18:19:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:29.527 18:19:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:29.527 18:19:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:29.527 18:19:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:29.527 18:19:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:29.527 18:19:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:29.527 18:19:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:24:29.527 18:19:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:29.785 18:19:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:30.043 18:19:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:24:30.977 18:19:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:24:30.977 18:19:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:30.977 18:19:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:30.977 18:19:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:31.235 18:19:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:31.235 18:19:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:31.235 18:19:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:31.235 18:19:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:31.235 18:19:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:31.235 18:19:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:31.493 18:19:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:31.493 18:19:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:31.493 18:19:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:31.493 18:19:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:31.493 18:19:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:31.493 18:19:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:31.751 18:19:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:31.751 18:19:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:31.751 18:19:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:31.751 18:19:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:32.009 18:19:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:32.010 18:19:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:32.010 18:19:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:32.010 18:19:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:32.010 18:19:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:32.010 18:19:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:24:32.010 18:19:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:32.268 18:19:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:32.526 18:19:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
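Every check_status round above expands to the same three probes per port: port_status asks the bdevperf app, over its RPC socket, for bdev_nvme_get_io_paths, filters the JSON with jq on the listener's trsvcid, and string-compares the selected field against the expected value, while set_ANA_state drives the target side. A minimal sketch of those helpers as reconstructed from this trace (the real multipath_status.sh may differ in detail):

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
bdevperf_rpc_sock=/var/tmp/bdevperf.sock

set_ANA_state() {
    # $1/$2: ANA state for the 4420/4421 listeners (optimized,
    # non_optimized, inaccessible), applied on the target side.
    $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -n $1
    $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4421 -n $2
}

port_status() {
    # $1: listener port, $2: io_path attribute (current|connected|accessible),
    # $3: expected value; queried on the initiator via the bdevperf RPC socket.
    local status
    status=$($rpc_py -s $bdevperf_rpc_sock bdev_nvme_get_io_paths \
        | jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2")
    [[ "$status" == "$3" ]]
}

check_status() {
    # Expected current/connected/accessible values for ports 4420 and 4421.
    port_status 4420 current $1 && port_status 4421 current $2 &&
    port_status 4420 connected $3 && port_status 4421 connected $4 &&
    port_status 4420 accessible $5 && port_status 4421 accessible $6
}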
00:24:33.459 18:19:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:24:33.459 18:19:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:33.459 18:19:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:33.459 18:19:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:33.748 18:19:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:33.748 18:19:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:33.748 18:19:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:33.748 18:19:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:33.748 18:19:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:33.748 18:19:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:33.748 18:19:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:33.748 18:19:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:34.029 18:19:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:34.029 18:19:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:34.029 18:19:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:34.029 18:19:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:34.029 18:19:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:34.030 18:19:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:34.030 18:19:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:34.030 18:19:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:34.287 18:19:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:34.287 18:19:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:34.287 18:19:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:34.288 18:19:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:34.546 18:19:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:34.546 18:19:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:24:34.546 18:19:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:34.804 18:19:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:34.804 18:19:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:24:36.180 18:19:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:24:36.181 18:19:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:36.181 18:19:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:36.181 18:19:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:36.181 18:19:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:36.181 18:19:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:36.181 18:19:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:36.181 18:19:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:36.181 18:19:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:36.181 18:19:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:36.181 18:19:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:36.181 18:19:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:36.439 18:19:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:24:36.439 18:19:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:36.439 18:19:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:36.439 18:19:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:36.696 18:19:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:36.696 18:19:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:36.696 18:19:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:36.696 18:19:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:36.954 18:19:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:36.954 18:19:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:36.954 18:19:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:36.954 18:19:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:36.954 18:19:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:36.954 18:19:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3513724 00:24:36.954 18:19:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 3513724 ']' 00:24:36.955 18:19:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 3513724 00:24:36.955 18:19:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:24:36.955 18:19:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:36.955 18:19:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3513724 00:24:36.955 18:19:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:24:36.955 18:19:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:24:36.955 18:19:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3513724' 00:24:36.955 killing process with pid 3513724 00:24:36.955 18:19:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 3513724 00:24:36.955 18:19:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 3513724 00:24:37.215 Connection closed with partial response: 00:24:37.215 00:24:37.215 00:24:37.215 
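The killprocess frames above (autotest_common.sh@950-@974) that tear bdevperf down follow a fixed sequence: reject an empty pid, probe the process with kill -0, resolve its command name with ps so a sudo wrapper is never signalled directly, then kill and wait. A rough sketch under those assumptions; the sudo branch is not exercised in this run, where the name resolves to reactor_2:

killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1             # @950: pid argument required
    kill -0 "$pid" || return 1            # @954: process must still be alive
    local process_name
    if [ "$(uname)" = Linux ]; then       # @955
        process_name=$(ps --no-headers -o comm= "$pid")   # @956
    fi
    if [ "$process_name" = sudo ]; then   # @960: assumed to special-case the
        :                                 # sudo wrapper upstream; not hit here
    fi
    echo "killing process with pid $pid"  # @968
    kill "$pid"                           # @969
    wait "$pid"                           # @974: collect the exit status
}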
18:19:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3513724 00:24:37.215 18:19:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:37.215 [2024-07-24 18:19:01.848218] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 00:24:37.215 [2024-07-24 18:19:01.848270] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3513724 ] 00:24:37.215 EAL: No free 2048 kB hugepages reported on node 1 00:24:37.215 [2024-07-24 18:19:01.897818] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:37.215 [2024-07-24 18:19:01.971407] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:37.215 Running I/O for 90 seconds... 00:24:37.215 [2024-07-24 18:19:15.138955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:60488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.215 [2024-07-24 18:19:15.138992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:37.215 [2024-07-24 18:19:15.140017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:60496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.215 [2024-07-24 18:19:15.140036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:37.215 [2024-07-24 18:19:15.140053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:60504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.215 [2024-07-24 18:19:15.140061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:37.215 [2024-07-24 18:19:15.140077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:60512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.215 [2024-07-24 18:19:15.140084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:37.215 [2024-07-24 18:19:15.140099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:60520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.215 [2024-07-24 18:19:15.140106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:37.215 [2024-07-24 18:19:15.140120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:60528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.215 [2024-07-24 18:19:15.140127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:37.215 [2024-07-24 18:19:15.140141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:60536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.215 [2024-07-24 18:19:15.140147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 
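Each record in the try.txt dump that follows pairs an I/O submission print (nvme_qpair.c:243) with its completion print (nvme_qpair.c:474). The status "ASYMMETRIC ACCESS INACCESSIBLE (03/02)" is NVMe status code type 0x3 (path-related) with status code 0x02, which is what the initiator should see for I/O queued to a listener whose ANA state was flipped to inaccessible, leaving the bdev_nvme layer to retry on the other path. For a quick sanity check of a dump like this, one could simply count those completions:

grep -c 'ASYMMETRIC ACCESS INACCESSIBLE' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt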
00:24:37.215 [2024-07-24 18:19:15.140161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:60544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.215 [2024-07-24 18:19:15.140168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:37.215 [2024-07-24 18:19:15.140182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:60552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.215 [2024-07-24 18:19:15.140188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:37.215 [2024-07-24 18:19:15.140202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:60560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.215 [2024-07-24 18:19:15.140209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:37.215 [2024-07-24 18:19:15.140223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:60568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.215 [2024-07-24 18:19:15.140234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:37.215 [2024-07-24 18:19:15.140248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:60576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.215 [2024-07-24 18:19:15.140255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:37.215 [2024-07-24 18:19:15.140268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:60584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.215 [2024-07-24 18:19:15.140275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:37.215 [2024-07-24 18:19:15.140289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:60592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.215 [2024-07-24 18:19:15.140295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:37.215 [2024-07-24 18:19:15.140309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:60600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.216 [2024-07-24 18:19:15.140315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:37.216 [2024-07-24 18:19:15.140329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:60608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.216 [2024-07-24 18:19:15.140336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:37.216 [2024-07-24 18:19:15.140349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:60616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.216 [2024-07-24 18:19:15.140355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:27 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:37.216 [2024-07-24 18:19:15.140369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:60624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.216 [2024-07-24 18:19:15.140375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:37.216 [2024-07-24 18:19:15.140389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:60632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.216 [2024-07-24 18:19:15.140396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:37.216 [2024-07-24 18:19:15.140409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:60640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.216 [2024-07-24 18:19:15.140415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:37.216 [2024-07-24 18:19:15.140429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:60648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.216 [2024-07-24 18:19:15.140436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:37.216 [2024-07-24 18:19:15.140449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:60656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.216 [2024-07-24 18:19:15.140456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:37.216 [2024-07-24 18:19:15.140469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:60664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.216 [2024-07-24 18:19:15.140476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:37.216 [2024-07-24 18:19:15.140496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:60672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.216 [2024-07-24 18:19:15.140504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:37.216 [2024-07-24 18:19:15.140517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:60680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.216 [2024-07-24 18:19:15.140524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:37.216 [2024-07-24 18:19:15.140537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:60688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.216 [2024-07-24 18:19:15.140544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:37.216 [2024-07-24 18:19:15.140558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:60696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.216 [2024-07-24 18:19:15.140565] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:37.216 [2024-07-24 18:19:15.140582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:60704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.216 [2024-07-24 18:19:15.140589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:37.216 [2024-07-24 18:19:15.140603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:60712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.216 [2024-07-24 18:19:15.140609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:37.216 [2024-07-24 18:19:15.140623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:60720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.216 [2024-07-24 18:19:15.140630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:37.216 [2024-07-24 18:19:15.140644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:60728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.216 [2024-07-24 18:19:15.140650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:37.216 [2024-07-24 18:19:15.140664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:60736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.216 [2024-07-24 18:19:15.140670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:37.216 [2024-07-24 18:19:15.140684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:60744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.216 [2024-07-24 18:19:15.140690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:37.216 [2024-07-24 18:19:15.140704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:60752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.216 [2024-07-24 18:19:15.140710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:37.216 [2024-07-24 18:19:15.140724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:60760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.216 [2024-07-24 18:19:15.140730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:37.216 [2024-07-24 18:19:15.140749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:60768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.216 [2024-07-24 18:19:15.140756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:37.216 [2024-07-24 18:19:15.140769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:60776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:37.216 [2024-07-24 18:19:15.140776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:37.216 [2024-07-24 18:19:15.140790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:60784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.216 [2024-07-24 18:19:15.140796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:37.216 [2024-07-24 18:19:15.140810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:60792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.216 [2024-07-24 18:19:15.140817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:37.216 [2024-07-24 18:19:15.140831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:60800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.216 [2024-07-24 18:19:15.140837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:37.216 [2024-07-24 18:19:15.140851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:60808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.216 [2024-07-24 18:19:15.140858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:37.216 [2024-07-24 18:19:15.140871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:60816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.216 [2024-07-24 18:19:15.140878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:37.216 [2024-07-24 18:19:15.140891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:60824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.216 [2024-07-24 18:19:15.140899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:37.216 [2024-07-24 18:19:15.140912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:60832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.216 [2024-07-24 18:19:15.140919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:37.216 [2024-07-24 18:19:15.140933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:60840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.216 [2024-07-24 18:19:15.140939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:37.216 [2024-07-24 18:19:15.140953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:60848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.216 [2024-07-24 18:19:15.140959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:37.216 [2024-07-24 18:19:15.140973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 
lba:60856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.216 [2024-07-24 18:19:15.140979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:37.216 [2024-07-24 18:19:15.140993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:60864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.216 [2024-07-24 18:19:15.141001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:37.216 [2024-07-24 18:19:15.141015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:60872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.216 [2024-07-24 18:19:15.141021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:37.216 [2024-07-24 18:19:15.141035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:60880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.216 [2024-07-24 18:19:15.141042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:37.216 [2024-07-24 18:19:15.141055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:60888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.216 [2024-07-24 18:19:15.141062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:37.216 [2024-07-24 18:19:15.141075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:60896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.216 [2024-07-24 18:19:15.141082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:37.216 [2024-07-24 18:19:15.141096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:60904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.217 [2024-07-24 18:19:15.141102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:37.217 [2024-07-24 18:19:15.141116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:60912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.217 [2024-07-24 18:19:15.141123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:37.217 [2024-07-24 18:19:15.141136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:60920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.217 [2024-07-24 18:19:15.141143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:37.217 [2024-07-24 18:19:15.141156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:60928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.217 [2024-07-24 18:19:15.141164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:37.217 [2024-07-24 18:19:15.141178] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:60936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.217 [2024-07-24 18:19:15.141184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:37.217 [2024-07-24 18:19:15.141198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:60944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.217 [2024-07-24 18:19:15.141205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:37.217 [2024-07-24 18:19:15.141319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:60360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.217 [2024-07-24 18:19:15.141327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:37.217 [2024-07-24 18:19:15.141345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:60952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.217 [2024-07-24 18:19:15.141354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:37.217 [2024-07-24 18:19:15.141371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:60960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.217 [2024-07-24 18:19:15.141378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:37.217 [2024-07-24 18:19:15.141395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:60968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.217 [2024-07-24 18:19:15.141402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:37.217 [2024-07-24 18:19:15.141419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:60976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.217 [2024-07-24 18:19:15.141426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:37.217 [2024-07-24 18:19:15.141443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:60984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.217 [2024-07-24 18:19:15.141449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:37.217 [2024-07-24 18:19:15.141467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:60992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.217 [2024-07-24 18:19:15.141474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:37.217 [2024-07-24 18:19:15.141495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:61000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.217 [2024-07-24 18:19:15.141502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:000c p:0 m:0 dnr:0 
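From here the dump mixes WRITE submissions (printed with SGL DATA BLOCK OFFSET descriptors) and READ submissions (printed with SGL TRANSPORT DATA BLOCK descriptors) on the same queue, all completing with the same path-related status while the path is inaccessible. To break such a dump down by opcode one could tally the submission prints, e.g. (assuming the try.txt format shown above):

grep -Eo ' (WRITE|READ) sqid:1' try.txt | sort | uniq -c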
00:24:37.217 [2024-07-24 18:19:15.141519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:61008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.217 [2024-07-24 18:19:15.141526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:37.217 [2024-07-24 18:19:15.141543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:61016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.217 [2024-07-24 18:19:15.141550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:37.217 [2024-07-24 18:19:15.141567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:61024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.217 [2024-07-24 18:19:15.141574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:37.217 [2024-07-24 18:19:15.141591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:61032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.217 [2024-07-24 18:19:15.141598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:37.217 [2024-07-24 18:19:15.141615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:61040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.217 [2024-07-24 18:19:15.141622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:37.217 [2024-07-24 18:19:15.141639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:61048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.217 [2024-07-24 18:19:15.141646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:37.217 [2024-07-24 18:19:15.141665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:61056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.217 [2024-07-24 18:19:15.141672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:37.217 [2024-07-24 18:19:15.141689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:61064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.217 [2024-07-24 18:19:15.141696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:37.217 [2024-07-24 18:19:15.141713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:61072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.217 [2024-07-24 18:19:15.141719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:37.217 [2024-07-24 18:19:15.141736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:61080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.217 [2024-07-24 18:19:15.141743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:53 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:37.217 [2024-07-24 18:19:15.141760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:61088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.217 [2024-07-24 18:19:15.141767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:37.217 [2024-07-24 18:19:15.141784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:61096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.217 [2024-07-24 18:19:15.141791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:37.217 [2024-07-24 18:19:15.141808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:61104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.217 [2024-07-24 18:19:15.141815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:37.217 [2024-07-24 18:19:15.141832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:61112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.217 [2024-07-24 18:19:15.141839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:37.217 [2024-07-24 18:19:15.141856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:61120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.217 [2024-07-24 18:19:15.141863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:37.217 [2024-07-24 18:19:15.141879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:61128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.217 [2024-07-24 18:19:15.141886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:37.217 [2024-07-24 18:19:15.141903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:61136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.217 [2024-07-24 18:19:15.141910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:37.217 [2024-07-24 18:19:15.141927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:61144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.217 [2024-07-24 18:19:15.141934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:37.217 [2024-07-24 18:19:15.141952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:61152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.217 [2024-07-24 18:19:15.141959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:37.217 [2024-07-24 18:19:15.141976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:61160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:37.217 [2024-07-24 18:19:15.141983] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:37.217 [2024-07-24 18:19:15.142000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:60368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.217 [2024-07-24 18:19:15.142007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:37.217 [2024-07-24 18:19:15.142024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:60376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.217 [2024-07-24 18:19:15.142031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:37.217 [2024-07-24 18:19:15.142048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:60384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.217 [2024-07-24 18:19:15.142055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:37.217 [2024-07-24 18:19:15.142072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:60392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.217 [2024-07-24 18:19:15.142079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:37.217 [2024-07-24 18:19:15.142096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:60400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.217 [2024-07-24 18:19:15.142102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:37.218 [2024-07-24 18:19:15.142119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:60408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.218 [2024-07-24 18:19:15.142126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:37.218 [2024-07-24 18:19:15.142143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:60416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.218 [2024-07-24 18:19:15.142149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:37.218 [2024-07-24 18:19:15.142167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:60424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.218 [2024-07-24 18:19:15.142174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:37.218 [2024-07-24 18:19:15.142191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:60432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.218 [2024-07-24 18:19:15.142198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:37.218 [2024-07-24 18:19:15.142215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:60440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:24:37.218 [2024-07-24 18:19:15.142222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:37.218 [2024-07-24 18:19:15.142239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:60448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.218 [2024-07-24 18:19:15.142247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:37.218 [2024-07-24 18:19:15.142264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:60456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.218 [2024-07-24 18:19:15.142271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:37.218 [2024-07-24 18:19:15.142288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:60464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.218 [2024-07-24 18:19:15.142295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:37.218 [2024-07-24 18:19:15.142312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:60472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.218 [2024-07-24 18:19:15.142319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:37.218 [2024-07-24 18:19:15.142336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:60480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.218 [2024-07-24 18:19:15.142343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:37.218 [2024-07-24 18:19:27.847958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:107744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.218 [2024-07-24 18:19:27.847999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:37.218 [2024-07-24 18:19:27.848046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:107568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.218 [2024-07-24 18:19:27.848054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:37.218 [2024-07-24 18:19:27.848068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:107600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.218 [2024-07-24 18:19:27.848075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:37.218 [2024-07-24 18:19:27.848087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:107632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:37.218 [2024-07-24 18:19:27.848094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:37.218 [2024-07-24 18:19:27.848106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:75 nsid:1 lba:107664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:37.218 [2024-07-24 18:19:27.848113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:24:37.218 [2024-07-24 18:19:27.848125 - 18:19:27.850689] nvme_qpair.c: repeated *NOTICE* pairs from 243:nvme_io_qpair_print_command and 474:spdk_nvme_print_completion: READ and WRITE commands sqid:1 nsid:1 lba:107696-108344 len:8, each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) cdw0:0 sqhd:0062-007f and 0000-0009 p:0 m:0 dnr:0 (near-identical per-command entries collapsed)
00:24:37.219 Received shutdown signal, test time was about 26.931709 seconds
00:24:37.219
00:24:37.219                                                                         Latency(us)
00:24:37.219 Device Information                     : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:24:37.219 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:24:37.219 Verification LBA range: start 0x0 length 0x4000
00:24:37.219      Nvme0n1                           :      26.93   10521.84      41.10       0.00       0.00   12144.73     807.50 3019898.88
00:24:37.219 ===================================================================================================================
00:24:37.219      Total                             :   10521.84      41.10       0.00       0.00   12144.73     807.50 3019898.88
00:24:37.219 18:19:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:24:37.477 18:19:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:24:37.477 18:19:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:24:37.478 18:19:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:24:37.478 18:19:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup
00:24:37.478 18:19:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync
00:24:37.478 18:19:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:24:37.478 18:19:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e
00:24:37.478 18:19:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20}
00:24:37.478 18:19:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:24:37.478 rmmod nvme_tcp
00:24:37.478 rmmod nvme_fabrics
00:24:37.478 rmmod nvme_keyring
00:24:37.478 18:19:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:24:37.478 18:19:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e
00:24:37.478 18:19:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0
00:24:37.478 18:19:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 3513463 ']'
00:24:37.478 18:19:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 3513463
00:24:37.478 18:19:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 3513463 ']'
00:24:37.478 18:19:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 3513463
00:24:37.478 18:19:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname
00:24:37.478 18:19:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status --
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:37.478 18:19:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3513463 00:24:37.478 18:19:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:37.478 18:19:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:37.478 18:19:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3513463' 00:24:37.478 killing process with pid 3513463 00:24:37.478 18:19:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 3513463 00:24:37.478 18:19:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 3513463 00:24:37.736 18:19:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:37.736 18:19:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:37.736 18:19:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:37.736 18:19:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:37.736 18:19:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:37.736 18:19:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:37.736 18:19:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:37.736 18:19:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:40.267 18:19:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:40.267 00:24:40.267 real 0m38.187s 00:24:40.267 user 1m43.376s 00:24:40.267 sys 0m10.209s 00:24:40.267 18:19:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:40.267 18:19:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:40.267 ************************************ 00:24:40.267 END TEST nvmf_host_multipath_status 00:24:40.267 ************************************ 00:24:40.267 18:19:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:24:40.267 18:19:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:40.267 18:19:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:40.267 18:19:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.267 ************************************ 00:24:40.267 START TEST nvmf_discovery_remove_ifc 00:24:40.267 ************************************ 00:24:40.267 18:19:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:24:40.267 * Looking for test storage... 
00:24:40.267 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:40.267 18:19:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:40.267 18:19:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:24:40.267 18:19:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:40.267 18:19:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:40.267 18:19:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:40.267 18:19:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:40.267 18:19:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:40.267 18:19:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:40.267 18:19:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:40.267 18:19:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:40.267 18:19:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:40.267 18:19:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:40.267 18:19:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:24:40.267 18:19:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:24:40.267 18:19:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:40.267 18:19:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:40.267 18:19:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:40.267 18:19:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:40.267 18:19:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:40.267 18:19:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:40.267 18:19:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:40.267 18:19:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:40.267 18:19:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.267 18:19:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.267 18:19:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.267 18:19:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:24:40.267 18:19:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.267 18:19:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:24:40.267 18:19:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:40.267 18:19:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:40.267 18:19:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:40.267 18:19:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:40.267 18:19:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:40.267 18:19:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 
00:24:40.267 18:19:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:40.267 18:19:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:40.267 18:19:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:24:40.267 18:19:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:24:40.267 18:19:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:24:40.268 18:19:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:24:40.268 18:19:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:24:40.268 18:19:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:24:40.268 18:19:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:24:40.268 18:19:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:40.268 18:19:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:40.268 18:19:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:40.268 18:19:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:40.268 18:19:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:40.268 18:19:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:40.268 18:19:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:40.268 18:19:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:40.268 18:19:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:40.268 18:19:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:40.268 18:19:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:24:40.268 18:19:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:45.530 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:45.530 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:24:45.530 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:45.530 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:45.530 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:45.530 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:45.530 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:45.530 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:24:45.530 18:19:38 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:45.530 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:24:45.530 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:24:45.530 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:24:45.530 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:24:45.530 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:24:45.530 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:24:45.530 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:45.530 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:45.530 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:45.530 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:45.530 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:45.530 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:45.530 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:45.530 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:45.530 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:45.530 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:45.530 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:45.530 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:45.530 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:45.530 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:45.530 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:45.530 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:45.531 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:45.531 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:45.531 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:45.531 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:45.531 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:45.531 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:45.531 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:24:45.531 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:45.531 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:45.531 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:45.531 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:45.531 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:45.531 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:45.531 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:45.531 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:45.531 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:45.531 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:45.531 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:45.531 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:45.531 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:45.531 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:45.531 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:45.531 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:45.531 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:45.531 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:45.531 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:45.531 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:45.531 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:45.531 Found net devices under 0000:86:00.0: cvl_0_0 00:24:45.531 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:45.531 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:45.531 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:45.531 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:45.531 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:45.531 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:45.531 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:45.531 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:45.531 
18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:45.531 Found net devices under 0000:86:00.1: cvl_0_1 00:24:45.531 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:45.531 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:45.531 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:24:45.531 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:45.531 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:45.531 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:45.531 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:45.531 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:45.531 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:45.531 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:45.531 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:45.531 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:45.531 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:45.531 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:45.531 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:45.531 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:45.531 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:45.531 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:45.531 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:45.531 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:45.531 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:45.531 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:45.531 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:45.531 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:45.531 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:45.531 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:45.531 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:45.531 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.276 ms 00:24:45.531 00:24:45.531 --- 10.0.0.2 ping statistics --- 00:24:45.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:45.531 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:24:45.531 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:45.531 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:45.531 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms 00:24:45.531 00:24:45.531 --- 10.0.0.1 ping statistics --- 00:24:45.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:45.531 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:24:45.531 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:45.531 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:24:45.531 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:45.531 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:45.531 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:45.531 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:45.531 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:45.531 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:45.531 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:45.531 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:24:45.531 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:45.531 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:45.531 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:45.531 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=3522024 00:24:45.531 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 3522024 00:24:45.531 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 3522024 ']' 00:24:45.531 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:45.531 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:45.531 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:45.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
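The trace above, minus the xtrace noise, is how the suite brings up its TCP test bed: the first E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), reachability is checked with the two pings whose statistics appear here, and nvmfappstart then launches the target inside that namespace. A minimal sketch of the launch-and-wait step follows; the rpc.py poll is a stand-in for the suite's waitforlisten helper (using rpc_get_methods as a liveness probe is an assumption for illustration, not the helper's verbatim logic):

    # Start the SPDK target in the server-side namespace (arguments as traced above).
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    # Block until the app answers on its default UNIX-domain RPC socket.
    until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done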
00:24:45.531 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:45.531 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:45.531 18:19:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:45.531 [2024-07-24 18:19:38.602700] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 00:24:45.531 [2024-07-24 18:19:38.602741] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:45.790 EAL: No free 2048 kB hugepages reported on node 1 00:24:45.790 [2024-07-24 18:19:38.660988] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:45.790 [2024-07-24 18:19:38.738696] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:45.790 [2024-07-24 18:19:38.738729] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:45.790 [2024-07-24 18:19:38.738738] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:45.790 [2024-07-24 18:19:38.738744] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:45.790 [2024-07-24 18:19:38.738748] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:45.790 [2024-07-24 18:19:38.738766] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:46.356 18:19:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:46.356 18:19:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:24:46.356 18:19:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:46.356 18:19:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:46.356 18:19:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:46.356 18:19:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:46.356 18:19:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:24:46.356 18:19:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:46.356 18:19:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:46.614 [2024-07-24 18:19:39.440880] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:46.614 [2024-07-24 18:19:39.449004] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:46.614 null0 00:24:46.614 [2024-07-24 18:19:39.481023] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:46.614 18:19:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:46.614 18:19:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=3522197 00:24:46.614 18:19:39 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3522197 /tmp/host.sock 00:24:46.614 18:19:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 3522197 ']' 00:24:46.614 18:19:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:24:46.614 18:19:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:46.614 18:19:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:46.614 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:46.614 18:19:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:46.614 18:19:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:24:46.614 18:19:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:46.614 [2024-07-24 18:19:39.547460] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 00:24:46.614 [2024-07-24 18:19:39.547506] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3522197 ] 00:24:46.614 EAL: No free 2048 kB hugepages reported on node 1 00:24:46.614 [2024-07-24 18:19:39.601895] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:46.614 [2024-07-24 18:19:39.681411] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:47.548 18:19:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:47.548 18:19:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:24:47.548 18:19:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:47.548 18:19:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:24:47.548 18:19:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.548 18:19:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:47.548 18:19:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.548 18:19:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:24:47.548 18:19:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.548 18:19:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:47.548 18:19:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.548 18:19:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock 
bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:24:47.548 18:19:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.548 18:19:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:48.482 [2024-07-24 18:19:41.488989] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:48.482 [2024-07-24 18:19:41.489008] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:48.482 [2024-07-24 18:19:41.489019] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:48.740 [2024-07-24 18:19:41.615411] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:24:48.740 [2024-07-24 18:19:41.792652] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:48.740 [2024-07-24 18:19:41.792693] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:48.740 [2024-07-24 18:19:41.792712] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:48.740 [2024-07-24 18:19:41.792725] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:48.740 [2024-07-24 18:19:41.792741] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:48.740 18:19:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:48.740 18:19:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:24:48.740 18:19:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:48.740 18:19:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:48.740 18:19:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:48.740 [2024-07-24 18:19:41.798335] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xe41e60 was disconnected and freed. delete nvme_qpair. 
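Dropping the xtrace prefixes, the host side above reduces to three RPCs against the second app's socket (the app was started with -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme). rpc_cmd in the suite is a wrapper around scripts/rpc.py, so an equivalent hand-run sequence, with every flag copied from the trace, would be:

    rpc='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /tmp/host.sock'
    $rpc bdev_nvme_set_options -e 1        # set bdev_nvme options while the app idles in --wait-for-rpc
    $rpc framework_start_init              # let subsystem initialization proceed
    $rpc bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 \
        --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach

With --wait-for-attach, the last call returns only once the discovered subsystem (nqn.2016-06.io.spdk:cnode0 here) is attached and its namespace is exposed as bdev nvme0n1, which is what the discovery_attach_cb and discovery_attach_controller_done lines above record.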
00:24:48.740 18:19:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:48.740 18:19:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:48.740 18:19:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:48.740 18:19:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:48.740 18:19:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:48.998 18:19:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:24:48.998 18:19:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:24:48.998 18:19:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:24:48.998 18:19:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:24:48.998 18:19:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:48.998 18:19:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:48.998 18:19:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:48.998 18:19:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:48.998 18:19:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:48.998 18:19:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:48.998 18:19:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:48.998 18:19:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:48.998 18:19:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:48.998 18:19:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:49.932 18:19:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:49.932 18:19:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:49.932 18:19:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:49.932 18:19:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:49.932 18:19:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:49.932 18:19:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:49.932 18:19:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:50.190 18:19:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:50.190 18:19:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:50.190 18:19:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:51.126 18:19:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:51.126 18:19:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:51.126 18:19:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:51.126 18:19:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:51.126 18:19:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:51.126 18:19:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:51.126 18:19:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:51.126 18:19:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:51.126 18:19:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:51.126 18:19:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:52.061 18:19:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:52.061 18:19:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:52.061 18:19:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:52.061 18:19:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.061 18:19:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:52.061 18:19:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:52.061 18:19:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:52.061 18:19:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.319 18:19:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:52.319 18:19:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:53.255 18:19:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:53.255 18:19:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:53.255 18:19:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:53.255 18:19:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:53.255 18:19:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:53.255 18:19:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:53.255 18:19:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:53.255 18:19:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:53.255 18:19:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' 
]] 00:24:53.255 18:19:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:54.191 18:19:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:54.191 18:19:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:54.191 18:19:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:54.191 18:19:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:54.191 18:19:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:54.191 18:19:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:54.191 18:19:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:54.191 18:19:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:54.192 [2024-07-24 18:19:47.234076] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:24:54.192 [2024-07-24 18:19:47.234110] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:54.192 [2024-07-24 18:19:47.234120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.192 [2024-07-24 18:19:47.234128] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:54.192 [2024-07-24 18:19:47.234135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.192 [2024-07-24 18:19:47.234142] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:54.192 [2024-07-24 18:19:47.234148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.192 [2024-07-24 18:19:47.234155] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:54.192 [2024-07-24 18:19:47.234161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.192 [2024-07-24 18:19:47.234173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:54.192 [2024-07-24 18:19:47.234180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:54.192 [2024-07-24 18:19:47.234187] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe086b0 is same with the state(5) to be set 00:24:54.192 [2024-07-24 18:19:47.244100] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe086b0 (9): Bad file descriptor 00:24:54.192 [2024-07-24 18:19:47.254139] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:54.192 18:19:47 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:54.192 18:19:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:55.569 18:19:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:55.569 18:19:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:55.569 [2024-07-24 18:19:48.262530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:24:55.569 18:19:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:55.569 [2024-07-24 18:19:48.262581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe086b0 with addr=10.0.0.2, port=4420 00:24:55.569 [2024-07-24 18:19:48.262600] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe086b0 is same with the state(5) to be set 00:24:55.569 [2024-07-24 18:19:48.262632] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe086b0 (9): Bad file descriptor 00:24:55.569 [2024-07-24 18:19:48.262691] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:55.569 [2024-07-24 18:19:48.262718] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:55.569 [2024-07-24 18:19:48.262728] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:55.569 [2024-07-24 18:19:48.262743] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:55.569 [2024-07-24 18:19:48.262763] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:55.569 [2024-07-24 18:19:48.262775] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:55.569 18:19:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:55.569 18:19:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:55.569 18:19:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:55.569 18:19:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:55.569 18:19:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:55.569 18:19:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:55.569 18:19:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:56.505 [2024-07-24 18:19:49.265260] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
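
The loop traced above is the test waiting for the bdev to disappear after the target-side interface is pulled: each pass lists the bdevs over the host app's RPC socket, joins the sorted names, and compares them against the expected value, sleeping one second between passes. A minimal sketch of the two helpers, reconstructed from the xtrace output; the rpc_cmd pipeline is verbatim from the trace, while the surrounding function bodies are assumptions (the authoritative definitions live in test/nvmf/host/discovery_remove_ifc.sh):

    # Sketch reconstructed from the xtrace above; only the rpc_cmd pipeline
    # is verbatim, the function structure is assumed.
    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        # Poll once per second until the bdev list matches the expected
        # string, e.g. wait_for_bdev nvme1n1 (or '' to wait for removal).
        while [[ "$(get_bdev_list)" != "$1" ]]; do
            sleep 1
        done
    }
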
00:24:56.505 [2024-07-24 18:19:49.265285] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:56.505 [2024-07-24 18:19:49.265292] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:56.505 [2024-07-24 18:19:49.265299] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:24:56.505 [2024-07-24 18:19:49.265310] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:56.505 [2024-07-24 18:19:49.265330] bdev_nvme.c:6762:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:24:56.505 [2024-07-24 18:19:49.265352] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:56.505 [2024-07-24 18:19:49.265361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.505 [2024-07-24 18:19:49.265369] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:56.505 [2024-07-24 18:19:49.265375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.505 [2024-07-24 18:19:49.265382] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:56.505 [2024-07-24 18:19:49.265388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.505 [2024-07-24 18:19:49.265395] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:56.505 [2024-07-24 18:19:49.265401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.505 [2024-07-24 18:19:49.265408] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:56.505 [2024-07-24 18:19:49.265413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.505 [2024-07-24 18:19:49.265419] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
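
While the path is down, bdev_nvme keeps driving the disconnect/reset/reconnect cycle seen above: the queued admin commands (the ASYNC EVENT REQUESTs and the KEEP ALIVE) complete with ABORTED - SQ DELETION as each qpair is torn down, and the discovery entry for cnode0 is removed. Nothing in this run inspects that state interactively, but the same RPC socket could be queried by hand; a sketch using a standard SPDK RPC that this log never invokes:

    # Not executed in this run: dump the NVMe controllers known to the host
    # app, including the state of the failing path, over the same socket.
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
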
00:24:56.505 [2024-07-24 18:19:49.265430] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe07a80 (9): Bad file descriptor 00:24:56.505 [2024-07-24 18:19:49.266322] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:24:56.505 [2024-07-24 18:19:49.266332] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:24:56.505 18:19:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:56.505 18:19:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:56.505 18:19:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:56.505 18:19:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.505 18:19:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:56.505 18:19:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:56.505 18:19:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:56.505 18:19:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.505 18:19:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:24:56.505 18:19:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:56.505 18:19:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:56.505 18:19:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:24:56.505 18:19:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:56.505 18:19:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:56.505 18:19:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:56.505 18:19:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:56.505 18:19:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.505 18:19:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:56.505 18:19:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:56.505 18:19:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.505 18:19:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:56.505 18:19:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:57.442 18:19:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:57.442 18:19:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:57.442 18:19:50 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:57.442 18:19:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.442 18:19:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:57.442 18:19:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:57.442 18:19:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:57.442 18:19:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.701 18:19:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:57.701 18:19:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:58.268 [2024-07-24 18:19:51.283897] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:58.268 [2024-07-24 18:19:51.283915] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:58.268 [2024-07-24 18:19:51.283927] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:58.526 [2024-07-24 18:19:51.412324] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:24:58.526 [2024-07-24 18:19:51.515543] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:58.526 [2024-07-24 18:19:51.515576] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:58.526 [2024-07-24 18:19:51.515592] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:58.526 [2024-07-24 18:19:51.515604] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:24:58.526 [2024-07-24 18:19:51.515611] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:58.526 [2024-07-24 18:19:51.523131] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xe0f180 was disconnected and freed. delete nvme_qpair. 
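
Recovery is the mirror image: once the address is re-added and the link brought back up inside the target namespace (the two ip netns exec commands traced at @82/@83 earlier), the discovery poller reconnects to 10.0.0.2:8009, re-reads the log page, and attaches the subsystem as nvme1n1, which the final wait then observes. The restore sequence, with the ip commands verbatim from the trace and wait_for_bdev as sketched earlier:

    # Restore the target-side interface in its namespace, then wait for the
    # discovery service to re-attach the subsystem as nvme1n1.
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    wait_for_bdev nvme1n1
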
00:24:58.526 18:19:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:58.526 18:19:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:58.526 18:19:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:58.526 18:19:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:58.526 18:19:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.526 18:19:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:58.526 18:19:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:58.526 18:19:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.526 18:19:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:24:58.526 18:19:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:24:58.527 18:19:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 3522197 00:24:58.527 18:19:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 3522197 ']' 00:24:58.527 18:19:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 3522197 00:24:58.527 18:19:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:24:58.527 18:19:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:58.527 18:19:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3522197 00:24:58.784 18:19:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:58.784 18:19:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:58.784 18:19:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3522197' 00:24:58.785 killing process with pid 3522197 00:24:58.785 18:19:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 3522197 00:24:58.785 18:19:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 3522197 00:24:58.785 18:19:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:24:58.785 18:19:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:58.785 18:19:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:24:58.785 18:19:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:58.785 18:19:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:24:58.785 18:19:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:58.785 18:19:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:58.785 rmmod nvme_tcp 00:24:58.785 rmmod nvme_fabrics 00:24:58.785 rmmod nvme_keyring 00:24:58.785 18:19:51 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:58.785 18:19:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:24:58.785 18:19:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:24:58.785 18:19:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 3522024 ']' 00:24:58.785 18:19:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 3522024 00:24:58.785 18:19:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 3522024 ']' 00:24:58.785 18:19:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 3522024 00:24:58.785 18:19:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:24:59.042 18:19:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:59.042 18:19:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3522024 00:24:59.042 18:19:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:59.042 18:19:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:59.042 18:19:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3522024' 00:24:59.042 killing process with pid 3522024 00:24:59.042 18:19:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 3522024 00:24:59.042 18:19:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 3522024 00:24:59.042 18:19:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:59.043 18:19:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:59.043 18:19:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:59.043 18:19:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:59.043 18:19:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:59.043 18:19:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:59.043 18:19:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:59.043 18:19:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:01.609 18:19:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:01.609 00:25:01.609 real 0m21.297s 00:25:01.609 user 0m26.779s 00:25:01.609 sys 0m5.515s 00:25:01.609 18:19:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:01.609 18:19:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:01.609 ************************************ 00:25:01.609 END TEST nvmf_discovery_remove_ifc 00:25:01.609 ************************************ 00:25:01.609 18:19:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:01.609 18:19:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:01.609 18:19:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:01.609 18:19:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.609 ************************************ 00:25:01.609 START TEST nvmf_identify_kernel_target 00:25:01.609 ************************************ 00:25:01.609 18:19:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:01.609 * Looking for test storage... 00:25:01.610 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:01.610 18:19:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:01.610 18:19:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:25:01.610 18:19:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:01.610 18:19:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:01.610 18:19:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:01.610 18:19:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:01.610 18:19:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:01.610 18:19:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:01.610 18:19:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:01.610 18:19:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:01.610 18:19:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:01.610 18:19:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:01.610 18:19:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:25:01.610 18:19:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:25:01.610 18:19:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:01.610 18:19:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:01.610 18:19:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:01.610 18:19:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:01.610 18:19:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:01.610 18:19:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:01.610 18:19:54 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:01.610 18:19:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:01.610 18:19:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:01.610 18:19:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:01.610 18:19:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:01.610 18:19:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:25:01.610 18:19:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:01.610 18:19:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:25:01.610 18:19:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:01.610 18:19:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:01.610 18:19:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 
-eq 1 ']' 00:25:01.610 18:19:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:01.610 18:19:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:01.610 18:19:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:01.610 18:19:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:01.610 18:19:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:01.610 18:19:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:25:01.610 18:19:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:01.610 18:19:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:01.610 18:19:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:01.610 18:19:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:01.610 18:19:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:01.610 18:19:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:01.610 18:19:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:01.610 18:19:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:01.610 18:19:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:01.610 18:19:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:01.610 18:19:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:25:01.610 18:19:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:25:06.878 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:06.879 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:25:06.879 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:06.879 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:06.879 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:06.879 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:06.879 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:06.879 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:25:06.879 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:06.879 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:25:06.879 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:25:06.879 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:25:06.879 
18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:25:06.879 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:25:06.879 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:25:06.879 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:06.879 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:06.879 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:06.879 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:06.879 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:06.879 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:06.879 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:06.879 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:06.879 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:06.879 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:06.879 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:06.879 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:06.879 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:06.879 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:06.879 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:06.879 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:06.879 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:06.879 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:06.879 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:06.879 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:06.879 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:06.879 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:06.879 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:06.879 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:06.879 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:06.879 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci 
in "${pci_devs[@]}" 00:25:06.879 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:06.879 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:06.879 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:06.879 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:06.879 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:06.879 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:06.879 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:06.879 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:06.879 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:06.879 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:06.879 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:06.879 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:06.879 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:06.879 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:06.879 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:06.879 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:06.879 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:06.879 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:06.879 Found net devices under 0000:86:00.0: cvl_0_0 00:25:06.879 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:06.879 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:06.879 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:06.879 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:06.879 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:06.879 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:06.879 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:06.879 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:06.879 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:06.879 Found net devices under 0000:86:00.1: cvl_0_1 00:25:06.879 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:25:06.879 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:06.879 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:25:06.879 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:06.879 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:06.879 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:06.879 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:06.879 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:06.879 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:06.879 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:06.879 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:06.879 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:06.879 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:06.879 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:06.879 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:06.879 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:06.879 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:06.879 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:06.879 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:06.879 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:06.879 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:06.879 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:06.879 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:06.879 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:06.879 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:06.879 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:06.879 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:06.879 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.270 ms 00:25:06.879 00:25:06.879 --- 10.0.0.2 ping statistics --- 00:25:06.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:06.879 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:25:06.879 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:06.879 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:06.879 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:25:06.879 00:25:06.879 --- 10.0.0.1 ping statistics --- 00:25:06.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:06.879 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:25:06.879 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:06.879 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:25:06.879 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:06.880 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:06.880 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:06.880 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:06.880 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:06.880 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:06.880 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:07.138 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:25:07.138 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:25:07.138 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:25:07.138 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:07.138 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:07.138 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:07.138 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:07.138 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:07.138 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:07.138 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:07.138 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:07.138 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:07.138 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:25:07.138 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target 
nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:25:07.138 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:25:07.138 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:25:07.138 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:07.138 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:07.138 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:07.138 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:25:07.139 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:25:07.139 18:19:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:25:07.139 18:20:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:07.139 18:20:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:25:09.669 Waiting for block devices as requested 00:25:09.669 0000:5f:00.0 (8086 0a54): vfio-pci -> nvme 00:25:09.669 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:09.669 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:09.927 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:09.927 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:09.927 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:09.927 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:10.187 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:10.187 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:10.187 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:10.447 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:10.447 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:10.447 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:10.447 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:10.708 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:10.708 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:10.708 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:10.966 18:20:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:25:10.966 18:20:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:10.966 18:20:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:25:10.966 18:20:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:25:10.966 18:20:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:10.966 18:20:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:25:10.966 18:20:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:25:10.966 18:20:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 
00:25:10.966 18:20:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:25:10.966 No valid GPT data, bailing 00:25:10.966 18:20:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:10.966 18:20:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:25:10.966 18:20:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:25:10.966 18:20:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:25:10.966 18:20:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:25:10.966 18:20:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:10.966 18:20:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:10.966 18:20:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:10.966 18:20:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:25:10.966 18:20:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:25:10.966 18:20:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:25:10.966 18:20:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:25:10.966 18:20:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:25:10.966 18:20:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:25:10.966 18:20:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:25:10.966 18:20:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:25:10.966 18:20:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:10.966 18:20:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:25:10.966 00:25:10.966 Discovery Log Number of Records 2, Generation counter 2 00:25:10.966 =====Discovery Log Entry 0====== 00:25:10.966 trtype: tcp 00:25:10.966 adrfam: ipv4 00:25:10.966 subtype: current discovery subsystem 00:25:10.966 treq: not specified, sq flow control disable supported 00:25:10.966 portid: 1 00:25:10.966 trsvcid: 4420 00:25:10.966 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:10.966 traddr: 10.0.0.1 00:25:10.966 eflags: none 00:25:10.966 sectype: none 00:25:10.966 =====Discovery Log Entry 1====== 00:25:10.966 trtype: tcp 00:25:10.966 adrfam: ipv4 00:25:10.966 subtype: nvme subsystem 00:25:10.966 treq: not specified, sq flow control disable supported 00:25:10.966 portid: 1 00:25:10.966 trsvcid: 4420 00:25:10.966 subnqn: nqn.2016-06.io.spdk:testnqn 00:25:10.966 traddr: 10.0.0.1 00:25:10.966 eflags: none 00:25:10.966 sectype: none 00:25:10.966 18:20:04 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:25:10.966 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:25:10.966 EAL: No free 2048 kB hugepages reported on node 1 00:25:11.260 ===================================================== 00:25:11.260 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:25:11.260 ===================================================== 00:25:11.260 Controller Capabilities/Features 00:25:11.260 ================================ 00:25:11.260 Vendor ID: 0000 00:25:11.260 Subsystem Vendor ID: 0000 00:25:11.260 Serial Number: 9389c4e29088ef054e59 00:25:11.260 Model Number: Linux 00:25:11.260 Firmware Version: 6.7.0-68 00:25:11.260 Recommended Arb Burst: 0 00:25:11.260 IEEE OUI Identifier: 00 00 00 00:25:11.260 Multi-path I/O 00:25:11.260 May have multiple subsystem ports: No 00:25:11.260 May have multiple controllers: No 00:25:11.260 Associated with SR-IOV VF: No 00:25:11.260 Max Data Transfer Size: Unlimited 00:25:11.260 Max Number of Namespaces: 0 00:25:11.260 Max Number of I/O Queues: 1024 00:25:11.260 NVMe Specification Version (VS): 1.3 00:25:11.260 NVMe Specification Version (Identify): 1.3 00:25:11.260 Maximum Queue Entries: 1024 00:25:11.260 Contiguous Queues Required: No 00:25:11.260 Arbitration Mechanisms Supported 00:25:11.260 Weighted Round Robin: Not Supported 00:25:11.260 Vendor Specific: Not Supported 00:25:11.260 Reset Timeout: 7500 ms 00:25:11.260 Doorbell Stride: 4 bytes 00:25:11.260 NVM Subsystem Reset: Not Supported 00:25:11.260 Command Sets Supported 00:25:11.260 NVM Command Set: Supported 00:25:11.260 Boot Partition: Not Supported 00:25:11.260 Memory Page Size Minimum: 4096 bytes 00:25:11.260 Memory Page Size Maximum: 4096 bytes 00:25:11.260 Persistent Memory Region: Not Supported 00:25:11.260 Optional Asynchronous Events Supported 00:25:11.260 Namespace Attribute Notices: Not Supported 00:25:11.260 Firmware Activation Notices: Not Supported 00:25:11.260 ANA Change Notices: Not Supported 00:25:11.260 PLE Aggregate Log Change Notices: Not Supported 00:25:11.260 LBA Status Info Alert Notices: Not Supported 00:25:11.260 EGE Aggregate Log Change Notices: Not Supported 00:25:11.260 Normal NVM Subsystem Shutdown event: Not Supported 00:25:11.260 Zone Descriptor Change Notices: Not Supported 00:25:11.260 Discovery Log Change Notices: Supported 00:25:11.260 Controller Attributes 00:25:11.260 128-bit Host Identifier: Not Supported 00:25:11.260 Non-Operational Permissive Mode: Not Supported 00:25:11.260 NVM Sets: Not Supported 00:25:11.260 Read Recovery Levels: Not Supported 00:25:11.260 Endurance Groups: Not Supported 00:25:11.260 Predictable Latency Mode: Not Supported 00:25:11.260 Traffic Based Keep ALive: Not Supported 00:25:11.260 Namespace Granularity: Not Supported 00:25:11.260 SQ Associations: Not Supported 00:25:11.260 UUID List: Not Supported 00:25:11.260 Multi-Domain Subsystem: Not Supported 00:25:11.260 Fixed Capacity Management: Not Supported 00:25:11.260 Variable Capacity Management: Not Supported 00:25:11.260 Delete Endurance Group: Not Supported 00:25:11.260 Delete NVM Set: Not Supported 00:25:11.260 Extended LBA Formats Supported: Not Supported 00:25:11.260 Flexible Data Placement Supported: Not Supported 00:25:11.260 00:25:11.260 Controller Memory Buffer Support 00:25:11.260 ================================ 00:25:11.260 Supported: No 
00:25:11.260 00:25:11.260 Persistent Memory Region Support 00:25:11.260 ================================ 00:25:11.260 Supported: No 00:25:11.260 00:25:11.260 Admin Command Set Attributes 00:25:11.260 ============================ 00:25:11.260 Security Send/Receive: Not Supported 00:25:11.260 Format NVM: Not Supported 00:25:11.260 Firmware Activate/Download: Not Supported 00:25:11.260 Namespace Management: Not Supported 00:25:11.261 Device Self-Test: Not Supported 00:25:11.261 Directives: Not Supported 00:25:11.261 NVMe-MI: Not Supported 00:25:11.261 Virtualization Management: Not Supported 00:25:11.261 Doorbell Buffer Config: Not Supported 00:25:11.261 Get LBA Status Capability: Not Supported 00:25:11.261 Command & Feature Lockdown Capability: Not Supported 00:25:11.261 Abort Command Limit: 1 00:25:11.261 Async Event Request Limit: 1 00:25:11.261 Number of Firmware Slots: N/A 00:25:11.261 Firmware Slot 1 Read-Only: N/A 00:25:11.261 Firmware Activation Without Reset: N/A 00:25:11.261 Multiple Update Detection Support: N/A 00:25:11.261 Firmware Update Granularity: No Information Provided 00:25:11.261 Per-Namespace SMART Log: No 00:25:11.261 Asymmetric Namespace Access Log Page: Not Supported 00:25:11.261 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:25:11.261 Command Effects Log Page: Not Supported 00:25:11.261 Get Log Page Extended Data: Supported 00:25:11.261 Telemetry Log Pages: Not Supported 00:25:11.261 Persistent Event Log Pages: Not Supported 00:25:11.261 Supported Log Pages Log Page: May Support 00:25:11.261 Commands Supported & Effects Log Page: Not Supported 00:25:11.261 Feature Identifiers & Effects Log Page:May Support 00:25:11.261 NVMe-MI Commands & Effects Log Page: May Support 00:25:11.261 Data Area 4 for Telemetry Log: Not Supported 00:25:11.261 Error Log Page Entries Supported: 1 00:25:11.261 Keep Alive: Not Supported 00:25:11.261 00:25:11.261 NVM Command Set Attributes 00:25:11.261 ========================== 00:25:11.261 Submission Queue Entry Size 00:25:11.261 Max: 1 00:25:11.261 Min: 1 00:25:11.261 Completion Queue Entry Size 00:25:11.261 Max: 1 00:25:11.261 Min: 1 00:25:11.261 Number of Namespaces: 0 00:25:11.261 Compare Command: Not Supported 00:25:11.261 Write Uncorrectable Command: Not Supported 00:25:11.261 Dataset Management Command: Not Supported 00:25:11.261 Write Zeroes Command: Not Supported 00:25:11.261 Set Features Save Field: Not Supported 00:25:11.261 Reservations: Not Supported 00:25:11.261 Timestamp: Not Supported 00:25:11.261 Copy: Not Supported 00:25:11.261 Volatile Write Cache: Not Present 00:25:11.261 Atomic Write Unit (Normal): 1 00:25:11.261 Atomic Write Unit (PFail): 1 00:25:11.261 Atomic Compare & Write Unit: 1 00:25:11.261 Fused Compare & Write: Not Supported 00:25:11.261 Scatter-Gather List 00:25:11.261 SGL Command Set: Supported 00:25:11.261 SGL Keyed: Not Supported 00:25:11.261 SGL Bit Bucket Descriptor: Not Supported 00:25:11.261 SGL Metadata Pointer: Not Supported 00:25:11.261 Oversized SGL: Not Supported 00:25:11.261 SGL Metadata Address: Not Supported 00:25:11.261 SGL Offset: Supported 00:25:11.261 Transport SGL Data Block: Not Supported 00:25:11.261 Replay Protected Memory Block: Not Supported 00:25:11.261 00:25:11.261 Firmware Slot Information 00:25:11.261 ========================= 00:25:11.261 Active slot: 0 00:25:11.261 00:25:11.261 00:25:11.261 Error Log 00:25:11.261 ========= 00:25:11.261 00:25:11.261 Active Namespaces 00:25:11.261 ================= 00:25:11.261 Discovery Log Page 00:25:11.261 ================== 00:25:11.261 
Generation Counter: 2 00:25:11.261 Number of Records: 2 00:25:11.261 Record Format: 0 00:25:11.261 00:25:11.261 Discovery Log Entry 0 00:25:11.261 ---------------------- 00:25:11.261 Transport Type: 3 (TCP) 00:25:11.261 Address Family: 1 (IPv4) 00:25:11.261 Subsystem Type: 3 (Current Discovery Subsystem) 00:25:11.261 Entry Flags: 00:25:11.261 Duplicate Returned Information: 0 00:25:11.261 Explicit Persistent Connection Support for Discovery: 0 00:25:11.261 Transport Requirements: 00:25:11.261 Secure Channel: Not Specified 00:25:11.261 Port ID: 1 (0x0001) 00:25:11.261 Controller ID: 65535 (0xffff) 00:25:11.261 Admin Max SQ Size: 32 00:25:11.261 Transport Service Identifier: 4420 00:25:11.261 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:25:11.261 Transport Address: 10.0.0.1 00:25:11.261 Discovery Log Entry 1 00:25:11.261 ---------------------- 00:25:11.261 Transport Type: 3 (TCP) 00:25:11.261 Address Family: 1 (IPv4) 00:25:11.261 Subsystem Type: 2 (NVM Subsystem) 00:25:11.261 Entry Flags: 00:25:11.261 Duplicate Returned Information: 0 00:25:11.261 Explicit Persistent Connection Support for Discovery: 0 00:25:11.261 Transport Requirements: 00:25:11.261 Secure Channel: Not Specified 00:25:11.261 Port ID: 1 (0x0001) 00:25:11.261 Controller ID: 65535 (0xffff) 00:25:11.261 Admin Max SQ Size: 32 00:25:11.261 Transport Service Identifier: 4420 00:25:11.261 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:25:11.261 Transport Address: 10.0.0.1 00:25:11.261 18:20:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:11.261 EAL: No free 2048 kB hugepages reported on node 1 00:25:11.261 get_feature(0x01) failed 00:25:11.261 get_feature(0x02) failed 00:25:11.261 get_feature(0x04) failed 00:25:11.261 ===================================================== 00:25:11.261 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:25:11.261 ===================================================== 00:25:11.261 Controller Capabilities/Features 00:25:11.261 ================================ 00:25:11.261 Vendor ID: 0000 00:25:11.261 Subsystem Vendor ID: 0000 00:25:11.261 Serial Number: df3994e72baed2ae1b51 00:25:11.261 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:25:11.261 Firmware Version: 6.7.0-68 00:25:11.261 Recommended Arb Burst: 6 00:25:11.261 IEEE OUI Identifier: 00 00 00 00:25:11.261 Multi-path I/O 00:25:11.261 May have multiple subsystem ports: Yes 00:25:11.261 May have multiple controllers: Yes 00:25:11.261 Associated with SR-IOV VF: No 00:25:11.261 Max Data Transfer Size: Unlimited 00:25:11.261 Max Number of Namespaces: 1024 00:25:11.261 Max Number of I/O Queues: 128 00:25:11.261 NVMe Specification Version (VS): 1.3 00:25:11.261 NVMe Specification Version (Identify): 1.3 00:25:11.261 Maximum Queue Entries: 1024 00:25:11.261 Contiguous Queues Required: No 00:25:11.261 Arbitration Mechanisms Supported 00:25:11.261 Weighted Round Robin: Not Supported 00:25:11.261 Vendor Specific: Not Supported 00:25:11.261 Reset Timeout: 7500 ms 00:25:11.261 Doorbell Stride: 4 bytes 00:25:11.261 NVM Subsystem Reset: Not Supported 00:25:11.261 Command Sets Supported 00:25:11.261 NVM Command Set: Supported 00:25:11.261 Boot Partition: Not Supported 00:25:11.261 Memory Page Size Minimum: 4096 bytes 00:25:11.261 Memory Page Size Maximum: 4096 bytes 00:25:11.261 
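The discovery log page above advertises two TCP/IPv4 records at 10.0.0.1:4420: entry 0 is the discovery subsystem itself and entry 1 is the NVM subsystem nqn.2016-06.io.spdk:testnqn. A host that received this log could act on it with nvme-cli; a minimal sketch, assuming nvme-cli is available on the initiator (these commands are not part of this run):

    # Re-query the discovery service; returns the same two records shown above
    nvme discover -t tcp -a 10.0.0.1 -s 4420
    # Attach to the NVM subsystem advertised in discovery log entry 1
    nvme connect -t tcp -a 10.0.0.1 -s 4420 -n nqn.2016-06.io.spdk:testnqn
    # Detach when finished
    nvme disconnect -n nqn.2016-06.io.spdk:testnqn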
Persistent Memory Region: Not Supported 00:25:11.261 Optional Asynchronous Events Supported 00:25:11.261 Namespace Attribute Notices: Supported 00:25:11.261 Firmware Activation Notices: Not Supported 00:25:11.261 ANA Change Notices: Supported 00:25:11.261 PLE Aggregate Log Change Notices: Not Supported 00:25:11.261 LBA Status Info Alert Notices: Not Supported 00:25:11.261 EGE Aggregate Log Change Notices: Not Supported 00:25:11.261 Normal NVM Subsystem Shutdown event: Not Supported 00:25:11.261 Zone Descriptor Change Notices: Not Supported 00:25:11.261 Discovery Log Change Notices: Not Supported 00:25:11.261 Controller Attributes 00:25:11.261 128-bit Host Identifier: Supported 00:25:11.261 Non-Operational Permissive Mode: Not Supported 00:25:11.261 NVM Sets: Not Supported 00:25:11.261 Read Recovery Levels: Not Supported 00:25:11.261 Endurance Groups: Not Supported 00:25:11.261 Predictable Latency Mode: Not Supported 00:25:11.261 Traffic Based Keep ALive: Supported 00:25:11.261 Namespace Granularity: Not Supported 00:25:11.261 SQ Associations: Not Supported 00:25:11.261 UUID List: Not Supported 00:25:11.261 Multi-Domain Subsystem: Not Supported 00:25:11.261 Fixed Capacity Management: Not Supported 00:25:11.261 Variable Capacity Management: Not Supported 00:25:11.261 Delete Endurance Group: Not Supported 00:25:11.261 Delete NVM Set: Not Supported 00:25:11.261 Extended LBA Formats Supported: Not Supported 00:25:11.261 Flexible Data Placement Supported: Not Supported 00:25:11.261 00:25:11.261 Controller Memory Buffer Support 00:25:11.261 ================================ 00:25:11.261 Supported: No 00:25:11.261 00:25:11.261 Persistent Memory Region Support 00:25:11.261 ================================ 00:25:11.261 Supported: No 00:25:11.261 00:25:11.261 Admin Command Set Attributes 00:25:11.261 ============================ 00:25:11.261 Security Send/Receive: Not Supported 00:25:11.261 Format NVM: Not Supported 00:25:11.261 Firmware Activate/Download: Not Supported 00:25:11.261 Namespace Management: Not Supported 00:25:11.261 Device Self-Test: Not Supported 00:25:11.261 Directives: Not Supported 00:25:11.261 NVMe-MI: Not Supported 00:25:11.261 Virtualization Management: Not Supported 00:25:11.261 Doorbell Buffer Config: Not Supported 00:25:11.261 Get LBA Status Capability: Not Supported 00:25:11.261 Command & Feature Lockdown Capability: Not Supported 00:25:11.261 Abort Command Limit: 4 00:25:11.261 Async Event Request Limit: 4 00:25:11.261 Number of Firmware Slots: N/A 00:25:11.261 Firmware Slot 1 Read-Only: N/A 00:25:11.261 Firmware Activation Without Reset: N/A 00:25:11.261 Multiple Update Detection Support: N/A 00:25:11.261 Firmware Update Granularity: No Information Provided 00:25:11.261 Per-Namespace SMART Log: Yes 00:25:11.261 Asymmetric Namespace Access Log Page: Supported 00:25:11.261 ANA Transition Time : 10 sec 00:25:11.261 00:25:11.261 Asymmetric Namespace Access Capabilities 00:25:11.261 ANA Optimized State : Supported 00:25:11.261 ANA Non-Optimized State : Supported 00:25:11.261 ANA Inaccessible State : Supported 00:25:11.261 ANA Persistent Loss State : Supported 00:25:11.261 ANA Change State : Supported 00:25:11.261 ANAGRPID is not changed : No 00:25:11.261 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:25:11.261 00:25:11.261 ANA Group Identifier Maximum : 128 00:25:11.261 Number of ANA Group Identifiers : 128 00:25:11.261 Max Number of Allowed Namespaces : 1024 00:25:11.261 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:25:11.261 Command Effects Log Page: Supported 
00:25:11.261 Get Log Page Extended Data: Supported 00:25:11.261 Telemetry Log Pages: Not Supported 00:25:11.261 Persistent Event Log Pages: Not Supported 00:25:11.261 Supported Log Pages Log Page: May Support 00:25:11.261 Commands Supported & Effects Log Page: Not Supported 00:25:11.261 Feature Identifiers & Effects Log Page:May Support 00:25:11.261 NVMe-MI Commands & Effects Log Page: May Support 00:25:11.261 Data Area 4 for Telemetry Log: Not Supported 00:25:11.261 Error Log Page Entries Supported: 128 00:25:11.261 Keep Alive: Supported 00:25:11.261 Keep Alive Granularity: 1000 ms 00:25:11.261 00:25:11.261 NVM Command Set Attributes 00:25:11.261 ========================== 00:25:11.261 Submission Queue Entry Size 00:25:11.261 Max: 64 00:25:11.261 Min: 64 00:25:11.261 Completion Queue Entry Size 00:25:11.261 Max: 16 00:25:11.261 Min: 16 00:25:11.261 Number of Namespaces: 1024 00:25:11.261 Compare Command: Not Supported 00:25:11.261 Write Uncorrectable Command: Not Supported 00:25:11.261 Dataset Management Command: Supported 00:25:11.261 Write Zeroes Command: Supported 00:25:11.261 Set Features Save Field: Not Supported 00:25:11.261 Reservations: Not Supported 00:25:11.261 Timestamp: Not Supported 00:25:11.261 Copy: Not Supported 00:25:11.261 Volatile Write Cache: Present 00:25:11.261 Atomic Write Unit (Normal): 1 00:25:11.261 Atomic Write Unit (PFail): 1 00:25:11.261 Atomic Compare & Write Unit: 1 00:25:11.261 Fused Compare & Write: Not Supported 00:25:11.261 Scatter-Gather List 00:25:11.261 SGL Command Set: Supported 00:25:11.261 SGL Keyed: Not Supported 00:25:11.261 SGL Bit Bucket Descriptor: Not Supported 00:25:11.261 SGL Metadata Pointer: Not Supported 00:25:11.261 Oversized SGL: Not Supported 00:25:11.261 SGL Metadata Address: Not Supported 00:25:11.261 SGL Offset: Supported 00:25:11.261 Transport SGL Data Block: Not Supported 00:25:11.261 Replay Protected Memory Block: Not Supported 00:25:11.261 00:25:11.261 Firmware Slot Information 00:25:11.261 ========================= 00:25:11.261 Active slot: 0 00:25:11.261 00:25:11.261 Asymmetric Namespace Access 00:25:11.261 =========================== 00:25:11.261 Change Count : 0 00:25:11.261 Number of ANA Group Descriptors : 1 00:25:11.261 ANA Group Descriptor : 0 00:25:11.261 ANA Group ID : 1 00:25:11.261 Number of NSID Values : 1 00:25:11.261 Change Count : 0 00:25:11.261 ANA State : 1 00:25:11.261 Namespace Identifier : 1 00:25:11.261 00:25:11.261 Commands Supported and Effects 00:25:11.261 ============================== 00:25:11.261 Admin Commands 00:25:11.261 -------------- 00:25:11.261 Get Log Page (02h): Supported 00:25:11.261 Identify (06h): Supported 00:25:11.261 Abort (08h): Supported 00:25:11.261 Set Features (09h): Supported 00:25:11.261 Get Features (0Ah): Supported 00:25:11.261 Asynchronous Event Request (0Ch): Supported 00:25:11.261 Keep Alive (18h): Supported 00:25:11.261 I/O Commands 00:25:11.261 ------------ 00:25:11.261 Flush (00h): Supported 00:25:11.261 Write (01h): Supported LBA-Change 00:25:11.261 Read (02h): Supported 00:25:11.261 Write Zeroes (08h): Supported LBA-Change 00:25:11.261 Dataset Management (09h): Supported 00:25:11.261 00:25:11.261 Error Log 00:25:11.261 ========= 00:25:11.261 Entry: 0 00:25:11.261 Error Count: 0x3 00:25:11.261 Submission Queue Id: 0x0 00:25:11.261 Command Id: 0x5 00:25:11.261 Phase Bit: 0 00:25:11.261 Status Code: 0x2 00:25:11.261 Status Code Type: 0x0 00:25:11.261 Do Not Retry: 1 00:25:11.261 Error Location: 0x28 00:25:11.261 LBA: 0x0 00:25:11.261 Namespace: 0x0 00:25:11.261 Vendor Log 
Page: 0x0 00:25:11.261 ----------- 00:25:11.261 Entry: 1 00:25:11.261 Error Count: 0x2 00:25:11.261 Submission Queue Id: 0x0 00:25:11.261 Command Id: 0x5 00:25:11.261 Phase Bit: 0 00:25:11.261 Status Code: 0x2 00:25:11.261 Status Code Type: 0x0 00:25:11.261 Do Not Retry: 1 00:25:11.261 Error Location: 0x28 00:25:11.261 LBA: 0x0 00:25:11.261 Namespace: 0x0 00:25:11.261 Vendor Log Page: 0x0 00:25:11.261 ----------- 00:25:11.261 Entry: 2 00:25:11.261 Error Count: 0x1 00:25:11.261 Submission Queue Id: 0x0 00:25:11.261 Command Id: 0x4 00:25:11.261 Phase Bit: 0 00:25:11.261 Status Code: 0x2 00:25:11.261 Status Code Type: 0x0 00:25:11.261 Do Not Retry: 1 00:25:11.261 Error Location: 0x28 00:25:11.261 LBA: 0x0 00:25:11.261 Namespace: 0x0 00:25:11.261 Vendor Log Page: 0x0 00:25:11.261 00:25:11.261 Number of Queues 00:25:11.261 ================ 00:25:11.261 Number of I/O Submission Queues: 128 00:25:11.261 Number of I/O Completion Queues: 128 00:25:11.261 00:25:11.261 ZNS Specific Controller Data 00:25:11.261 ============================ 00:25:11.261 Zone Append Size Limit: 0 00:25:11.261 00:25:11.261 00:25:11.261 Active Namespaces 00:25:11.261 ================= 00:25:11.261 get_feature(0x05) failed 00:25:11.261 Namespace ID:1 00:25:11.261 Command Set Identifier: NVM (00h) 00:25:11.261 Deallocate: Supported 00:25:11.261 Deallocated/Unwritten Error: Not Supported 00:25:11.261 Deallocated Read Value: Unknown 00:25:11.261 Deallocate in Write Zeroes: Not Supported 00:25:11.261 Deallocated Guard Field: 0xFFFF 00:25:11.261 Flush: Supported 00:25:11.261 Reservation: Not Supported 00:25:11.261 Namespace Sharing Capabilities: Multiple Controllers 00:25:11.261 Size (in LBAs): 3125627568 (1490GiB) 00:25:11.261 Capacity (in LBAs): 3125627568 (1490GiB) 00:25:11.261 Utilization (in LBAs): 3125627568 (1490GiB) 00:25:11.262 UUID: 20e2a74e-c9ce-4ad5-81f4-091ea2487f3a 00:25:11.262 Thin Provisioning: Not Supported 00:25:11.262 Per-NS Atomic Units: Yes 00:25:11.262 Atomic Boundary Size (Normal): 0 00:25:11.262 Atomic Boundary Size (PFail): 0 00:25:11.262 Atomic Boundary Offset: 0 00:25:11.262 NGUID/EUI64 Never Reused: No 00:25:11.262 ANA group ID: 1 00:25:11.262 Namespace Write Protected: No 00:25:11.262 Number of LBA Formats: 1 00:25:11.262 Current LBA Format: LBA Format #00 00:25:11.262 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:11.262 00:25:11.262 18:20:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:25:11.262 18:20:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:11.262 18:20:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:25:11.262 18:20:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:11.262 18:20:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:25:11.262 18:20:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:11.262 18:20:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:11.262 rmmod nvme_tcp 00:25:11.262 rmmod nvme_fabrics 00:25:11.262 18:20:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:11.262 18:20:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:25:11.262 18:20:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:25:11.262 
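The attached namespace reports 3125627568 LBAs in a 512-byte LBA format, which is where the 1490GiB size/capacity/utilization figures come from; the arithmetic checks out in shell:

    # 3125627568 LBAs x 512 B/LBA = 1600321314816 bytes
    echo $((3125627568 * 512))            # 1600321314816
    echo $((3125627568 * 512 / 1024**3))  # 1490 GiB (integer division)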
18:20:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:25:11.262 18:20:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:11.262 18:20:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:11.262 18:20:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:11.262 18:20:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:11.262 18:20:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:11.262 18:20:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:11.262 18:20:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:11.262 18:20:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:13.790 18:20:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:13.790 18:20:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:25:13.790 18:20:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:25:13.790 18:20:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:25:13.790 18:20:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:13.790 18:20:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:13.790 18:20:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:13.790 18:20:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:13.790 18:20:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:25:13.790 18:20:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:25:13.790 18:20:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:16.321 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:16.321 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:16.321 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:16.321 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:16.321 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:16.321 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:16.321 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:16.321 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:16.321 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:16.321 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:16.321 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:16.321 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:16.321 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:16.321 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:16.321 0000:80:04.1 (8086 2021): 
ioatdma -> vfio-pci 00:25:16.321 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:17.698 0000:5f:00.0 (8086 0a54): nvme -> vfio-pci 00:25:17.698 00:25:17.698 real 0m16.451s 00:25:17.698 user 0m3.983s 00:25:17.698 sys 0m8.198s 00:25:17.698 18:20:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:17.698 18:20:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:25:17.698 ************************************ 00:25:17.698 END TEST nvmf_identify_kernel_target 00:25:17.698 ************************************ 00:25:17.698 18:20:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:17.698 18:20:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:17.698 18:20:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:17.698 18:20:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.698 ************************************ 00:25:17.698 START TEST nvmf_auth_host 00:25:17.698 ************************************ 00:25:17.698 18:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:17.956 * Looking for test storage... 00:25:17.956 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:17.956 18:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:17.956 18:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:25:17.956 18:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:17.956 18:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:17.956 18:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:17.956 18:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:17.956 18:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:17.956 18:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:17.956 18:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:17.956 18:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:17.956 18:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:17.956 18:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:17.956 18:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:25:17.956 18:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:25:17.956 18:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:17.956 18:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:17.956 18:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:17.956 18:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:17.956 18:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:17.956 18:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:17.956 18:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:17.956 18:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:17.957 18:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:17.957 18:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:17.957 18:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:17.957 18:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:25:17.957 18:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:17.957 18:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:25:17.957 18:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@48 -- # 
export NVMF_APP_SHM_ID 00:25:17.957 18:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:17.957 18:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:17.957 18:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:17.957 18:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:17.957 18:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:17.957 18:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:17.957 18:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:17.957 18:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:25:17.957 18:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:25:17.957 18:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:25:17.957 18:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:25:17.957 18:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:17.957 18:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:17.957 18:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:25:17.957 18:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:25:17.957 18:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:25:17.957 18:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:17.957 18:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:17.957 18:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:17.957 18:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:17.957 18:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:17.957 18:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:17.957 18:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:17.957 18:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:17.957 18:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:17.957 18:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:17.957 18:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:25:17.957 18:20:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.221 18:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:23.221 18:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:25:23.221 18:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:23.221 18:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:23.221 
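The clean_kernel_target step earlier tears the kernel target down through the nvmet configfs tree, and auth.sh points its nvmet_subsys/nvmet_host variables at the same tree for the DH-HMAC-CHAP runs that follow. Condensed from the trace, the teardown sequence is (the echo 0 presumably lands in the namespace's enable attribute before removal):

    # Unlink the port from the subsystem first, then remove namespace,
    # port and subsystem directories, then unload the modules.
    rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
    rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
    rmdir /sys/kernel/config/nvmet/ports/1
    rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    modprobe -r nvmet_tcp nvmet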
18:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:23.221 18:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:23.221 18:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:23.221 18:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:25:23.221 18:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:23.221 18:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:25:23.221 18:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:25:23.221 18:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:25:23.221 18:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:25:23.221 18:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:25:23.221 18:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:25:23.221 18:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:23.221 18:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:23.221 18:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:23.221 18:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:23.221 18:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:23.221 18:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:23.221 18:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:23.221 18:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:23.221 18:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:23.221 18:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:23.221 18:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:23.221 18:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:23.221 18:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:23.221 18:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:23.221 18:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:23.221 18:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:23.221 18:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:23.221 18:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:23.221 18:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:23.221 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:23.221 18:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:23.221 18:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == 
unbound ]] 00:25:23.221 18:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:23.221 18:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:23.221 18:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:23.221 18:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:23.221 18:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:23.221 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:23.221 18:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:23.221 18:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:23.221 18:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:23.221 18:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:23.221 18:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:23.221 18:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:23.221 18:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:23.221 18:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:23.221 18:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:23.221 18:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:23.221 18:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:23.222 18:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:23.222 18:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:23.222 18:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:23.222 18:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:23.222 18:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:23.222 Found net devices under 0000:86:00.0: cvl_0_0 00:25:23.222 18:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:23.222 18:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:23.222 18:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:23.222 18:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:23.222 18:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:23.222 18:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:23.222 18:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:23.222 18:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:23.222 18:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:23.222 Found net devices under 0000:86:00.1: cvl_0_1 00:25:23.222 18:20:15 
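gather_supported_nvmf_pci_devs matches the E810 device IDs (0x1592/0x159b) on the PCI bus and then resolves each function's kernel net device through sysfs, exactly as the pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) expansion in the trace does; a standalone sketch of that lookup:

    # Map each PCI function to the net devices the kernel created for it
    for pci in 0000:86:00.0 0000:86:00.1; do
      for dev in /sys/bus/pci/devices/$pci/net/*; do
        echo "Found net devices under $pci: ${dev##*/}"
      done
    done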
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:23.222 18:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:23.222 18:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:25:23.222 18:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:23.222 18:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:23.222 18:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:23.222 18:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:23.222 18:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:23.222 18:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:23.222 18:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:23.222 18:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:23.222 18:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:23.222 18:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:23.222 18:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:23.222 18:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:23.222 18:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:23.222 18:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:23.222 18:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:23.222 18:20:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:23.222 18:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:23.222 18:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:23.222 18:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:23.222 18:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:23.222 18:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:23.222 18:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:23.222 18:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:23.222 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:23.222 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:25:23.222 00:25:23.222 --- 10.0.0.2 ping statistics --- 00:25:23.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:23.222 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:25:23.222 18:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:23.222 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:23.222 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.080 ms 00:25:23.222 00:25:23.222 --- 10.0.0.1 ping statistics --- 00:25:23.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:23.222 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:25:23.222 18:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:23.222 18:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:25:23.222 18:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:23.222 18:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:23.222 18:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:23.222 18:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:23.222 18:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:23.222 18:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:23.222 18:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:23.222 18:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:25:23.222 18:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:23.222 18:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:23.222 18:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.222 18:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=3533819 00:25:23.222 18:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 3533819 00:25:23.222 18:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:25:23.222 18:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 3533819 ']' 00:25:23.222 18:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:23.222 18:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:23.222 18:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
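nvmf_tcp_init above builds the test topology from the two e810 ports: cvl_0_0 moves into a fresh network namespace as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), and both directions are verified with a ping before the target starts. Condensed from the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # nvmfappstart then launches the target inside the namespace:
    ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth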
00:25:23.222 18:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:23.222 18:20:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.154 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:24.154 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:25:24.154 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:24.154 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:24.154 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.154 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:24.154 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:25:24.154 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:25:24.154 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:24.154 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:24.154 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:24.154 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:25:24.154 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:24.154 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:24.154 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=9877ce16cdadd77fc80690d000c5e538 00:25:24.154 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:25:24.155 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.5tr 00:25:24.155 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 9877ce16cdadd77fc80690d000c5e538 0 00:25:24.155 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 9877ce16cdadd77fc80690d000c5e538 0 00:25:24.155 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:24.155 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:24.155 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=9877ce16cdadd77fc80690d000c5e538 00:25:24.155 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:25:24.155 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:24.155 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.5tr 00:25:24.155 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.5tr 00:25:24.155 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.5tr 00:25:24.155 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:25:24.155 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:24.155 18:20:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:24.155 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:24.155 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:25:24.155 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:25:24.155 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:24.155 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=2de27808196149e0bb6378dfc495867dfec8de310e4823fd539912b536a7e8fe 00:25:24.155 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:25:24.155 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.8Wp 00:25:24.155 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 2de27808196149e0bb6378dfc495867dfec8de310e4823fd539912b536a7e8fe 3 00:25:24.155 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 2de27808196149e0bb6378dfc495867dfec8de310e4823fd539912b536a7e8fe 3 00:25:24.155 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:24.155 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:24.155 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=2de27808196149e0bb6378dfc495867dfec8de310e4823fd539912b536a7e8fe 00:25:24.155 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:25:24.155 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:24.413 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.8Wp 00:25:24.413 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.8Wp 00:25:24.413 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.8Wp 00:25:24.413 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:25:24.413 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:24.413 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:24.413 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:24.413 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:25:24.413 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:25:24.413 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:24.413 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=6033d59c4e5b9ae5f8be7d49f02e02a743dabdf28156a791 00:25:24.413 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:25:24.413 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.owQ 00:25:24.413 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 6033d59c4e5b9ae5f8be7d49f02e02a743dabdf28156a791 0 00:25:24.413 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 6033d59c4e5b9ae5f8be7d49f02e02a743dabdf28156a791 0 
00:25:24.413 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:24.413 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:24.413 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=6033d59c4e5b9ae5f8be7d49f02e02a743dabdf28156a791 00:25:24.413 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:25:24.413 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:24.413 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.owQ 00:25:24.413 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.owQ 00:25:24.413 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.owQ 00:25:24.413 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:25:24.413 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:24.414 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:24.414 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:24.414 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:25:24.414 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:25:24.414 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:24.414 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=9e27825fc0fdb167f51b088af6338ae1bd1be2b9c89456ff 00:25:24.414 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:25:24.414 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.RNU 00:25:24.414 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 9e27825fc0fdb167f51b088af6338ae1bd1be2b9c89456ff 2 00:25:24.414 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 9e27825fc0fdb167f51b088af6338ae1bd1be2b9c89456ff 2 00:25:24.414 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:24.414 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:24.414 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=9e27825fc0fdb167f51b088af6338ae1bd1be2b9c89456ff 00:25:24.414 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:25:24.414 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:24.414 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.RNU 00:25:24.414 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.RNU 00:25:24.414 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.RNU 00:25:24.414 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:24.414 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:24.414 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:24.414 18:20:17 
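Each gen_dhchap_key call above draws len/2 random bytes with xxd -p (yielding a len-character hex string as the secret) and hands it to format_key, whose embedded python emits the DH-HMAC-CHAP secret representation. A minimal reimplementation sketch, assuming the standard DHHC-1 encoding (base64 of the secret with its little-endian CRC32 appended; the two-digit hash field uses the same 0/1/2/3 = null/sha256/sha384/sha512 map as the digests array above):

    gen_dhchap_key() { # e.g. gen_dhchap_key sha256 32
      local digest=$1 len=$2 key
      # len hex characters of secret material, as in the trace
      key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
      python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); d={"null":0,"sha256":1,"sha384":2,"sha512":3}[sys.argv[2]]; print("DHHC-1:%02x:%s:" % (d, base64.b64encode(k + zlib.crc32(k).to_bytes(4,"little")).decode()))' "$key" "$digest"
    }

The mktemp -t spdk.key-<digest>.XXX and chmod 0600 steps then persist each formatted key under /tmp for the keyring_file_add_key RPCs that load them into the target.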
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:24.414 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:25:24.414 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:24.414 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:24.414 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=5c83ab0b8705162110a64a1e5f9f1c3b 00:25:24.414 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:25:24.414 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.RSx 00:25:24.414 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 5c83ab0b8705162110a64a1e5f9f1c3b 1 00:25:24.414 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 5c83ab0b8705162110a64a1e5f9f1c3b 1 00:25:24.414 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:24.414 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:24.414 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=5c83ab0b8705162110a64a1e5f9f1c3b 00:25:24.414 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:25:24.414 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:24.414 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.RSx 00:25:24.414 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.RSx 00:25:24.414 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.RSx 00:25:24.414 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:24.414 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:24.414 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:24.414 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:24.414 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:25:24.414 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:24.414 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:24.414 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=805434049973088775d03ecce68480ef 00:25:24.414 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:25:24.414 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Nae 00:25:24.414 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 805434049973088775d03ecce68480ef 1 00:25:24.414 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 805434049973088775d03ecce68480ef 1 00:25:24.414 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:24.414 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:24.414 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # 
key=805434049973088775d03ecce68480ef 00:25:24.414 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:25:24.414 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:24.672 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Nae 00:25:24.672 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Nae 00:25:24.672 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.Nae 00:25:24.672 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:25:24.672 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:24.672 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:24.672 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:24.672 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:25:24.672 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:25:24.672 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:24.672 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=5ceede591b5c48d15e71aeff8f6829ad3986fde7f0637e93 00:25:24.672 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:25:24.672 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.vS5 00:25:24.672 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 5ceede591b5c48d15e71aeff8f6829ad3986fde7f0637e93 2 00:25:24.672 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 5ceede591b5c48d15e71aeff8f6829ad3986fde7f0637e93 2 00:25:24.672 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:24.672 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:24.672 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=5ceede591b5c48d15e71aeff8f6829ad3986fde7f0637e93 00:25:24.672 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:25:24.672 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:24.672 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.vS5 00:25:24.672 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.vS5 00:25:24.672 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.vS5 00:25:24.672 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:25:24.672 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:24.672 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:24.672 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:24.672 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:25:24.672 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:24.672 18:20:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:24.672 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=90c619af4621e5e3bbfb99240b3c9edc 00:25:24.672 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:25:24.673 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.HRl 00:25:24.673 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 90c619af4621e5e3bbfb99240b3c9edc 0 00:25:24.673 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 90c619af4621e5e3bbfb99240b3c9edc 0 00:25:24.673 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:24.673 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:24.673 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=90c619af4621e5e3bbfb99240b3c9edc 00:25:24.673 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:25:24.673 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:24.673 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.HRl 00:25:24.673 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.HRl 00:25:24.673 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.HRl 00:25:24.673 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:25:24.673 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:24.673 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:24.673 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:24.673 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:25:24.673 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:25:24.673 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:24.673 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=3e9020a0a2170366f9fb21fc249293d6c95ceebe1070c402a5c9bd22b894b74f 00:25:24.673 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:25:24.673 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.2Aj 00:25:24.673 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 3e9020a0a2170366f9fb21fc249293d6c95ceebe1070c402a5c9bd22b894b74f 3 00:25:24.673 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 3e9020a0a2170366f9fb21fc249293d6c95ceebe1070c402a5c9bd22b894b74f 3 00:25:24.673 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:24.673 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:24.673 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=3e9020a0a2170366f9fb21fc249293d6c95ceebe1070c402a5c9bd22b894b74f 00:25:24.673 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:25:24.673 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@705 -- # python - 00:25:24.673 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.2Aj 00:25:24.673 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.2Aj 00:25:24.673 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.2Aj 00:25:24.673 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:25:24.673 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3533819 00:25:24.673 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 3533819 ']' 00:25:24.673 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:24.673 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:24.673 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:24.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:24.673 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:24.673 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.931 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:24.931 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:25:24.931 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:24.931 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.5tr 00:25:24.931 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.931 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.931 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.931 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.8Wp ]] 00:25:24.931 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.8Wp 00:25:24.931 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.931 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.931 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.931 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:24.931 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.owQ 00:25:24.931 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.931 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.931 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.931 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.RNU ]] 00:25:24.931 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.RNU 00:25:24.931 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.931 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.931 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.931 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:24.931 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.RSx 00:25:24.931 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.931 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.931 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.931 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.Nae ]] 00:25:24.931 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Nae 00:25:24.931 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.931 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.931 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.932 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:24.932 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.vS5 00:25:24.932 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.932 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.932 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.932 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.HRl ]] 00:25:24.932 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.HRl 00:25:24.932 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.932 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.932 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.932 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:24.932 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.2Aj 00:25:24.932 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:24.932 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.932 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:24.932 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:25:24.932 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:25:24.932 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:25:24.932 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:24.932 18:20:17 
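Once all five key/ckey pairs exist on disk, the rpc_cmd keyring_file_add_key calls above register them with the SPDK application listening on /var/tmp/spdk.sock; keyN is the host secret for slot N and ckeyN the controller-side secret used for bidirectional authentication. The same calls issued directly with SPDK's RPC client would look like this (paths copied from the trace; key4 deliberately has no ckey, so its iterations exercise unidirectional auth):

./scripts/rpc.py keyring_file_add_key key2 /tmp/spdk.key-sha256.RSx
./scripts/rpc.py keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Nae
./scripts/rpc.py keyring_file_add_key key3 /tmp/spdk.key-sha384.vS5
./scripts/rpc.py keyring_file_add_key ckey3 /tmp/spdk.key-null.HRl
./scripts/rpc.py keyring_file_add_key key4 /tmp/spdk.key-sha512.2Aj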
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:24.932 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:24.932 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:24.932 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:24.932 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:24.932 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:24.932 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:24.932 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:24.932 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:24.932 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:25:24.932 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:25:24.932 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:25:24.932 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:24.932 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:24.932 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:24.932 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:25:24.932 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:25:24.932 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:25:24.932 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:24.932 18:20:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:25:27.474 Waiting for block devices as requested 00:25:27.474 0000:5f:00.0 (8086 0a54): vfio-pci -> nvme 00:25:27.474 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:27.474 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:27.474 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:27.732 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:27.732 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:27.732 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:27.989 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:27.989 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:27.989 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:27.989 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:28.247 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:28.247 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:28.247 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:28.247 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:28.504 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:28.504 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:29.070 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:25:29.070 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:29.070 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:25:29.070 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:25:29.070 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:29.070 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:25:29.070 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:25:29.070 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:25:29.070 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:25:29.070 No valid GPT data, bailing 00:25:29.070 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:29.070 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:25:29.070 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:25:29.070 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:25:29.070 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:25:29.070 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:29.070 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:29.070 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:29.070 18:20:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:25:29.070 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:25:29.070 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:25:29.070 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:25:29.070 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:25:29.070 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:25:29.070 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:25:29.070 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:25:29.070 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:29.331 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:25:29.331 00:25:29.331 Discovery Log Number of Records 2, Generation counter 2 00:25:29.331 =====Discovery Log Entry 0====== 00:25:29.331 trtype: tcp 00:25:29.331 adrfam: ipv4 00:25:29.331 subtype: current discovery subsystem 00:25:29.331 treq: not specified, sq flow control disable supported 00:25:29.331 portid: 1 00:25:29.331 trsvcid: 4420 00:25:29.331 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:29.331 traddr: 10.0.0.1 00:25:29.331 eflags: none 00:25:29.331 sectype: none 00:25:29.331 =====Discovery Log Entry 1====== 00:25:29.331 trtype: tcp 00:25:29.331 adrfam: ipv4 00:25:29.331 subtype: nvme subsystem 00:25:29.331 treq: not specified, sq flow control disable supported 00:25:29.331 portid: 1 00:25:29.331 trsvcid: 4420 00:25:29.331 subnqn: nqn.2024-02.io.spdk:cnode0 00:25:29.331 traddr: 10.0.0.1 00:25:29.331 eflags: none 00:25:29.331 sectype: none 00:25:29.331 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:29.331 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:25:29.331 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:29.331 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:29.331 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:29.331 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:29.331 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:29.331 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:29.331 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjAzM2Q1OWM0ZTViOWFlNWY4YmU3ZDQ5ZjAyZTAyYTc0M2RhYmRmMjgxNTZhNzkxg4dGkg==: 00:25:29.331 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWUyNzgyNWZjMGZkYjE2N2Y1MWIwODhhZjYzMzhhZTFiZDFiZTJiOWM4OTQ1NmZmSxM3tw==: 00:25:29.331 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:29.331 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host 
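configure_kernel_target drives the kernel nvmet configfs tree rather than a second SPDK target, which is why the trace shows bare mkdir and echo commands: each echo lands in an attribute file through a redirection that xtrace does not print. A condensed reconstruction of the sequence, with the attribute paths filled in from the standard nvmet configfs layout (inferred, so worth verifying against your kernel version):

# attribute names inferred from the nvmet configfs layout, not shown in the trace
cd /sys/kernel/config/nvmet
SUB=subsystems/nqn.2024-02.io.spdk:cnode0
mkdir -p "$SUB/namespaces/1" ports/1
echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$SUB/attr_serial"
echo 1            > "$SUB/attr_allow_any_host"      # restricted again below via allowed_hosts
echo /dev/nvme0n1 > "$SUB/namespaces/1/device_path"
echo 1            > "$SUB/namespaces/1/enable"
echo 10.0.0.1     > ports/1/addr_traddr
echo tcp          > ports/1/addr_trtype
echo 4420         > ports/1/addr_trsvcid
echo ipv4         > ports/1/addr_adrfam
ln -s "$PWD/$SUB" ports/1/subsystems/

The nvme discover output above, with two records (the discovery subsystem itself plus nqn.2024-02.io.spdk:cnode0, both on 10.0.0.1:4420), confirms the port came up before the auth tests start.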
-- host/auth.sh@49 -- # echo ffdhe2048 00:25:29.331 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjAzM2Q1OWM0ZTViOWFlNWY4YmU3ZDQ5ZjAyZTAyYTc0M2RhYmRmMjgxNTZhNzkxg4dGkg==: 00:25:29.331 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWUyNzgyNWZjMGZkYjE2N2Y1MWIwODhhZjYzMzhhZTFiZDFiZTJiOWM4OTQ1NmZmSxM3tw==: ]] 00:25:29.331 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWUyNzgyNWZjMGZkYjE2N2Y1MWIwODhhZjYzMzhhZTFiZDFiZTJiOWM4OTQ1NmZmSxM3tw==: 00:25:29.331 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:29.331 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:25:29.331 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:29.331 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:29.331 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:25:29.331 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:29.331 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:25:29.331 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:29.331 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:29.331 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:29.331 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:29.331 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.331 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.331 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.331 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:29.331 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:29.331 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:29.331 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:29.331 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:29.331 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:29.331 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:29.331 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:29.331 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:29.331 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:29.331 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:29.331 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:29.331 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.331 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.331 nvme0n1 00:25:29.331 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.331 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:29.331 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:29.331 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.331 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.628 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.628 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:29.628 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:29.628 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.628 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.628 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.628 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:29.628 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:29.628 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:29.628 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:25:29.628 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:29.628 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:29.628 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:29.628 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:29.628 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTg3N2NlMTZjZGFkZDc3ZmM4MDY5MGQwMDBjNWU1MzhGYBq2: 00:25:29.628 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmRlMjc4MDgxOTYxNDllMGJiNjM3OGRmYzQ5NTg2N2RmZWM4ZGUzMTBlNDgyM2ZkNTM5OTEyYjUzNmE3ZThmZRiuwEg=: 00:25:29.628 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:29.628 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:29.628 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTg3N2NlMTZjZGFkZDc3ZmM4MDY5MGQwMDBjNWU1MzhGYBq2: 00:25:29.628 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmRlMjc4MDgxOTYxNDllMGJiNjM3OGRmYzQ5NTg2N2RmZWM4ZGUzMTBlNDgyM2ZkNTM5OTEyYjUzNmE3ZThmZRiuwEg=: ]] 00:25:29.628 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmRlMjc4MDgxOTYxNDllMGJiNjM3OGRmYzQ5NTg2N2RmZWM4ZGUzMTBlNDgyM2ZkNTM5OTEyYjUzNmE3ZThmZRiuwEg=: 00:25:29.628 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
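From this point the suite iterates the authentication matrix. nvmet_auth_set_key (auth.sh@42-51) pushes the chosen hash, DH group, and DHHC-1 secrets into the host object created earlier, and connect_authenticate (auth.sh@55-65) re-arms the SPDK initiator and dials in. One iteration, sketched end to end; the four configfs attribute names are taken from the kernel's nvmet DH-HMAC-CHAP support and are an assumption, since the trace hides the redirections:

# target side: configfs attribute names assumed from kernel nvmet auth support
H=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha256)'           > "$H/dhchap_hash"
echo ffdhe2048                > "$H/dhchap_dhgroup"
echo "DHHC-1:00:OTg3...Bq2:"  > "$H/dhchap_key"       # host secret, truncated here
echo "DHHC-1:03:MmRl...wEg=:" > "$H/dhchap_ctrl_key"  # controller secret, when bidirectional

# initiator side, over the SPDK RPC socket
./scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
./scripts/rpc.py bdev_nvme_get_controllers   # success shows nvme0 (namespace nvme0n1 below)
./scripts/rpc.py bdev_nvme_detach_controller nvme0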
00:25:29.628 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:29.628 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:29.628 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:29.628 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:29.628 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:29.628 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:29.628 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.628 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.628 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.628 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:29.628 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:29.628 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:29.628 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:29.628 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:29.628 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:29.629 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:29.629 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:29.629 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:29.629 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:29.629 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:29.629 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:29.629 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.629 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.629 nvme0n1 00:25:29.629 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.629 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:29.629 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:29.629 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.629 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.629 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.629 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:29.629 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:29.629 18:20:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.629 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.629 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.629 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:29.629 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:29.629 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:29.629 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:29.629 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:29.629 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:29.629 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjAzM2Q1OWM0ZTViOWFlNWY4YmU3ZDQ5ZjAyZTAyYTc0M2RhYmRmMjgxNTZhNzkxg4dGkg==: 00:25:29.629 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWUyNzgyNWZjMGZkYjE2N2Y1MWIwODhhZjYzMzhhZTFiZDFiZTJiOWM4OTQ1NmZmSxM3tw==: 00:25:29.629 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:29.629 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:29.629 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjAzM2Q1OWM0ZTViOWFlNWY4YmU3ZDQ5ZjAyZTAyYTc0M2RhYmRmMjgxNTZhNzkxg4dGkg==: 00:25:29.629 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWUyNzgyNWZjMGZkYjE2N2Y1MWIwODhhZjYzMzhhZTFiZDFiZTJiOWM4OTQ1NmZmSxM3tw==: ]] 00:25:29.629 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWUyNzgyNWZjMGZkYjE2N2Y1MWIwODhhZjYzMzhhZTFiZDFiZTJiOWM4OTQ1NmZmSxM3tw==: 00:25:29.629 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:25:29.629 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:29.629 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:29.629 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:29.629 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:29.629 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:29.629 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:29.629 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.629 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.629 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.629 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:29.629 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:29.629 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:29.629 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:29.629 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:29.629 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:29.629 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:29.629 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:29.629 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:29.629 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:29.629 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:29.887 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:29.887 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.887 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.887 nvme0n1 00:25:29.887 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.888 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:29.888 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:29.888 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.888 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.888 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.888 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:29.888 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:29.888 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.888 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.888 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.888 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:29.888 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:29.888 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:29.888 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:29.888 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:29.888 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:29.888 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWM4M2FiMGI4NzA1MTYyMTEwYTY0YTFlNWY5ZjFjM2JfH8UA: 00:25:29.888 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODA1NDM0MDQ5OTczMDg4Nzc1ZDAzZWNjZTY4NDgwZWbydunZ: 00:25:29.888 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:29.888 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:29.888 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:NWM4M2FiMGI4NzA1MTYyMTEwYTY0YTFlNWY5ZjFjM2JfH8UA: 00:25:29.888 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODA1NDM0MDQ5OTczMDg4Nzc1ZDAzZWNjZTY4NDgwZWbydunZ: ]] 00:25:29.888 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODA1NDM0MDQ5OTczMDg4Nzc1ZDAzZWNjZTY4NDgwZWbydunZ: 00:25:29.888 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:25:29.888 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:29.888 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:29.888 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:29.888 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:29.888 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:29.888 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:29.888 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.888 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.888 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.888 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:29.888 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:29.888 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:29.888 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:29.888 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:29.888 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:29.888 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:29.888 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:29.888 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:29.888 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:29.888 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:29.888 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:29.888 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.888 18:20:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.146 nvme0n1 00:25:30.146 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.146 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:30.146 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:30.146 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:25:30.146 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.146 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.146 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:30.146 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:30.146 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.146 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.146 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.146 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:30.146 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:25:30.146 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:30.146 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:30.146 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:30.146 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:30.146 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWNlZWRlNTkxYjVjNDhkMTVlNzFhZWZmOGY2ODI5YWQzOTg2ZmRlN2YwNjM3ZTkzDCOlCg==: 00:25:30.146 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTBjNjE5YWY0NjIxZTVlM2JiZmI5OTI0MGIzYzllZGMNO4tj: 00:25:30.146 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:30.146 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:30.146 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWNlZWRlNTkxYjVjNDhkMTVlNzFhZWZmOGY2ODI5YWQzOTg2ZmRlN2YwNjM3ZTkzDCOlCg==: 00:25:30.146 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTBjNjE5YWY0NjIxZTVlM2JiZmI5OTI0MGIzYzllZGMNO4tj: ]] 00:25:30.146 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTBjNjE5YWY0NjIxZTVlM2JiZmI5OTI0MGIzYzllZGMNO4tj: 00:25:30.146 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:25:30.146 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:30.146 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:30.146 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:30.147 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:30.147 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:30.147 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:30.147 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.147 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.147 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.147 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:25:30.147 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:30.147 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:30.147 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:30.147 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:30.147 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:30.147 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:30.147 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:30.147 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:30.147 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:30.147 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:30.147 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:30.147 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.147 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.405 nvme0n1 00:25:30.405 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.405 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:30.405 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:30.405 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.405 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.405 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.405 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:30.405 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:30.405 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.405 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.405 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.405 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:30.405 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:25:30.405 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:30.405 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:30.405 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:30.405 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:30.405 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:M2U5MDIwYTBhMjE3MDM2NmY5ZmIyMWZjMjQ5MjkzZDZjOTVjZWViZTEwNzBjNDAyYTVjOWJkMjJiODk0Yjc0ZnZ837U=: 00:25:30.405 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:30.405 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:30.405 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:30.405 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2U5MDIwYTBhMjE3MDM2NmY5ZmIyMWZjMjQ5MjkzZDZjOTVjZWViZTEwNzBjNDAyYTVjOWJkMjJiODk0Yjc0ZnZ837U=: 00:25:30.405 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:30.405 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:25:30.405 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:30.405 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:30.405 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:30.405 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:30.405 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:30.405 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:30.405 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.405 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.405 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.405 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:30.405 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:30.405 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:30.405 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:30.405 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:30.405 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:30.405 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:30.405 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:30.405 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:30.405 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:30.405 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:30.406 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:30.406 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.406 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.663 nvme0n1 00:25:30.663 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.663 18:20:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:30.663 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:30.663 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.663 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.663 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.663 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:30.663 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:30.663 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.663 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.663 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.663 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:30.663 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:30.663 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:25:30.663 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:30.663 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:30.663 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:30.663 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:30.663 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTg3N2NlMTZjZGFkZDc3ZmM4MDY5MGQwMDBjNWU1MzhGYBq2: 00:25:30.663 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmRlMjc4MDgxOTYxNDllMGJiNjM3OGRmYzQ5NTg2N2RmZWM4ZGUzMTBlNDgyM2ZkNTM5OTEyYjUzNmE3ZThmZRiuwEg=: 00:25:30.663 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:30.663 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:30.663 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTg3N2NlMTZjZGFkZDc3ZmM4MDY5MGQwMDBjNWU1MzhGYBq2: 00:25:30.663 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmRlMjc4MDgxOTYxNDllMGJiNjM3OGRmYzQ5NTg2N2RmZWM4ZGUzMTBlNDgyM2ZkNTM5OTEyYjUzNmE3ZThmZRiuwEg=: ]] 00:25:30.663 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmRlMjc4MDgxOTYxNDllMGJiNjM3OGRmYzQ5NTg2N2RmZWM4ZGUzMTBlNDgyM2ZkNTM5OTEyYjUzNmE3ZThmZRiuwEg=: 00:25:30.663 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:25:30.663 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:30.663 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:30.663 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:30.663 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:30.663 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:30.663 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:30.663 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.663 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.663 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.663 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:30.663 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:30.663 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:30.663 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:30.663 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:30.663 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:30.663 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:30.663 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:30.663 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:30.663 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:30.663 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:30.663 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:30.663 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.663 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.921 nvme0n1 00:25:30.921 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.921 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:30.921 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:30.921 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.921 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.921 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.921 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:30.921 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:30.921 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.921 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.921 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.921 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:30.922 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:25:30.922 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:25:30.922 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:30.922 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:30.922 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:30.922 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjAzM2Q1OWM0ZTViOWFlNWY4YmU3ZDQ5ZjAyZTAyYTc0M2RhYmRmMjgxNTZhNzkxg4dGkg==: 00:25:30.922 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWUyNzgyNWZjMGZkYjE2N2Y1MWIwODhhZjYzMzhhZTFiZDFiZTJiOWM4OTQ1NmZmSxM3tw==: 00:25:30.922 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:30.922 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:30.922 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjAzM2Q1OWM0ZTViOWFlNWY4YmU3ZDQ5ZjAyZTAyYTc0M2RhYmRmMjgxNTZhNzkxg4dGkg==: 00:25:30.922 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWUyNzgyNWZjMGZkYjE2N2Y1MWIwODhhZjYzMzhhZTFiZDFiZTJiOWM4OTQ1NmZmSxM3tw==: ]] 00:25:30.922 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWUyNzgyNWZjMGZkYjE2N2Y1MWIwODhhZjYzMzhhZTFiZDFiZTJiOWM4OTQ1NmZmSxM3tw==: 00:25:30.922 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:25:30.922 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:30.922 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:30.922 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:30.922 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:30.922 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:30.922 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:30.922 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.922 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.922 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.922 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:30.922 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:30.922 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:30.922 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:30.922 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:30.922 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:30.922 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:30.922 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:30.922 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:30.922 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:30.922 
18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:30.922 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:30.922 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.922 18:20:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.180 nvme0n1 00:25:31.180 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.180 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:31.180 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.180 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:31.180 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.180 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.180 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:31.180 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:31.180 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.180 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.180 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.180 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:31.180 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:25:31.180 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:31.180 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:31.180 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:31.180 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:31.180 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWM4M2FiMGI4NzA1MTYyMTEwYTY0YTFlNWY5ZjFjM2JfH8UA: 00:25:31.180 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODA1NDM0MDQ5OTczMDg4Nzc1ZDAzZWNjZTY4NDgwZWbydunZ: 00:25:31.180 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:31.180 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:31.180 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWM4M2FiMGI4NzA1MTYyMTEwYTY0YTFlNWY5ZjFjM2JfH8UA: 00:25:31.180 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODA1NDM0MDQ5OTczMDg4Nzc1ZDAzZWNjZTY4NDgwZWbydunZ: ]] 00:25:31.180 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODA1NDM0MDQ5OTczMDg4Nzc1ZDAzZWNjZTY4NDgwZWbydunZ: 00:25:31.180 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:25:31.180 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:31.180 18:20:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:31.180 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:31.180 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:31.180 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:31.180 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:31.180 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.180 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.180 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.180 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:31.180 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:31.180 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:31.180 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:31.180 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:31.180 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:31.180 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:31.180 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:31.180 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:31.180 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:31.180 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:31.180 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:31.180 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.180 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.438 nvme0n1 00:25:31.438 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.438 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:31.438 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:31.438 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.438 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.438 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.438 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:31.438 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:31.438 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.438 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:31.438 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.438 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:31.438 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:25:31.438 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:31.438 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:31.438 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:31.438 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:31.438 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWNlZWRlNTkxYjVjNDhkMTVlNzFhZWZmOGY2ODI5YWQzOTg2ZmRlN2YwNjM3ZTkzDCOlCg==: 00:25:31.438 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTBjNjE5YWY0NjIxZTVlM2JiZmI5OTI0MGIzYzllZGMNO4tj: 00:25:31.438 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:31.438 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:31.438 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWNlZWRlNTkxYjVjNDhkMTVlNzFhZWZmOGY2ODI5YWQzOTg2ZmRlN2YwNjM3ZTkzDCOlCg==: 00:25:31.438 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTBjNjE5YWY0NjIxZTVlM2JiZmI5OTI0MGIzYzllZGMNO4tj: ]] 00:25:31.438 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTBjNjE5YWY0NjIxZTVlM2JiZmI5OTI0MGIzYzllZGMNO4tj: 00:25:31.438 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:25:31.438 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:31.438 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:31.438 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:31.438 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:31.438 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:31.438 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:31.438 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.438 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.438 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.438 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:31.438 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:31.438 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:31.438 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:31.438 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:31.438 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:31.438 18:20:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:31.438 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:31.438 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:31.438 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:31.438 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:31.438 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:31.438 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.438 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.696 nvme0n1 00:25:31.696 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.697 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:31.697 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:31.697 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.697 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.697 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.697 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:31.697 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:31.697 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.697 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.697 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.697 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:31.697 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:25:31.697 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:31.697 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:31.697 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:31.697 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:31.697 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2U5MDIwYTBhMjE3MDM2NmY5ZmIyMWZjMjQ5MjkzZDZjOTVjZWViZTEwNzBjNDAyYTVjOWJkMjJiODk0Yjc0ZnZ837U=: 00:25:31.697 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:31.697 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:31.697 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:31.697 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2U5MDIwYTBhMjE3MDM2NmY5ZmIyMWZjMjQ5MjkzZDZjOTVjZWViZTEwNzBjNDAyYTVjOWJkMjJiODk0Yjc0ZnZ837U=: 00:25:31.697 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:31.697 18:20:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:25:31.697 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:31.697 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:31.697 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:31.697 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:31.697 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:31.697 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:31.697 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.697 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.697 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.697 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:31.697 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:31.697 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:31.697 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:31.697 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:31.697 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:31.697 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:31.697 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:31.697 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:31.697 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:31.697 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:31.697 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:31.697 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.697 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.955 nvme0n1 00:25:31.955 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.955 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:31.955 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.955 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:31.955 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.955 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.955 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:31.955 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:25:31.955 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.955 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.955 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.955 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:31.955 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:31.955 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:25:31.955 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:31.955 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:31.955 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:31.955 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:31.955 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTg3N2NlMTZjZGFkZDc3ZmM4MDY5MGQwMDBjNWU1MzhGYBq2: 00:25:31.955 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmRlMjc4MDgxOTYxNDllMGJiNjM3OGRmYzQ5NTg2N2RmZWM4ZGUzMTBlNDgyM2ZkNTM5OTEyYjUzNmE3ZThmZRiuwEg=: 00:25:31.955 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:31.955 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:31.955 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTg3N2NlMTZjZGFkZDc3ZmM4MDY5MGQwMDBjNWU1MzhGYBq2: 00:25:31.955 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmRlMjc4MDgxOTYxNDllMGJiNjM3OGRmYzQ5NTg2N2RmZWM4ZGUzMTBlNDgyM2ZkNTM5OTEyYjUzNmE3ZThmZRiuwEg=: ]] 00:25:31.955 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmRlMjc4MDgxOTYxNDllMGJiNjM3OGRmYzQ5NTg2N2RmZWM4ZGUzMTBlNDgyM2ZkNTM5OTEyYjUzNmE3ZThmZRiuwEg=: 00:25:31.955 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:25:31.955 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:31.955 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:31.955 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:31.955 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:31.955 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:31.955 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:31.955 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.955 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.955 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.956 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:31.956 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:31.956 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # 
ip_candidates=() 00:25:31.956 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:31.956 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:31.956 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:31.956 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:31.956 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:31.956 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:31.956 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:31.956 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:31.956 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:31.956 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.956 18:20:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.214 nvme0n1 00:25:32.214 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.214 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.214 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:32.214 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.214 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.214 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.214 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.214 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:32.214 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.214 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.214 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.214 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:32.214 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:25:32.214 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:32.214 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:32.214 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:32.214 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:32.214 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjAzM2Q1OWM0ZTViOWFlNWY4YmU3ZDQ5ZjAyZTAyYTc0M2RhYmRmMjgxNTZhNzkxg4dGkg==: 00:25:32.214 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWUyNzgyNWZjMGZkYjE2N2Y1MWIwODhhZjYzMzhhZTFiZDFiZTJiOWM4OTQ1NmZmSxM3tw==: 00:25:32.214 18:20:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:32.214 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:32.214 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjAzM2Q1OWM0ZTViOWFlNWY4YmU3ZDQ5ZjAyZTAyYTc0M2RhYmRmMjgxNTZhNzkxg4dGkg==: 00:25:32.214 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWUyNzgyNWZjMGZkYjE2N2Y1MWIwODhhZjYzMzhhZTFiZDFiZTJiOWM4OTQ1NmZmSxM3tw==: ]] 00:25:32.214 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWUyNzgyNWZjMGZkYjE2N2Y1MWIwODhhZjYzMzhhZTFiZDFiZTJiOWM4OTQ1NmZmSxM3tw==: 00:25:32.214 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:25:32.214 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:32.214 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:32.214 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:32.214 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:32.214 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:32.214 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:32.214 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.214 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.214 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.214 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:32.214 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:32.214 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:32.214 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:32.214 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.214 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.214 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:32.214 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:32.214 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:32.214 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:32.214 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:32.214 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:32.214 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.214 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.472 nvme0n1 00:25:32.472 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:25:32.472 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.472 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:32.472 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.472 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.472 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.472 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.472 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:32.472 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.472 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.730 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.730 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:32.730 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:25:32.730 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:32.730 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:32.730 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:32.730 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:32.730 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWM4M2FiMGI4NzA1MTYyMTEwYTY0YTFlNWY5ZjFjM2JfH8UA: 00:25:32.730 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODA1NDM0MDQ5OTczMDg4Nzc1ZDAzZWNjZTY4NDgwZWbydunZ: 00:25:32.730 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:32.730 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:32.730 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWM4M2FiMGI4NzA1MTYyMTEwYTY0YTFlNWY5ZjFjM2JfH8UA: 00:25:32.730 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODA1NDM0MDQ5OTczMDg4Nzc1ZDAzZWNjZTY4NDgwZWbydunZ: ]] 00:25:32.730 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODA1NDM0MDQ5OTczMDg4Nzc1ZDAzZWNjZTY4NDgwZWbydunZ: 00:25:32.730 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:25:32.730 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:32.730 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:32.730 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:32.730 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:32.730 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:32.730 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:32.730 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
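
The entries above cover one pass of the test's inner loop: for each (digest, dhgroup, keyid) tuple, host/auth.sh re-keys the target, restricts the initiator's DH-HMAC-CHAP parameters, attaches a controller, and verifies it came up authenticated. A minimal sketch of the host-side half of that pass, assembled from the rpc_cmd invocations visible in the trace (the NQNs, the 10.0.0.1:4420 endpoint, and the key names key0..key4/ckey0..ckey3 are all taken from the log; rpc_cmd is the suite's wrapper for issuing RPCs to the running SPDK target):

    # One iteration of connect_authenticate (host/auth.sh@55-65), host side only.
    digest=sha256 dhgroup=ffdhe4096 keyid=2

    # Limit the initiator to the digest/dhgroup pair under test (@60).
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Attach with the DH-HMAC-CHAP key for this keyid; the controller key is
    # passed only for keyids that define one (keyid 4 has none in this run) (@61).
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

    # The attach only succeeds if authentication passed; confirm, then tear down (@64-65).
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
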
00:25:32.730 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.730 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.730 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:32.730 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:32.730 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:32.730 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:32.730 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.730 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.730 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:32.730 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:32.730 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:32.730 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:32.730 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:32.730 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:32.731 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.731 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.988 nvme0n1 00:25:32.988 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.988 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:32.988 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.988 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.988 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.988 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.988 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.988 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:32.988 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.988 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.989 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.989 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:32.989 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:25:32.989 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:32.989 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:32.989 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:25:32.989 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:32.989 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWNlZWRlNTkxYjVjNDhkMTVlNzFhZWZmOGY2ODI5YWQzOTg2ZmRlN2YwNjM3ZTkzDCOlCg==: 00:25:32.989 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTBjNjE5YWY0NjIxZTVlM2JiZmI5OTI0MGIzYzllZGMNO4tj: 00:25:32.989 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:32.989 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:32.989 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWNlZWRlNTkxYjVjNDhkMTVlNzFhZWZmOGY2ODI5YWQzOTg2ZmRlN2YwNjM3ZTkzDCOlCg==: 00:25:32.989 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTBjNjE5YWY0NjIxZTVlM2JiZmI5OTI0MGIzYzllZGMNO4tj: ]] 00:25:32.989 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTBjNjE5YWY0NjIxZTVlM2JiZmI5OTI0MGIzYzllZGMNO4tj: 00:25:32.989 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:25:32.989 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:32.989 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:32.989 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:32.989 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:32.989 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:32.989 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:32.989 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.989 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.989 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.989 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:32.989 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:32.989 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:32.989 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:32.989 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.989 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.989 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:32.989 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:32.989 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:32.989 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:32.989 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:32.989 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:32.989 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.989 18:20:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.247 nvme0n1 00:25:33.247 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.247 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.247 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.247 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.247 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.247 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.247 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.247 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.247 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.247 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.247 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.247 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:33.247 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:25:33.247 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.247 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:33.247 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:33.247 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:33.247 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2U5MDIwYTBhMjE3MDM2NmY5ZmIyMWZjMjQ5MjkzZDZjOTVjZWViZTEwNzBjNDAyYTVjOWJkMjJiODk0Yjc0ZnZ837U=: 00:25:33.247 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:33.247 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:33.247 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:33.247 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2U5MDIwYTBhMjE3MDM2NmY5ZmIyMWZjMjQ5MjkzZDZjOTVjZWViZTEwNzBjNDAyYTVjOWJkMjJiODk0Yjc0ZnZ837U=: 00:25:33.247 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:33.247 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:25:33.247 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.247 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:33.247 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:33.247 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:33.247 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.247 18:20:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:33.247 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.247 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.247 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.247 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:33.247 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:33.247 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:33.247 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:33.247 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.247 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.247 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:33.247 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.247 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:33.247 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:33.247 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:33.247 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:33.247 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.247 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.505 nvme0n1 00:25:33.505 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.505 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.505 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.505 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.505 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.505 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.505 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.505 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.505 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.505 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.505 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.505 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:33.505 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:33.505 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:25:33.505 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.505 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:33.505 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:33.505 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:33.505 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTg3N2NlMTZjZGFkZDc3ZmM4MDY5MGQwMDBjNWU1MzhGYBq2: 00:25:33.505 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmRlMjc4MDgxOTYxNDllMGJiNjM3OGRmYzQ5NTg2N2RmZWM4ZGUzMTBlNDgyM2ZkNTM5OTEyYjUzNmE3ZThmZRiuwEg=: 00:25:33.505 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:33.505 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:33.505 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTg3N2NlMTZjZGFkZDc3ZmM4MDY5MGQwMDBjNWU1MzhGYBq2: 00:25:33.505 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmRlMjc4MDgxOTYxNDllMGJiNjM3OGRmYzQ5NTg2N2RmZWM4ZGUzMTBlNDgyM2ZkNTM5OTEyYjUzNmE3ZThmZRiuwEg=: ]] 00:25:33.505 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmRlMjc4MDgxOTYxNDllMGJiNjM3OGRmYzQ5NTg2N2RmZWM4ZGUzMTBlNDgyM2ZkNTM5OTEyYjUzNmE3ZThmZRiuwEg=: 00:25:33.505 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:25:33.505 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.505 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:33.505 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:33.505 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:33.505 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.505 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:33.505 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.505 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.505 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.505 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:33.505 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:33.505 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:33.505 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:33.505 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.505 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.505 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:33.505 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.505 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # 
ip=NVMF_INITIATOR_IP 00:25:33.505 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:33.505 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:33.505 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:33.505 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.505 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.071 nvme0n1 00:25:34.071 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.071 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:34.071 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:34.071 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.071 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.071 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.071 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:34.071 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:34.071 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.071 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.071 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.071 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:34.071 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:25:34.071 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:34.071 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:34.071 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:34.071 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:34.071 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjAzM2Q1OWM0ZTViOWFlNWY4YmU3ZDQ5ZjAyZTAyYTc0M2RhYmRmMjgxNTZhNzkxg4dGkg==: 00:25:34.071 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWUyNzgyNWZjMGZkYjE2N2Y1MWIwODhhZjYzMzhhZTFiZDFiZTJiOWM4OTQ1NmZmSxM3tw==: 00:25:34.071 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:34.071 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:34.071 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjAzM2Q1OWM0ZTViOWFlNWY4YmU3ZDQ5ZjAyZTAyYTc0M2RhYmRmMjgxNTZhNzkxg4dGkg==: 00:25:34.071 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWUyNzgyNWZjMGZkYjE2N2Y1MWIwODhhZjYzMzhhZTFiZDFiZTJiOWM4OTQ1NmZmSxM3tw==: ]] 00:25:34.071 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWUyNzgyNWZjMGZkYjE2N2Y1MWIwODhhZjYzMzhhZTFiZDFiZTJiOWM4OTQ1NmZmSxM3tw==: 
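
The DHHC-1 strings echoed at host/auth.sh@50-51 are the target-side halves of the key pairs. bash xtrace does not print redirection targets, so the trace shows only the echo payloads; assuming the standard kernel nvmet configfs host attributes as the destinations (the paths are an assumption, they do not appear in the log), nvmet_auth_set_key plausibly reduces to:

    # Target-side re-key per iteration (host/auth.sh@42-51); the payloads are from
    # the trace, the configfs paths are assumed.
    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha256)' > "$host/dhchap_hash"      # digest under test (@48)
    echo ffdhe6144      > "$host/dhchap_dhgroup"   # DH group under test (@49)
    echo "$key"         > "$host/dhchap_key"       # e.g. DHHC-1:00:<base64>: (@50)
    # Bidirectional auth only when the keyid defines a controller key (@51):
    [[ -n $ckey ]] && echo "$ckey" > "$host/dhchap_ctrl_key"

In the DHHC-1:nn:<base64>: encoding, the two-digit field records the hash variant associated with the secret (00 raw, 01 SHA-256, 02 SHA-384, 03 SHA-512), which is why the five keys in this run deliberately mix all four variants.
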
00:25:34.071 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:25:34.071 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:34.071 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:34.071 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:34.071 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:34.071 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:34.071 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:34.072 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.072 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.072 18:20:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.072 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:34.072 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:34.072 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:34.072 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:34.072 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:34.072 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:34.072 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:34.072 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:34.072 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:34.072 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:34.072 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:34.072 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:34.072 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.072 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.329 nvme0n1 00:25:34.329 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.330 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:34.330 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:34.330 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.330 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.330 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.587 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:34.587 18:20:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:34.587 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.587 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.588 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.588 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:34.588 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:25:34.588 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:34.588 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:34.588 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:34.588 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:34.588 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWM4M2FiMGI4NzA1MTYyMTEwYTY0YTFlNWY5ZjFjM2JfH8UA: 00:25:34.588 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODA1NDM0MDQ5OTczMDg4Nzc1ZDAzZWNjZTY4NDgwZWbydunZ: 00:25:34.588 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:34.588 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:34.588 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWM4M2FiMGI4NzA1MTYyMTEwYTY0YTFlNWY5ZjFjM2JfH8UA: 00:25:34.588 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODA1NDM0MDQ5OTczMDg4Nzc1ZDAzZWNjZTY4NDgwZWbydunZ: ]] 00:25:34.588 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODA1NDM0MDQ5OTczMDg4Nzc1ZDAzZWNjZTY4NDgwZWbydunZ: 00:25:34.588 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:25:34.588 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:34.588 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:34.588 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:34.588 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:34.588 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:34.588 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:34.588 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.588 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.588 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.588 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:34.588 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:34.588 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:34.588 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:34.588 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:34.588 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:34.588 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:34.588 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:34.588 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:34.588 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:34.588 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:34.588 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:34.588 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.588 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.846 nvme0n1 00:25:34.846 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.846 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:34.846 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:34.846 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.846 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.846 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.846 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:34.846 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:34.846 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.846 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.846 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.846 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:34.846 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:25:34.846 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:34.846 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:34.846 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:34.846 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:34.846 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWNlZWRlNTkxYjVjNDhkMTVlNzFhZWZmOGY2ODI5YWQzOTg2ZmRlN2YwNjM3ZTkzDCOlCg==: 00:25:34.846 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTBjNjE5YWY0NjIxZTVlM2JiZmI5OTI0MGIzYzllZGMNO4tj: 00:25:34.846 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:34.846 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:34.846 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:NWNlZWRlNTkxYjVjNDhkMTVlNzFhZWZmOGY2ODI5YWQzOTg2ZmRlN2YwNjM3ZTkzDCOlCg==: 00:25:34.846 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTBjNjE5YWY0NjIxZTVlM2JiZmI5OTI0MGIzYzllZGMNO4tj: ]] 00:25:34.846 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTBjNjE5YWY0NjIxZTVlM2JiZmI5OTI0MGIzYzllZGMNO4tj: 00:25:34.846 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:25:34.846 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:34.846 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:34.846 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:34.846 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:34.846 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:34.846 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:34.846 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.846 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.846 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.846 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:34.846 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:34.846 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:34.846 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:34.846 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:34.846 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:34.846 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:34.846 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:34.846 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:34.846 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:34.847 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:34.847 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:34.847 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.847 18:20:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.413 nvme0n1 00:25:35.413 18:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.413 18:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:35.413 18:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.413 18:20:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:35.413 18:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.413 18:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.413 18:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:35.413 18:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:35.413 18:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.413 18:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.413 18:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.413 18:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:35.413 18:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:25:35.413 18:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:35.413 18:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:35.413 18:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:35.413 18:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:35.413 18:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2U5MDIwYTBhMjE3MDM2NmY5ZmIyMWZjMjQ5MjkzZDZjOTVjZWViZTEwNzBjNDAyYTVjOWJkMjJiODk0Yjc0ZnZ837U=: 00:25:35.413 18:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:35.413 18:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:35.413 18:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:35.413 18:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2U5MDIwYTBhMjE3MDM2NmY5ZmIyMWZjMjQ5MjkzZDZjOTVjZWViZTEwNzBjNDAyYTVjOWJkMjJiODk0Yjc0ZnZ837U=: 00:25:35.413 18:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:35.413 18:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:25:35.413 18:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:35.413 18:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:35.413 18:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:35.413 18:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:35.413 18:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:35.413 18:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:35.413 18:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.413 18:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.413 18:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.413 18:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:35.413 18:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:35.413 18:20:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:35.413 18:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:35.413 18:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:35.413 18:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:35.413 18:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:35.413 18:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:35.413 18:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:35.413 18:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:35.413 18:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:35.413 18:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:35.413 18:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.413 18:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.671 nvme0n1 00:25:35.671 18:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.671 18:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:35.671 18:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:35.671 18:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.671 18:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.671 18:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.929 18:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:35.929 18:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:35.929 18:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.929 18:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.929 18:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.929 18:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:35.929 18:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:35.929 18:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:25:35.929 18:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:35.929 18:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:35.929 18:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:35.929 18:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:35.929 18:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTg3N2NlMTZjZGFkZDc3ZmM4MDY5MGQwMDBjNWU1MzhGYBq2: 00:25:35.929 18:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MmRlMjc4MDgxOTYxNDllMGJiNjM3OGRmYzQ5NTg2N2RmZWM4ZGUzMTBlNDgyM2ZkNTM5OTEyYjUzNmE3ZThmZRiuwEg=: 00:25:35.929 18:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:35.929 18:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:35.929 18:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTg3N2NlMTZjZGFkZDc3ZmM4MDY5MGQwMDBjNWU1MzhGYBq2: 00:25:35.929 18:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmRlMjc4MDgxOTYxNDllMGJiNjM3OGRmYzQ5NTg2N2RmZWM4ZGUzMTBlNDgyM2ZkNTM5OTEyYjUzNmE3ZThmZRiuwEg=: ]] 00:25:35.929 18:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmRlMjc4MDgxOTYxNDllMGJiNjM3OGRmYzQ5NTg2N2RmZWM4ZGUzMTBlNDgyM2ZkNTM5OTEyYjUzNmE3ZThmZRiuwEg=: 00:25:35.929 18:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:25:35.929 18:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:35.929 18:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:35.929 18:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:35.929 18:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:35.929 18:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:35.929 18:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:35.929 18:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.929 18:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.929 18:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:35.929 18:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:35.929 18:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:35.929 18:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:35.929 18:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:35.929 18:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:35.929 18:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:35.929 18:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:35.930 18:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:35.930 18:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:35.930 18:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:35.930 18:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:35.930 18:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:35.930 18:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:35.930 18:20:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:36.496 nvme0n1 00:25:36.496 18:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:36.496 18:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:36.496 18:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:36.496 18:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:36.496 18:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.496 18:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:36.496 18:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:36.496 18:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:36.496 18:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:36.496 18:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.496 18:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:36.496 18:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:36.496 18:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:25:36.496 18:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:36.496 18:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:36.496 18:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:36.496 18:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:36.496 18:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjAzM2Q1OWM0ZTViOWFlNWY4YmU3ZDQ5ZjAyZTAyYTc0M2RhYmRmMjgxNTZhNzkxg4dGkg==: 00:25:36.496 18:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWUyNzgyNWZjMGZkYjE2N2Y1MWIwODhhZjYzMzhhZTFiZDFiZTJiOWM4OTQ1NmZmSxM3tw==: 00:25:36.496 18:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:36.496 18:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:36.496 18:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjAzM2Q1OWM0ZTViOWFlNWY4YmU3ZDQ5ZjAyZTAyYTc0M2RhYmRmMjgxNTZhNzkxg4dGkg==: 00:25:36.496 18:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWUyNzgyNWZjMGZkYjE2N2Y1MWIwODhhZjYzMzhhZTFiZDFiZTJiOWM4OTQ1NmZmSxM3tw==: ]] 00:25:36.496 18:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWUyNzgyNWZjMGZkYjE2N2Y1MWIwODhhZjYzMzhhZTFiZDFiZTJiOWM4OTQ1NmZmSxM3tw==: 00:25:36.496 18:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:25:36.496 18:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:36.496 18:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:36.496 18:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:36.496 18:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:36.496 18:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:25:36.496 18:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:36.496 18:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:36.496 18:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.496 18:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:36.496 18:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:36.496 18:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:36.496 18:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:36.496 18:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:36.496 18:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:36.496 18:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:36.496 18:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:36.496 18:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:36.496 18:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:36.496 18:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:36.496 18:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:36.496 18:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:36.496 18:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:36.496 18:20:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.062 nvme0n1 00:25:37.062 18:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.062 18:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:37.062 18:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:37.062 18:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.062 18:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.062 18:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.062 18:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:37.062 18:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:37.062 18:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.062 18:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.062 18:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.062 18:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:37.062 18:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:25:37.062 
18:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:37.062 18:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:37.062 18:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:37.062 18:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:37.062 18:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWM4M2FiMGI4NzA1MTYyMTEwYTY0YTFlNWY5ZjFjM2JfH8UA: 00:25:37.062 18:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODA1NDM0MDQ5OTczMDg4Nzc1ZDAzZWNjZTY4NDgwZWbydunZ: 00:25:37.062 18:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:37.062 18:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:37.062 18:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWM4M2FiMGI4NzA1MTYyMTEwYTY0YTFlNWY5ZjFjM2JfH8UA: 00:25:37.062 18:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODA1NDM0MDQ5OTczMDg4Nzc1ZDAzZWNjZTY4NDgwZWbydunZ: ]] 00:25:37.062 18:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODA1NDM0MDQ5OTczMDg4Nzc1ZDAzZWNjZTY4NDgwZWbydunZ: 00:25:37.062 18:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:25:37.062 18:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:37.062 18:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:37.062 18:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:37.062 18:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:37.062 18:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:37.062 18:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:37.062 18:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.062 18:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.062 18:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.062 18:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:37.062 18:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:37.062 18:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:37.063 18:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:37.063 18:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:37.063 18:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:37.063 18:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:37.063 18:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:37.063 18:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:37.063 18:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:37.063 18:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:37.063 18:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:37.063 18:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.063 18:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.628 nvme0n1 00:25:37.628 18:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.628 18:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:37.628 18:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:37.628 18:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.628 18:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.628 18:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.886 18:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:37.886 18:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:37.886 18:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.886 18:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.886 18:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.886 18:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:37.886 18:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:25:37.886 18:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:37.886 18:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:37.886 18:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:37.886 18:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:37.886 18:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWNlZWRlNTkxYjVjNDhkMTVlNzFhZWZmOGY2ODI5YWQzOTg2ZmRlN2YwNjM3ZTkzDCOlCg==: 00:25:37.886 18:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTBjNjE5YWY0NjIxZTVlM2JiZmI5OTI0MGIzYzllZGMNO4tj: 00:25:37.886 18:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:37.886 18:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:37.886 18:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWNlZWRlNTkxYjVjNDhkMTVlNzFhZWZmOGY2ODI5YWQzOTg2ZmRlN2YwNjM3ZTkzDCOlCg==: 00:25:37.886 18:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTBjNjE5YWY0NjIxZTVlM2JiZmI5OTI0MGIzYzllZGMNO4tj: ]] 00:25:37.886 18:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTBjNjE5YWY0NjIxZTVlM2JiZmI5OTI0MGIzYzllZGMNO4tj: 00:25:37.886 18:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:25:37.886 18:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:37.886 
18:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:37.886 18:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:37.886 18:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:37.886 18:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:37.886 18:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:37.886 18:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.886 18:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.886 18:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.886 18:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:37.886 18:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:37.886 18:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:37.886 18:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:37.886 18:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:37.886 18:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:37.886 18:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:37.886 18:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:37.886 18:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:37.886 18:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:37.886 18:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:37.886 18:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:37.886 18:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.886 18:20:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.452 nvme0n1 00:25:38.452 18:20:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.452 18:20:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:38.452 18:20:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:38.452 18:20:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.452 18:20:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.452 18:20:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.452 18:20:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:38.452 18:20:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:38.452 18:20:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.452 18:20:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:38.452 18:20:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.452 18:20:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:38.452 18:20:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:25:38.452 18:20:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:38.452 18:20:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:38.452 18:20:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:38.452 18:20:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:38.452 18:20:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2U5MDIwYTBhMjE3MDM2NmY5ZmIyMWZjMjQ5MjkzZDZjOTVjZWViZTEwNzBjNDAyYTVjOWJkMjJiODk0Yjc0ZnZ837U=: 00:25:38.452 18:20:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:38.452 18:20:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:38.452 18:20:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:38.452 18:20:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2U5MDIwYTBhMjE3MDM2NmY5ZmIyMWZjMjQ5MjkzZDZjOTVjZWViZTEwNzBjNDAyYTVjOWJkMjJiODk0Yjc0ZnZ837U=: 00:25:38.452 18:20:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:38.452 18:20:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:25:38.452 18:20:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:38.452 18:20:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:38.452 18:20:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:38.452 18:20:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:38.452 18:20:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:38.452 18:20:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:38.452 18:20:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.452 18:20:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.452 18:20:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:38.452 18:20:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:38.452 18:20:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:38.452 18:20:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:38.452 18:20:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:38.452 18:20:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:38.452 18:20:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:38.452 18:20:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:38.452 18:20:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:38.452 18:20:31 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:38.452 18:20:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:38.452 18:20:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:38.452 18:20:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:38.452 18:20:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:38.453 18:20:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.019 nvme0n1 00:25:39.019 18:20:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.019 18:20:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:39.019 18:20:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:39.019 18:20:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.019 18:20:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.019 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.019 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:39.019 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:39.019 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.019 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.019 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.019 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:39.019 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:39.019 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:39.019 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:25:39.019 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:39.019 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:39.019 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:39.019 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:39.019 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTg3N2NlMTZjZGFkZDc3ZmM4MDY5MGQwMDBjNWU1MzhGYBq2: 00:25:39.019 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmRlMjc4MDgxOTYxNDllMGJiNjM3OGRmYzQ5NTg2N2RmZWM4ZGUzMTBlNDgyM2ZkNTM5OTEyYjUzNmE3ZThmZRiuwEg=: 00:25:39.019 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:39.019 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:39.019 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTg3N2NlMTZjZGFkZDc3ZmM4MDY5MGQwMDBjNWU1MzhGYBq2: 00:25:39.019 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:MmRlMjc4MDgxOTYxNDllMGJiNjM3OGRmYzQ5NTg2N2RmZWM4ZGUzMTBlNDgyM2ZkNTM5OTEyYjUzNmE3ZThmZRiuwEg=: ]] 00:25:39.019 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmRlMjc4MDgxOTYxNDllMGJiNjM3OGRmYzQ5NTg2N2RmZWM4ZGUzMTBlNDgyM2ZkNTM5OTEyYjUzNmE3ZThmZRiuwEg=: 00:25:39.019 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:25:39.019 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:39.019 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:39.019 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:39.019 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:39.019 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:39.019 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:39.019 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.019 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.019 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.019 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:39.019 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:39.019 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:39.019 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:39.019 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:39.019 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:39.019 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:39.019 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:39.019 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:39.019 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:39.019 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:39.019 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:39.019 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.019 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.278 nvme0n1 00:25:39.278 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.278 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:39.278 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:39.278 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.278 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:39.278 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.278 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:39.278 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:39.278 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.278 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.278 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.278 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:39.278 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:25:39.278 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:39.278 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:39.278 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:39.278 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:39.278 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjAzM2Q1OWM0ZTViOWFlNWY4YmU3ZDQ5ZjAyZTAyYTc0M2RhYmRmMjgxNTZhNzkxg4dGkg==: 00:25:39.278 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWUyNzgyNWZjMGZkYjE2N2Y1MWIwODhhZjYzMzhhZTFiZDFiZTJiOWM4OTQ1NmZmSxM3tw==: 00:25:39.278 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:39.278 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:39.278 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjAzM2Q1OWM0ZTViOWFlNWY4YmU3ZDQ5ZjAyZTAyYTc0M2RhYmRmMjgxNTZhNzkxg4dGkg==: 00:25:39.278 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWUyNzgyNWZjMGZkYjE2N2Y1MWIwODhhZjYzMzhhZTFiZDFiZTJiOWM4OTQ1NmZmSxM3tw==: ]] 00:25:39.278 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWUyNzgyNWZjMGZkYjE2N2Y1MWIwODhhZjYzMzhhZTFiZDFiZTJiOWM4OTQ1NmZmSxM3tw==: 00:25:39.278 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:25:39.278 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:39.278 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:39.278 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:39.278 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:39.278 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:39.278 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:39.278 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.278 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.278 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.278 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:25:39.278 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:39.278 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:39.278 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:39.278 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:39.278 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:39.278 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:39.278 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:39.278 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:39.278 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:39.278 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:39.278 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:39.278 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.278 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.536 nvme0n1 00:25:39.536 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.536 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:39.536 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:39.536 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.536 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.536 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.536 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:39.536 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:39.536 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.536 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.536 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.536 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:39.536 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:25:39.536 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:39.536 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:39.536 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:39.536 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:39.536 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWM4M2FiMGI4NzA1MTYyMTEwYTY0YTFlNWY5ZjFjM2JfH8UA: 00:25:39.536 18:20:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODA1NDM0MDQ5OTczMDg4Nzc1ZDAzZWNjZTY4NDgwZWbydunZ: 00:25:39.536 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:39.536 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:39.536 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWM4M2FiMGI4NzA1MTYyMTEwYTY0YTFlNWY5ZjFjM2JfH8UA: 00:25:39.536 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODA1NDM0MDQ5OTczMDg4Nzc1ZDAzZWNjZTY4NDgwZWbydunZ: ]] 00:25:39.536 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODA1NDM0MDQ5OTczMDg4Nzc1ZDAzZWNjZTY4NDgwZWbydunZ: 00:25:39.536 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:25:39.536 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:39.536 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:39.537 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:39.537 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:39.537 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:39.537 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:39.537 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.537 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.537 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.537 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:39.537 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:39.537 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:39.537 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:39.537 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:39.537 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:39.537 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:39.537 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:39.537 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:39.537 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:39.537 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:39.537 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:39.537 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.537 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.795 nvme0n1 00:25:39.795 18:20:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.795 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:39.795 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:39.795 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.795 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.795 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.795 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:39.795 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:39.795 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.795 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.795 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.795 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:39.795 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:25:39.795 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:39.795 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:39.795 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:39.795 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:39.795 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWNlZWRlNTkxYjVjNDhkMTVlNzFhZWZmOGY2ODI5YWQzOTg2ZmRlN2YwNjM3ZTkzDCOlCg==: 00:25:39.795 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTBjNjE5YWY0NjIxZTVlM2JiZmI5OTI0MGIzYzllZGMNO4tj: 00:25:39.795 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:39.795 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:39.795 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWNlZWRlNTkxYjVjNDhkMTVlNzFhZWZmOGY2ODI5YWQzOTg2ZmRlN2YwNjM3ZTkzDCOlCg==: 00:25:39.795 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTBjNjE5YWY0NjIxZTVlM2JiZmI5OTI0MGIzYzllZGMNO4tj: ]] 00:25:39.795 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTBjNjE5YWY0NjIxZTVlM2JiZmI5OTI0MGIzYzllZGMNO4tj: 00:25:39.795 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:25:39.795 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:39.795 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:39.795 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:39.795 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:39.795 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:39.795 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:25:39.795 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.795 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.795 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:39.795 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:39.795 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:39.795 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:39.795 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:39.795 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:39.795 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:39.795 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:39.796 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:39.796 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:39.796 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:39.796 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:39.796 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:39.796 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:39.796 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.054 nvme0n1 00:25:40.054 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.054 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:40.054 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:40.054 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.054 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.054 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.054 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:40.054 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:40.054 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.054 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.054 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.054 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:40.054 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:25:40.054 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:40.054 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:25:40.054 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:40.054 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:40.054 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2U5MDIwYTBhMjE3MDM2NmY5ZmIyMWZjMjQ5MjkzZDZjOTVjZWViZTEwNzBjNDAyYTVjOWJkMjJiODk0Yjc0ZnZ837U=: 00:25:40.054 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:40.054 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:40.054 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:40.054 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2U5MDIwYTBhMjE3MDM2NmY5ZmIyMWZjMjQ5MjkzZDZjOTVjZWViZTEwNzBjNDAyYTVjOWJkMjJiODk0Yjc0ZnZ837U=: 00:25:40.054 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:40.054 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:25:40.054 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:40.054 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:40.054 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:40.054 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:40.054 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:40.054 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:40.054 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.054 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.054 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.054 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:40.054 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:40.054 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:40.054 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:40.054 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:40.054 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:40.054 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:40.054 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:40.054 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:40.054 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:40.054 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:40.054 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:40.054 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.054 18:20:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.054 nvme0n1 00:25:40.054 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.054 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:40.054 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:40.054 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.054 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.054 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.313 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:40.313 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:40.313 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.313 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.313 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.313 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:40.313 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:40.313 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:25:40.313 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:40.313 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:40.313 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:40.313 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:40.313 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTg3N2NlMTZjZGFkZDc3ZmM4MDY5MGQwMDBjNWU1MzhGYBq2: 00:25:40.313 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmRlMjc4MDgxOTYxNDllMGJiNjM3OGRmYzQ5NTg2N2RmZWM4ZGUzMTBlNDgyM2ZkNTM5OTEyYjUzNmE3ZThmZRiuwEg=: 00:25:40.313 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:40.313 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:40.313 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTg3N2NlMTZjZGFkZDc3ZmM4MDY5MGQwMDBjNWU1MzhGYBq2: 00:25:40.313 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmRlMjc4MDgxOTYxNDllMGJiNjM3OGRmYzQ5NTg2N2RmZWM4ZGUzMTBlNDgyM2ZkNTM5OTEyYjUzNmE3ZThmZRiuwEg=: ]] 00:25:40.313 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmRlMjc4MDgxOTYxNDllMGJiNjM3OGRmYzQ5NTg2N2RmZWM4ZGUzMTBlNDgyM2ZkNTM5OTEyYjUzNmE3ZThmZRiuwEg=: 00:25:40.313 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:25:40.313 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:40.313 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:40.313 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:25:40.313 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:40.313 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:40.313 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:40.313 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.313 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.313 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.313 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:40.313 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:40.313 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:40.313 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:40.313 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:40.313 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:40.313 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:40.313 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:40.313 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:40.313 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:40.313 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:40.313 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:40.313 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.313 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.313 nvme0n1 00:25:40.313 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.313 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:40.313 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:40.313 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.313 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.313 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.571 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:40.571 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:40.571 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.571 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.571 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.571 
18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:40.571 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:25:40.571 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:40.571 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:40.571 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:40.571 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:40.571 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjAzM2Q1OWM0ZTViOWFlNWY4YmU3ZDQ5ZjAyZTAyYTc0M2RhYmRmMjgxNTZhNzkxg4dGkg==: 00:25:40.571 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWUyNzgyNWZjMGZkYjE2N2Y1MWIwODhhZjYzMzhhZTFiZDFiZTJiOWM4OTQ1NmZmSxM3tw==: 00:25:40.571 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:40.571 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:40.571 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjAzM2Q1OWM0ZTViOWFlNWY4YmU3ZDQ5ZjAyZTAyYTc0M2RhYmRmMjgxNTZhNzkxg4dGkg==: 00:25:40.571 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWUyNzgyNWZjMGZkYjE2N2Y1MWIwODhhZjYzMzhhZTFiZDFiZTJiOWM4OTQ1NmZmSxM3tw==: ]] 00:25:40.571 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWUyNzgyNWZjMGZkYjE2N2Y1MWIwODhhZjYzMzhhZTFiZDFiZTJiOWM4OTQ1NmZmSxM3tw==: 00:25:40.571 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:25:40.571 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:40.571 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:40.571 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:40.571 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:40.571 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:40.571 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:40.571 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.571 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.571 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.571 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:40.571 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:40.571 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:40.571 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:40.571 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:40.571 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:40.571 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:40.571 18:20:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:40.571 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:40.571 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:40.571 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:40.571 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:40.571 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.571 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.571 nvme0n1 00:25:40.571 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.571 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:40.571 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:40.571 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.571 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.571 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.829 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:40.829 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:40.829 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.829 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.829 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.829 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:40.829 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:25:40.829 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:40.829 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:40.829 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:40.829 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:40.829 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWM4M2FiMGI4NzA1MTYyMTEwYTY0YTFlNWY5ZjFjM2JfH8UA: 00:25:40.829 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODA1NDM0MDQ5OTczMDg4Nzc1ZDAzZWNjZTY4NDgwZWbydunZ: 00:25:40.829 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:40.829 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:40.829 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWM4M2FiMGI4NzA1MTYyMTEwYTY0YTFlNWY5ZjFjM2JfH8UA: 00:25:40.829 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODA1NDM0MDQ5OTczMDg4Nzc1ZDAzZWNjZTY4NDgwZWbydunZ: ]] 00:25:40.830 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:ODA1NDM0MDQ5OTczMDg4Nzc1ZDAzZWNjZTY4NDgwZWbydunZ: 00:25:40.830 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:25:40.830 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:40.830 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:40.830 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:40.830 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:40.830 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:40.830 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:40.830 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.830 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.830 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.830 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:40.830 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:40.830 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:40.830 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:40.830 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:40.830 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:40.830 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:40.830 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:40.830 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:40.830 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:40.830 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:40.830 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:40.830 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.830 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.830 nvme0n1 00:25:40.830 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.830 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:40.830 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:40.830 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.830 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.830 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.088 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:25:41.088 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:41.088 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.088 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.088 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.088 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:41.088 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:25:41.088 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:41.088 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:41.088 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:41.088 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:41.088 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWNlZWRlNTkxYjVjNDhkMTVlNzFhZWZmOGY2ODI5YWQzOTg2ZmRlN2YwNjM3ZTkzDCOlCg==: 00:25:41.088 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTBjNjE5YWY0NjIxZTVlM2JiZmI5OTI0MGIzYzllZGMNO4tj: 00:25:41.088 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:41.088 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:41.088 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWNlZWRlNTkxYjVjNDhkMTVlNzFhZWZmOGY2ODI5YWQzOTg2ZmRlN2YwNjM3ZTkzDCOlCg==: 00:25:41.088 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTBjNjE5YWY0NjIxZTVlM2JiZmI5OTI0MGIzYzllZGMNO4tj: ]] 00:25:41.088 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTBjNjE5YWY0NjIxZTVlM2JiZmI5OTI0MGIzYzllZGMNO4tj: 00:25:41.088 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:25:41.088 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:41.088 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:41.088 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:41.088 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:41.088 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:41.088 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:41.088 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.088 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.088 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.088 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:41.088 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:41.088 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:41.088 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local 
-A ip_candidates 00:25:41.088 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:41.088 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:41.088 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:41.088 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:41.088 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:41.088 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:41.088 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:41.088 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:41.088 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.088 18:20:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.088 nvme0n1 00:25:41.088 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.088 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:41.088 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:41.088 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.088 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.088 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.346 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:41.346 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:41.346 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.346 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.346 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.346 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:41.346 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:25:41.346 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:41.346 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:41.346 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:41.346 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:41.346 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2U5MDIwYTBhMjE3MDM2NmY5ZmIyMWZjMjQ5MjkzZDZjOTVjZWViZTEwNzBjNDAyYTVjOWJkMjJiODk0Yjc0ZnZ837U=: 00:25:41.346 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:41.346 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:41.346 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:41.346 
18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2U5MDIwYTBhMjE3MDM2NmY5ZmIyMWZjMjQ5MjkzZDZjOTVjZWViZTEwNzBjNDAyYTVjOWJkMjJiODk0Yjc0ZnZ837U=: 00:25:41.346 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:41.346 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:25:41.346 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:41.346 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:41.346 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:41.346 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:41.346 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:41.346 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:41.346 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.346 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.346 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.346 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:41.346 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:41.346 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:41.346 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:41.346 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:41.346 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:41.346 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:41.346 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:41.346 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:41.346 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:41.346 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:41.346 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:41.346 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.346 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.346 nvme0n1 00:25:41.346 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.346 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:41.346 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:41.346 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.346 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.346 
18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.604 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:41.604 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:41.604 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.604 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.604 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.604 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:41.604 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:41.604 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:25:41.604 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:41.604 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:41.604 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:41.604 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:41.604 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTg3N2NlMTZjZGFkZDc3ZmM4MDY5MGQwMDBjNWU1MzhGYBq2: 00:25:41.604 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmRlMjc4MDgxOTYxNDllMGJiNjM3OGRmYzQ5NTg2N2RmZWM4ZGUzMTBlNDgyM2ZkNTM5OTEyYjUzNmE3ZThmZRiuwEg=: 00:25:41.604 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:41.604 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:41.604 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTg3N2NlMTZjZGFkZDc3ZmM4MDY5MGQwMDBjNWU1MzhGYBq2: 00:25:41.604 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmRlMjc4MDgxOTYxNDllMGJiNjM3OGRmYzQ5NTg2N2RmZWM4ZGUzMTBlNDgyM2ZkNTM5OTEyYjUzNmE3ZThmZRiuwEg=: ]] 00:25:41.604 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmRlMjc4MDgxOTYxNDllMGJiNjM3OGRmYzQ5NTg2N2RmZWM4ZGUzMTBlNDgyM2ZkNTM5OTEyYjUzNmE3ZThmZRiuwEg=: 00:25:41.604 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:25:41.604 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:41.604 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:41.604 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:41.604 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:41.604 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:41.604 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:41.604 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.604 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.604 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:25:41.604 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:41.604 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:41.604 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:41.604 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:41.604 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:41.604 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:41.604 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:41.604 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:41.604 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:41.604 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:41.604 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:41.604 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:41.604 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.604 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.862 nvme0n1 00:25:41.862 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.862 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:41.862 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:41.862 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.862 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.862 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.862 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:41.862 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:41.862 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.862 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.862 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.862 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:41.862 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:25:41.862 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:41.862 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:41.862 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:41.862 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:41.862 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NjAzM2Q1OWM0ZTViOWFlNWY4YmU3ZDQ5ZjAyZTAyYTc0M2RhYmRmMjgxNTZhNzkxg4dGkg==: 00:25:41.862 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWUyNzgyNWZjMGZkYjE2N2Y1MWIwODhhZjYzMzhhZTFiZDFiZTJiOWM4OTQ1NmZmSxM3tw==: 00:25:41.862 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:41.862 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:41.862 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjAzM2Q1OWM0ZTViOWFlNWY4YmU3ZDQ5ZjAyZTAyYTc0M2RhYmRmMjgxNTZhNzkxg4dGkg==: 00:25:41.862 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWUyNzgyNWZjMGZkYjE2N2Y1MWIwODhhZjYzMzhhZTFiZDFiZTJiOWM4OTQ1NmZmSxM3tw==: ]] 00:25:41.862 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWUyNzgyNWZjMGZkYjE2N2Y1MWIwODhhZjYzMzhhZTFiZDFiZTJiOWM4OTQ1NmZmSxM3tw==: 00:25:41.862 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:25:41.862 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:41.862 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:41.862 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:41.862 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:41.862 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:41.862 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:41.862 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.862 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.862 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:41.862 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:41.862 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:41.862 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:41.862 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:41.862 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:41.862 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:41.862 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:41.862 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:41.862 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:41.862 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:41.862 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:41.862 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:41.862 18:20:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:41.862 18:20:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.120 nvme0n1 00:25:42.120 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.120 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:42.120 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:42.120 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.120 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.120 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.120 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:42.120 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:42.120 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.120 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.120 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.120 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:42.120 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:25:42.120 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:42.121 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:42.121 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:42.121 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:42.121 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWM4M2FiMGI4NzA1MTYyMTEwYTY0YTFlNWY5ZjFjM2JfH8UA: 00:25:42.121 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODA1NDM0MDQ5OTczMDg4Nzc1ZDAzZWNjZTY4NDgwZWbydunZ: 00:25:42.121 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:42.121 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:42.121 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWM4M2FiMGI4NzA1MTYyMTEwYTY0YTFlNWY5ZjFjM2JfH8UA: 00:25:42.121 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODA1NDM0MDQ5OTczMDg4Nzc1ZDAzZWNjZTY4NDgwZWbydunZ: ]] 00:25:42.121 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODA1NDM0MDQ5OTczMDg4Nzc1ZDAzZWNjZTY4NDgwZWbydunZ: 00:25:42.121 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:25:42.121 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:42.121 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:42.121 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:42.121 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:42.121 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:42.121 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:42.121 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.121 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.121 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.121 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:42.121 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:42.121 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:42.121 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:42.121 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:42.121 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:42.121 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:42.121 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:42.121 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:42.121 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:42.121 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:42.121 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:42.121 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.121 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.379 nvme0n1 00:25:42.379 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.379 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:42.379 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:42.379 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.379 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.379 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.379 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:42.379 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:42.379 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.379 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.379 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.379 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:42.379 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:25:42.379 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:42.379 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:42.379 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:42.379 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:42.379 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWNlZWRlNTkxYjVjNDhkMTVlNzFhZWZmOGY2ODI5YWQzOTg2ZmRlN2YwNjM3ZTkzDCOlCg==: 00:25:42.379 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTBjNjE5YWY0NjIxZTVlM2JiZmI5OTI0MGIzYzllZGMNO4tj: 00:25:42.379 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:42.379 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:42.379 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWNlZWRlNTkxYjVjNDhkMTVlNzFhZWZmOGY2ODI5YWQzOTg2ZmRlN2YwNjM3ZTkzDCOlCg==: 00:25:42.379 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTBjNjE5YWY0NjIxZTVlM2JiZmI5OTI0MGIzYzllZGMNO4tj: ]] 00:25:42.379 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTBjNjE5YWY0NjIxZTVlM2JiZmI5OTI0MGIzYzllZGMNO4tj: 00:25:42.379 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:25:42.379 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:42.379 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:42.379 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:42.379 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:42.379 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:42.379 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:42.379 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.379 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.379 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.379 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:42.379 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:42.379 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:42.379 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:42.379 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:42.379 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:42.379 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:42.379 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:42.379 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:42.379 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:42.379 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:42.379 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:42.379 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.379 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.637 nvme0n1 00:25:42.637 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.637 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:42.637 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:42.637 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.637 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.637 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.896 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:42.896 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:42.896 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.896 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.896 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.896 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:42.896 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:25:42.896 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:42.896 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:42.896 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:42.896 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:42.896 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2U5MDIwYTBhMjE3MDM2NmY5ZmIyMWZjMjQ5MjkzZDZjOTVjZWViZTEwNzBjNDAyYTVjOWJkMjJiODk0Yjc0ZnZ837U=: 00:25:42.896 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:42.896 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:42.896 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:42.896 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2U5MDIwYTBhMjE3MDM2NmY5ZmIyMWZjMjQ5MjkzZDZjOTVjZWViZTEwNzBjNDAyYTVjOWJkMjJiODk0Yjc0ZnZ837U=: 00:25:42.896 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:42.896 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:25:42.896 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:42.896 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:42.896 18:20:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:42.896 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:42.896 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:42.896 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:42.896 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.896 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.896 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:42.896 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:42.896 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:42.896 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:42.896 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:42.896 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:42.896 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:42.896 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:42.896 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:42.896 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:42.896 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:42.896 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:42.896 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:42.896 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:42.896 18:20:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.154 nvme0n1 00:25:43.154 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.154 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:43.154 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:43.154 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.154 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.154 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.154 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:43.154 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:43.154 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.154 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.154 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
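The rounds traced above repeat one fixed host-side sequence per key. A condensed sketch, reconstructed from the frame markers host/auth.sh@58-65 in this trace (not the verbatim script; rpc_cmd, the NQNs, and the address/port are taken straight from the log, and folding get_main_ns_ip into a command substitution is a simplification):

    # one connect_authenticate round on the host side
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a "$(get_main_ns_ip)" -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"
    # verify the authenticated connect actually produced a controller, then tear down
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0

A failed DH-HMAC-CHAP exchange would make the attach (or the controller-name check) fail, so each iteration both connects and verifies before detaching.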
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.154 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:43.154 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:43.154 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:25:43.154 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:43.154 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:43.154 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:43.154 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:43.154 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTg3N2NlMTZjZGFkZDc3ZmM4MDY5MGQwMDBjNWU1MzhGYBq2: 00:25:43.154 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmRlMjc4MDgxOTYxNDllMGJiNjM3OGRmYzQ5NTg2N2RmZWM4ZGUzMTBlNDgyM2ZkNTM5OTEyYjUzNmE3ZThmZRiuwEg=: 00:25:43.154 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:43.154 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:43.154 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTg3N2NlMTZjZGFkZDc3ZmM4MDY5MGQwMDBjNWU1MzhGYBq2: 00:25:43.154 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmRlMjc4MDgxOTYxNDllMGJiNjM3OGRmYzQ5NTg2N2RmZWM4ZGUzMTBlNDgyM2ZkNTM5OTEyYjUzNmE3ZThmZRiuwEg=: ]] 00:25:43.154 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmRlMjc4MDgxOTYxNDllMGJiNjM3OGRmYzQ5NTg2N2RmZWM4ZGUzMTBlNDgyM2ZkNTM5OTEyYjUzNmE3ZThmZRiuwEg=: 00:25:43.154 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:25:43.154 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:43.154 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:43.154 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:43.154 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:43.154 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:43.154 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:43.154 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.154 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.154 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.154 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:43.154 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:43.154 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:43.154 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:43.154 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:43.154 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:43.154 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:43.154 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:43.154 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:43.154 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:43.154 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:43.154 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:43.154 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.154 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.412 nvme0n1 00:25:43.412 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.412 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:43.412 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:43.412 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.412 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.670 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.670 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:43.670 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:43.670 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.670 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.670 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.670 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:43.670 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:25:43.670 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:43.670 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:43.670 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:43.670 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:43.670 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjAzM2Q1OWM0ZTViOWFlNWY4YmU3ZDQ5ZjAyZTAyYTc0M2RhYmRmMjgxNTZhNzkxg4dGkg==: 00:25:43.670 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWUyNzgyNWZjMGZkYjE2N2Y1MWIwODhhZjYzMzhhZTFiZDFiZTJiOWM4OTQ1NmZmSxM3tw==: 00:25:43.670 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:43.670 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:43.670 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NjAzM2Q1OWM0ZTViOWFlNWY4YmU3ZDQ5ZjAyZTAyYTc0M2RhYmRmMjgxNTZhNzkxg4dGkg==: 00:25:43.670 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWUyNzgyNWZjMGZkYjE2N2Y1MWIwODhhZjYzMzhhZTFiZDFiZTJiOWM4OTQ1NmZmSxM3tw==: ]] 00:25:43.670 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWUyNzgyNWZjMGZkYjE2N2Y1MWIwODhhZjYzMzhhZTFiZDFiZTJiOWM4OTQ1NmZmSxM3tw==: 00:25:43.670 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:25:43.670 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:43.670 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:43.670 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:43.670 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:43.670 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:43.670 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:43.670 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.670 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.670 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.670 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:43.670 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:43.670 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:43.670 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:43.670 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:43.670 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:43.670 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:43.670 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:43.670 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:43.670 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:43.670 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:43.670 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:43.670 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.670 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.928 nvme0n1 00:25:43.928 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.928 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:43.929 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:43.929 18:20:36 
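The ckey=(${ckeys[keyid]:+...}) assignment seen at host/auth.sh@58 uses bash's :+ alternate-value expansion: the bracketed words expand only when ckeys[keyid] is set and non-empty, so keys without a controller secret simply pass no --dhchap-ctrlr-key flag. A minimal sketch with hypothetical placeholder values:

    declare -a ckeys=([1]="DHHC-1:02:placeholder" [4]="")
    for keyid in 1 4; do
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid -> ${#ckey[@]} extra arg(s): ${ckey[*]}"
    done
    # keyid=1 -> 2 extra arg(s): --dhchap-ctrlr-key ckey1
    # keyid=4 -> 0 extra arg(s):

That matches the trace: key 4 is attached with --dhchap-key key4 alone, and its [[ -z '' ]] check at host/auth.sh@51 skips echoing a controller key on the target side as well.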
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.929 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.929 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.929 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:43.929 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:43.929 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:43.929 18:20:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.929 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:43.929 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:43.929 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:25:43.929 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:43.929 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:43.929 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:43.929 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:43.929 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWM4M2FiMGI4NzA1MTYyMTEwYTY0YTFlNWY5ZjFjM2JfH8UA: 00:25:43.929 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODA1NDM0MDQ5OTczMDg4Nzc1ZDAzZWNjZTY4NDgwZWbydunZ: 00:25:43.929 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:44.186 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:44.186 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWM4M2FiMGI4NzA1MTYyMTEwYTY0YTFlNWY5ZjFjM2JfH8UA: 00:25:44.186 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODA1NDM0MDQ5OTczMDg4Nzc1ZDAzZWNjZTY4NDgwZWbydunZ: ]] 00:25:44.186 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODA1NDM0MDQ5OTczMDg4Nzc1ZDAzZWNjZTY4NDgwZWbydunZ: 00:25:44.186 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:25:44.186 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:44.186 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:44.186 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:44.186 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:44.186 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:44.186 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:44.186 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.186 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.186 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.186 18:20:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:44.186 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:44.186 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:44.186 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:44.186 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:44.186 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:44.186 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:44.186 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:44.186 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:44.186 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:44.186 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:44.186 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:44.186 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.186 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.443 nvme0n1 00:25:44.443 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.443 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:44.443 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:44.443 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.443 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.443 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.443 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:44.443 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:44.443 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.443 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.443 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.443 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:44.443 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:25:44.443 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:44.443 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:44.443 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:44.443 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:44.443 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NWNlZWRlNTkxYjVjNDhkMTVlNzFhZWZmOGY2ODI5YWQzOTg2ZmRlN2YwNjM3ZTkzDCOlCg==: 00:25:44.443 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTBjNjE5YWY0NjIxZTVlM2JiZmI5OTI0MGIzYzllZGMNO4tj: 00:25:44.443 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:44.443 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:44.443 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWNlZWRlNTkxYjVjNDhkMTVlNzFhZWZmOGY2ODI5YWQzOTg2ZmRlN2YwNjM3ZTkzDCOlCg==: 00:25:44.443 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTBjNjE5YWY0NjIxZTVlM2JiZmI5OTI0MGIzYzllZGMNO4tj: ]] 00:25:44.443 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTBjNjE5YWY0NjIxZTVlM2JiZmI5OTI0MGIzYzllZGMNO4tj: 00:25:44.443 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:25:44.443 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:44.443 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:44.443 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:44.443 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:44.443 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:44.443 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:44.443 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.443 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.443 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:44.443 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:44.443 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:44.443 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:44.443 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:44.443 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:44.443 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:44.443 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:44.443 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:44.443 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:44.443 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:44.443 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:44.443 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:44.443 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:44.443 
18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.008 nvme0n1 00:25:45.008 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.008 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:45.008 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:45.008 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.008 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.008 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.008 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:45.008 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:45.008 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.008 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.008 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.008 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:45.008 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:25:45.008 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:45.008 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:45.008 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:45.008 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:45.008 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2U5MDIwYTBhMjE3MDM2NmY5ZmIyMWZjMjQ5MjkzZDZjOTVjZWViZTEwNzBjNDAyYTVjOWJkMjJiODk0Yjc0ZnZ837U=: 00:25:45.008 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:45.008 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:45.008 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:45.008 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2U5MDIwYTBhMjE3MDM2NmY5ZmIyMWZjMjQ5MjkzZDZjOTVjZWViZTEwNzBjNDAyYTVjOWJkMjJiODk0Yjc0ZnZ837U=: 00:25:45.008 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:45.008 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:25:45.008 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:45.008 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:45.008 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:45.008 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:45.008 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:45.008 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:45.008 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
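nvmet_auth_set_key (the echoes at host/auth.sh@48-51 above) programs the kernel nvmet target with the same material the host will present. The excerpt shows only the echoed values, not their destinations; writing them into the nvmet configfs host entry is an assumption in this sketch, as are the $hostnqn path and attribute names:

    # target-side counterpart; configfs paths are assumed, not shown in this trace
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local host=/sys/kernel/config/nvmet/hosts/$hostnqn   # assumed path
        echo "hmac(${digest})"  > "$host/dhchap_hash"
        echo "$dhgroup"         > "$host/dhchap_dhgroup"
        echo "${keys[keyid]}"   > "$host/dhchap_key"
        [[ -z ${ckeys[keyid]} ]] || echo "${ckeys[keyid]}" > "$host/dhchap_ctrl_key"
    }

The DHHC-1 secrets themselves carry their transform in the second field: in the NVMe in-band authentication key format, :00: marks a cleartext secret while :01:/:02:/:03: mark SHA-256/384/512-transformed keys, which is why several classes of key appear in this run.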
common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.008 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.008 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.008 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:45.008 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:45.008 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:45.008 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:45.008 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:45.008 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:45.008 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:45.008 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:45.008 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:45.008 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:45.008 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:45.008 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:45.008 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.008 18:20:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.265 nvme0n1 00:25:45.265 18:20:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.265 18:20:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:45.265 18:20:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:45.265 18:20:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.265 18:20:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.265 18:20:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.522 18:20:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:45.522 18:20:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:45.522 18:20:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.522 18:20:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.522 18:20:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.522 18:20:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:45.522 18:20:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:45.522 18:20:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:25:45.522 18:20:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:45.522 18:20:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:45.523 18:20:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:45.523 18:20:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:45.523 18:20:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTg3N2NlMTZjZGFkZDc3ZmM4MDY5MGQwMDBjNWU1MzhGYBq2: 00:25:45.523 18:20:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmRlMjc4MDgxOTYxNDllMGJiNjM3OGRmYzQ5NTg2N2RmZWM4ZGUzMTBlNDgyM2ZkNTM5OTEyYjUzNmE3ZThmZRiuwEg=: 00:25:45.523 18:20:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:45.523 18:20:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:45.523 18:20:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTg3N2NlMTZjZGFkZDc3ZmM4MDY5MGQwMDBjNWU1MzhGYBq2: 00:25:45.523 18:20:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmRlMjc4MDgxOTYxNDllMGJiNjM3OGRmYzQ5NTg2N2RmZWM4ZGUzMTBlNDgyM2ZkNTM5OTEyYjUzNmE3ZThmZRiuwEg=: ]] 00:25:45.523 18:20:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmRlMjc4MDgxOTYxNDllMGJiNjM3OGRmYzQ5NTg2N2RmZWM4ZGUzMTBlNDgyM2ZkNTM5OTEyYjUzNmE3ZThmZRiuwEg=: 00:25:45.523 18:20:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:25:45.523 18:20:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:45.523 18:20:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:45.523 18:20:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:45.523 18:20:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:45.523 18:20:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:45.523 18:20:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:45.523 18:20:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.523 18:20:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.523 18:20:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:45.523 18:20:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:45.523 18:20:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:45.523 18:20:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:45.523 18:20:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:45.523 18:20:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:45.523 18:20:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:45.523 18:20:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:45.523 18:20:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:45.523 18:20:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:45.523 18:20:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:45.523 18:20:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:45.523 18:20:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:45.523 18:20:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:45.523 18:20:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.088 nvme0n1 00:25:46.088 18:20:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.088 18:20:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:46.088 18:20:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:46.088 18:20:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.088 18:20:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.088 18:20:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.088 18:20:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:46.088 18:20:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:46.088 18:20:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.088 18:20:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.088 18:20:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.088 18:20:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:46.088 18:20:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:25:46.088 18:20:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:46.088 18:20:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:46.088 18:20:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:46.088 18:20:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:46.088 18:20:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjAzM2Q1OWM0ZTViOWFlNWY4YmU3ZDQ5ZjAyZTAyYTc0M2RhYmRmMjgxNTZhNzkxg4dGkg==: 00:25:46.088 18:20:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWUyNzgyNWZjMGZkYjE2N2Y1MWIwODhhZjYzMzhhZTFiZDFiZTJiOWM4OTQ1NmZmSxM3tw==: 00:25:46.088 18:20:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:46.088 18:20:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:46.088 18:20:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjAzM2Q1OWM0ZTViOWFlNWY4YmU3ZDQ5ZjAyZTAyYTc0M2RhYmRmMjgxNTZhNzkxg4dGkg==: 00:25:46.088 18:20:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWUyNzgyNWZjMGZkYjE2N2Y1MWIwODhhZjYzMzhhZTFiZDFiZTJiOWM4OTQ1NmZmSxM3tw==: ]] 00:25:46.088 18:20:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWUyNzgyNWZjMGZkYjE2N2Y1MWIwODhhZjYzMzhhZTFiZDFiZTJiOWM4OTQ1NmZmSxM3tw==: 00:25:46.088 18:20:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:25:46.088 18:20:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
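get_main_ns_ip (nvmf/common.sh@741-755, traced in full just above) only selects which address the initiator should dial: it maps the transport to a candidate variable name and echoes the resolved address. A sketch consistent with the trace; the TEST_TRANSPORT name and the indirect expansion between frames @748 and @750 are inferred, since the excerpt shows ip=NVMF_INITIATOR_IP followed by the literal 10.0.0.1:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}
        ip=${!ip}                      # NVMF_INITIATOR_IP -> 10.0.0.1 (inferred step)
        [[ -z $ip ]] && return 1
        echo "$ip"
    }

With the tcp transport this resolves to the initiator-side 10.0.0.1 used by every bdev_nvme_attach_controller call in this run.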
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:46.088 18:20:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:46.088 18:20:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:46.088 18:20:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:46.088 18:20:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:46.088 18:20:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:46.088 18:20:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.088 18:20:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.088 18:20:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.088 18:20:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:46.088 18:20:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:46.088 18:20:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:46.088 18:20:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:46.088 18:20:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:46.088 18:20:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:46.088 18:20:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:46.088 18:20:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:46.088 18:20:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:46.088 18:20:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:46.088 18:20:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:46.088 18:20:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:46.088 18:20:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.088 18:20:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.684 nvme0n1 00:25:46.684 18:20:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.684 18:20:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:46.684 18:20:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:46.684 18:20:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.684 18:20:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.684 18:20:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.684 18:20:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:46.684 18:20:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:46.684 18:20:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:25:46.684 18:20:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.684 18:20:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.684 18:20:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:46.684 18:20:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:25:46.684 18:20:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:46.684 18:20:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:46.684 18:20:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:46.684 18:20:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:46.684 18:20:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWM4M2FiMGI4NzA1MTYyMTEwYTY0YTFlNWY5ZjFjM2JfH8UA: 00:25:46.684 18:20:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODA1NDM0MDQ5OTczMDg4Nzc1ZDAzZWNjZTY4NDgwZWbydunZ: 00:25:46.684 18:20:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:46.684 18:20:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:46.684 18:20:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWM4M2FiMGI4NzA1MTYyMTEwYTY0YTFlNWY5ZjFjM2JfH8UA: 00:25:46.684 18:20:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODA1NDM0MDQ5OTczMDg4Nzc1ZDAzZWNjZTY4NDgwZWbydunZ: ]] 00:25:46.684 18:20:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODA1NDM0MDQ5OTczMDg4Nzc1ZDAzZWNjZTY4NDgwZWbydunZ: 00:25:46.684 18:20:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:25:46.684 18:20:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:46.684 18:20:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:46.684 18:20:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:46.684 18:20:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:46.684 18:20:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:46.684 18:20:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:46.684 18:20:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.684 18:20:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.684 18:20:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:46.684 18:20:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:46.684 18:20:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:46.684 18:20:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:46.684 18:20:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:46.684 18:20:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:46.684 18:20:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:46.684 
18:20:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:46.684 18:20:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:46.684 18:20:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:46.684 18:20:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:46.684 18:20:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:46.684 18:20:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:46.684 18:20:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:46.684 18:20:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.249 nvme0n1 00:25:47.249 18:20:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.249 18:20:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:47.249 18:20:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:47.249 18:20:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.249 18:20:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.249 18:20:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.249 18:20:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:47.249 18:20:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:47.249 18:20:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.249 18:20:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.249 18:20:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.249 18:20:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:47.249 18:20:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:25:47.249 18:20:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:47.249 18:20:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:47.249 18:20:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:47.249 18:20:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:47.249 18:20:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWNlZWRlNTkxYjVjNDhkMTVlNzFhZWZmOGY2ODI5YWQzOTg2ZmRlN2YwNjM3ZTkzDCOlCg==: 00:25:47.249 18:20:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTBjNjE5YWY0NjIxZTVlM2JiZmI5OTI0MGIzYzllZGMNO4tj: 00:25:47.249 18:20:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:47.249 18:20:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:47.249 18:20:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWNlZWRlNTkxYjVjNDhkMTVlNzFhZWZmOGY2ODI5YWQzOTg2ZmRlN2YwNjM3ZTkzDCOlCg==: 00:25:47.249 18:20:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:OTBjNjE5YWY0NjIxZTVlM2JiZmI5OTI0MGIzYzllZGMNO4tj: ]] 00:25:47.249 18:20:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTBjNjE5YWY0NjIxZTVlM2JiZmI5OTI0MGIzYzllZGMNO4tj: 00:25:47.249 18:20:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:25:47.249 18:20:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:47.249 18:20:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:47.249 18:20:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:47.249 18:20:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:47.249 18:20:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:47.250 18:20:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:47.250 18:20:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.250 18:20:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.250 18:20:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.250 18:20:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:47.250 18:20:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:47.250 18:20:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:47.250 18:20:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:47.250 18:20:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:47.250 18:20:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:47.250 18:20:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:47.250 18:20:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:47.250 18:20:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:47.250 18:20:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:47.250 18:20:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:47.250 18:20:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:47.250 18:20:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.250 18:20:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.878 nvme0n1 00:25:47.878 18:20:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.879 18:20:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:47.879 18:20:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:47.879 18:20:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.879 18:20:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.879 18:20:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.879 18:20:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:47.879 18:20:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:47.879 18:20:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.879 18:20:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.137 18:20:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.137 18:20:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:48.137 18:20:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:25:48.137 18:20:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:48.137 18:20:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:48.137 18:20:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:48.137 18:20:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:48.137 18:20:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2U5MDIwYTBhMjE3MDM2NmY5ZmIyMWZjMjQ5MjkzZDZjOTVjZWViZTEwNzBjNDAyYTVjOWJkMjJiODk0Yjc0ZnZ837U=: 00:25:48.137 18:20:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:48.137 18:20:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:48.137 18:20:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:48.137 18:20:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2U5MDIwYTBhMjE3MDM2NmY5ZmIyMWZjMjQ5MjkzZDZjOTVjZWViZTEwNzBjNDAyYTVjOWJkMjJiODk0Yjc0ZnZ837U=: 00:25:48.137 18:20:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:48.137 18:20:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:25:48.137 18:20:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:48.137 18:20:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:48.137 18:20:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:48.137 18:20:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:48.137 18:20:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:48.137 18:20:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:48.137 18:20:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.137 18:20:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.137 18:20:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.137 18:20:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:48.137 18:20:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:48.137 18:20:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:48.137 18:20:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:48.137 18:20:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:48.137 18:20:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:48.137 18:20:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:48.137 18:20:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:48.137 18:20:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:48.137 18:20:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:48.137 18:20:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:48.137 18:20:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:48.137 18:20:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.137 18:20:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.703 nvme0n1 00:25:48.703 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.703 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:48.703 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:48.703 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.703 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.703 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.703 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:48.703 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:48.703 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.703 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.703 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.703 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:48.703 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:48.703 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:48.703 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:25:48.703 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:48.703 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:48.703 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:48.703 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:48.703 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTg3N2NlMTZjZGFkZDc3ZmM4MDY5MGQwMDBjNWU1MzhGYBq2: 00:25:48.703 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MmRlMjc4MDgxOTYxNDllMGJiNjM3OGRmYzQ5NTg2N2RmZWM4ZGUzMTBlNDgyM2ZkNTM5OTEyYjUzNmE3ZThmZRiuwEg=: 00:25:48.703 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:48.703 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:48.703 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTg3N2NlMTZjZGFkZDc3ZmM4MDY5MGQwMDBjNWU1MzhGYBq2: 00:25:48.703 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmRlMjc4MDgxOTYxNDllMGJiNjM3OGRmYzQ5NTg2N2RmZWM4ZGUzMTBlNDgyM2ZkNTM5OTEyYjUzNmE3ZThmZRiuwEg=: ]] 00:25:48.703 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmRlMjc4MDgxOTYxNDllMGJiNjM3OGRmYzQ5NTg2N2RmZWM4ZGUzMTBlNDgyM2ZkNTM5OTEyYjUzNmE3ZThmZRiuwEg=: 00:25:48.703 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:25:48.703 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:48.703 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:48.703 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:48.703 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:48.703 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:48.703 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:48.703 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.703 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.703 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.703 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:48.703 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:48.703 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:48.703 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:48.703 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:48.703 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:48.703 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:48.703 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:48.703 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:48.703 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:48.703 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:48.703 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:48.703 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.703 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:48.703 nvme0n1 00:25:48.703 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.703 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:48.703 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.703 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:48.703 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.703 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.703 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:48.703 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:48.703 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.703 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.962 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.962 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:48.962 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:25:48.962 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:48.962 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:48.962 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:48.962 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:48.962 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjAzM2Q1OWM0ZTViOWFlNWY4YmU3ZDQ5ZjAyZTAyYTc0M2RhYmRmMjgxNTZhNzkxg4dGkg==: 00:25:48.962 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWUyNzgyNWZjMGZkYjE2N2Y1MWIwODhhZjYzMzhhZTFiZDFiZTJiOWM4OTQ1NmZmSxM3tw==: 00:25:48.962 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:48.962 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:48.962 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjAzM2Q1OWM0ZTViOWFlNWY4YmU3ZDQ5ZjAyZTAyYTc0M2RhYmRmMjgxNTZhNzkxg4dGkg==: 00:25:48.962 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWUyNzgyNWZjMGZkYjE2N2Y1MWIwODhhZjYzMzhhZTFiZDFiZTJiOWM4OTQ1NmZmSxM3tw==: ]] 00:25:48.962 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWUyNzgyNWZjMGZkYjE2N2Y1MWIwODhhZjYzMzhhZTFiZDFiZTJiOWM4OTQ1NmZmSxM3tw==: 00:25:48.962 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:25:48.962 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:48.962 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:48.962 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:48.962 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:48.962 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:25:48.962 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:48.963 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.963 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.963 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.963 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:48.963 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:48.963 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:48.963 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:48.963 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:48.963 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:48.963 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:48.963 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:48.963 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:48.963 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:48.963 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:48.963 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:48.963 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.963 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.963 nvme0n1 00:25:48.963 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.963 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:48.963 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:48.963 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.963 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.963 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.963 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:48.963 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:48.963 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.963 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.963 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.963 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:48.963 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:25:48.963 
18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:48.963 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:48.963 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:48.963 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:48.963 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWM4M2FiMGI4NzA1MTYyMTEwYTY0YTFlNWY5ZjFjM2JfH8UA: 00:25:48.963 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODA1NDM0MDQ5OTczMDg4Nzc1ZDAzZWNjZTY4NDgwZWbydunZ: 00:25:48.963 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:48.963 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:48.963 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWM4M2FiMGI4NzA1MTYyMTEwYTY0YTFlNWY5ZjFjM2JfH8UA: 00:25:48.963 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODA1NDM0MDQ5OTczMDg4Nzc1ZDAzZWNjZTY4NDgwZWbydunZ: ]] 00:25:48.963 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODA1NDM0MDQ5OTczMDg4Nzc1ZDAzZWNjZTY4NDgwZWbydunZ: 00:25:48.963 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:25:48.963 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:48.963 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:48.963 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:48.963 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:48.963 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:48.963 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:48.963 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.963 18:20:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.963 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:48.963 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:48.963 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:48.963 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:48.963 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:48.963 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:48.963 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:48.963 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:48.963 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:48.963 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:48.963 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:48.963 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:48.963 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:48.963 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:48.963 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.222 nvme0n1 00:25:49.222 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.222 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:49.222 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:49.222 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.222 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.222 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.222 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:49.222 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:49.222 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.222 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.222 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.222 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:49.222 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:25:49.222 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:49.222 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:49.222 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:49.222 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:49.222 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWNlZWRlNTkxYjVjNDhkMTVlNzFhZWZmOGY2ODI5YWQzOTg2ZmRlN2YwNjM3ZTkzDCOlCg==: 00:25:49.222 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTBjNjE5YWY0NjIxZTVlM2JiZmI5OTI0MGIzYzllZGMNO4tj: 00:25:49.222 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:49.222 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:49.222 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWNlZWRlNTkxYjVjNDhkMTVlNzFhZWZmOGY2ODI5YWQzOTg2ZmRlN2YwNjM3ZTkzDCOlCg==: 00:25:49.222 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTBjNjE5YWY0NjIxZTVlM2JiZmI5OTI0MGIzYzllZGMNO4tj: ]] 00:25:49.222 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTBjNjE5YWY0NjIxZTVlM2JiZmI5OTI0MGIzYzllZGMNO4tj: 00:25:49.222 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:25:49.222 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:49.222 
18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:49.222 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:49.222 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:49.222 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:49.222 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:49.222 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.222 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.222 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.222 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:49.222 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:49.222 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:49.222 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:49.222 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:49.222 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:49.222 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:49.222 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:49.222 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:49.222 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:49.222 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:49.222 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:49.222 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.222 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.481 nvme0n1 00:25:49.481 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.481 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:49.481 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:49.481 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.481 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.481 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.481 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:49.481 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:49.481 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.481 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:49.481 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.481 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:49.481 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:25:49.481 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:49.481 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:49.481 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:49.481 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:49.481 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2U5MDIwYTBhMjE3MDM2NmY5ZmIyMWZjMjQ5MjkzZDZjOTVjZWViZTEwNzBjNDAyYTVjOWJkMjJiODk0Yjc0ZnZ837U=: 00:25:49.481 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:49.481 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:49.481 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:49.481 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2U5MDIwYTBhMjE3MDM2NmY5ZmIyMWZjMjQ5MjkzZDZjOTVjZWViZTEwNzBjNDAyYTVjOWJkMjJiODk0Yjc0ZnZ837U=: 00:25:49.481 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:49.481 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:25:49.481 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:49.481 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:49.481 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:49.481 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:49.481 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:49.481 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:49.481 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.481 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.481 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.481 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:49.481 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:49.481 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:49.481 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:49.481 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:49.481 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:49.481 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:49.481 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:49.481 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:49.481 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:49.481 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:49.481 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:49.481 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.481 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.740 nvme0n1 00:25:49.740 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.740 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:49.740 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:49.740 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.741 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.741 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.741 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:49.741 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:49.741 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.741 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.741 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.741 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:49.741 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:49.741 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:25:49.741 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:49.741 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:49.741 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:49.741 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:49.741 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTg3N2NlMTZjZGFkZDc3ZmM4MDY5MGQwMDBjNWU1MzhGYBq2: 00:25:49.741 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmRlMjc4MDgxOTYxNDllMGJiNjM3OGRmYzQ5NTg2N2RmZWM4ZGUzMTBlNDgyM2ZkNTM5OTEyYjUzNmE3ZThmZRiuwEg=: 00:25:49.741 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:49.741 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:49.741 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTg3N2NlMTZjZGFkZDc3ZmM4MDY5MGQwMDBjNWU1MzhGYBq2: 00:25:49.741 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmRlMjc4MDgxOTYxNDllMGJiNjM3OGRmYzQ5NTg2N2RmZWM4ZGUzMTBlNDgyM2ZkNTM5OTEyYjUzNmE3ZThmZRiuwEg=: ]] 00:25:49.741 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MmRlMjc4MDgxOTYxNDllMGJiNjM3OGRmYzQ5NTg2N2RmZWM4ZGUzMTBlNDgyM2ZkNTM5OTEyYjUzNmE3ZThmZRiuwEg=: 00:25:49.741 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:25:49.741 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:49.741 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:49.741 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:49.741 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:49.741 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:49.741 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:49.741 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.741 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.741 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.741 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:49.741 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:49.741 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:49.741 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:49.741 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:49.741 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:49.741 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:49.741 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:49.741 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:49.741 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:49.741 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:49.741 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:49.741 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.741 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.741 nvme0n1 00:25:49.741 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:49.741 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:49.741 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:49.741 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:49.741 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.000 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.000 
18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:50.000 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:50.000 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.000 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.000 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.000 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:50.000 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:25:50.000 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:50.000 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:50.000 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:50.000 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:50.000 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjAzM2Q1OWM0ZTViOWFlNWY4YmU3ZDQ5ZjAyZTAyYTc0M2RhYmRmMjgxNTZhNzkxg4dGkg==: 00:25:50.000 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWUyNzgyNWZjMGZkYjE2N2Y1MWIwODhhZjYzMzhhZTFiZDFiZTJiOWM4OTQ1NmZmSxM3tw==: 00:25:50.001 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:50.001 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:50.001 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjAzM2Q1OWM0ZTViOWFlNWY4YmU3ZDQ5ZjAyZTAyYTc0M2RhYmRmMjgxNTZhNzkxg4dGkg==: 00:25:50.001 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWUyNzgyNWZjMGZkYjE2N2Y1MWIwODhhZjYzMzhhZTFiZDFiZTJiOWM4OTQ1NmZmSxM3tw==: ]] 00:25:50.001 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWUyNzgyNWZjMGZkYjE2N2Y1MWIwODhhZjYzMzhhZTFiZDFiZTJiOWM4OTQ1NmZmSxM3tw==: 00:25:50.001 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:25:50.001 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:50.001 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:50.001 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:50.001 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:50.001 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:50.001 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:50.001 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.001 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.001 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.001 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:50.001 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:50.001 18:20:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:50.001 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:50.001 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:50.001 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:50.001 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:50.001 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:50.001 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:50.001 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:50.001 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:50.001 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:50.001 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.001 18:20:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.001 nvme0n1 00:25:50.001 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.001 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:50.001 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:50.001 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.001 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.001 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.260 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:50.260 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:50.260 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.260 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.260 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.260 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:50.260 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:25:50.260 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:50.260 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:50.260 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:50.260 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:50.260 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWM4M2FiMGI4NzA1MTYyMTEwYTY0YTFlNWY5ZjFjM2JfH8UA: 00:25:50.260 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODA1NDM0MDQ5OTczMDg4Nzc1ZDAzZWNjZTY4NDgwZWbydunZ: 00:25:50.260 18:20:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:50.260 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:50.260 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWM4M2FiMGI4NzA1MTYyMTEwYTY0YTFlNWY5ZjFjM2JfH8UA: 00:25:50.260 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODA1NDM0MDQ5OTczMDg4Nzc1ZDAzZWNjZTY4NDgwZWbydunZ: ]] 00:25:50.260 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODA1NDM0MDQ5OTczMDg4Nzc1ZDAzZWNjZTY4NDgwZWbydunZ: 00:25:50.260 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:25:50.260 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:50.260 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:50.260 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:50.260 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:50.260 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:50.260 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:50.260 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.260 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.260 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.260 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:50.260 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:50.260 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:50.260 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:50.260 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:50.260 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:50.260 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:50.260 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:50.260 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:50.260 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:50.260 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:50.260 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:50.260 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.260 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.260 nvme0n1 00:25:50.260 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.260 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:50.260 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.260 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:50.260 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.260 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.518 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:50.518 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:50.518 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.518 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.518 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.518 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:50.518 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:25:50.518 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:50.518 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:50.518 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:50.518 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:50.518 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWNlZWRlNTkxYjVjNDhkMTVlNzFhZWZmOGY2ODI5YWQzOTg2ZmRlN2YwNjM3ZTkzDCOlCg==: 00:25:50.518 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTBjNjE5YWY0NjIxZTVlM2JiZmI5OTI0MGIzYzllZGMNO4tj: 00:25:50.518 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:50.518 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:50.518 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWNlZWRlNTkxYjVjNDhkMTVlNzFhZWZmOGY2ODI5YWQzOTg2ZmRlN2YwNjM3ZTkzDCOlCg==: 00:25:50.518 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTBjNjE5YWY0NjIxZTVlM2JiZmI5OTI0MGIzYzllZGMNO4tj: ]] 00:25:50.518 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTBjNjE5YWY0NjIxZTVlM2JiZmI5OTI0MGIzYzllZGMNO4tj: 00:25:50.518 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:25:50.518 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:50.518 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:50.518 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:50.518 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:50.518 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:50.518 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:50.518 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.518 18:20:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.518 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.518 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:50.518 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:50.518 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:50.518 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:50.518 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:50.518 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:50.518 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:50.518 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:50.518 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:50.518 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:50.518 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:50.518 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:50.518 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.518 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.518 nvme0n1 00:25:50.518 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.518 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:50.518 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:50.518 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.518 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.518 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.776 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:50.776 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:50.776 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.777 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.777 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.777 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:50.777 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:25:50.777 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:50.777 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:50.777 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:50.777 
18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:50.777 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2U5MDIwYTBhMjE3MDM2NmY5ZmIyMWZjMjQ5MjkzZDZjOTVjZWViZTEwNzBjNDAyYTVjOWJkMjJiODk0Yjc0ZnZ837U=: 00:25:50.777 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:50.777 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:50.777 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:50.777 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2U5MDIwYTBhMjE3MDM2NmY5ZmIyMWZjMjQ5MjkzZDZjOTVjZWViZTEwNzBjNDAyYTVjOWJkMjJiODk0Yjc0ZnZ837U=: 00:25:50.777 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:50.777 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:25:50.777 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:50.777 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:50.777 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:50.777 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:50.777 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:50.777 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:50.777 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.777 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.777 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.777 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:50.777 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:50.777 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:50.777 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:50.777 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:50.777 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:50.777 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:50.777 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:50.777 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:50.777 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:50.777 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:50.777 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:50.777 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.777 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
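The blocks above all repeat one pattern per (digest, dhgroup, keyid) combination: host/auth.sh@103 (nvmet_auth_set_key) programs the DH-HMAC-CHAP secret on the kernel nvmet target, and host/auth.sh@104 (connect_authenticate) restricts the SPDK host to the same digest and DH group, attaches the controller with --dhchap-key keyN (plus --dhchap-ctrlr-key ckeyN whenever a controller key is defined for that keyid), verifies that nvme0 shows up in bdev_nvme_get_controllers, and detaches it. A minimal sketch of that loop, reconstructed from the xtrace markers: the RPC names and flags are copied from the trace, while hostnqn and the configfs attribute paths are assumptions (redirection targets are not visible in xtrace).

# Reconstructed sketch, not the verbatim test script. The digests, dhgroups,
# keys, ckeys arrays and hostnqn come from the test harness.
for digest in "${digests[@]}"; do            # e.g. sha256 sha384 sha512
    for dhgroup in "${dhgroups[@]}"; do      # e.g. ffdhe2048 .. ffdhe8192
        for keyid in "${!keys[@]}"; do       # 0..4 in this run
            # Target side (assumed configfs layout, per the Linux nvmet auth
            # interface): install the secret the target will expect.
            host_dir=/sys/kernel/config/nvmet/hosts/$hostnqn
            echo "hmac($digest)" > "$host_dir/dhchap_hash"
            echo "$dhgroup" > "$host_dir/dhchap_dhgroup"
            echo "${keys[keyid]}" > "$host_dir/dhchap_key"
            [[ -n ${ckeys[keyid]} ]] && echo "${ckeys[keyid]}" > "$host_dir/dhchap_ctrl_key"

            # Host side (RPCs exactly as traced): allow only the digest and
            # dhgroup under test, authenticate with keyN/ckeyN, verify nvme0.
            rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
            rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
                -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
                --dhchap-key "key$keyid" ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"}
            [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
            rpc_cmd bdev_nvme_detach_controller nvme0
        done
    done
done

The DHHC-1:<t>:<base64>: strings are NVMe in-band authentication secrets: t encodes the secret's transformation hash (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512), and the base64 payload carries the key material followed by a CRC-32, which is why the keyid 3 and 4 secrets here are longer than the keyid 0-2 ones.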
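The interleaved nvmf/common.sh@741-755 lines are get_main_ns_ip resolving which address connect_authenticate should dial: it maps the transport to the name of an environment variable (NVMF_FIRST_TARGET_IP for rdma, NVMF_INITIATOR_IP for tcp) and echoes that variable's value, 10.0.0.1 in this run. A plausible reconstruction follows, assuming TEST_TRANSPORT as the name of the variable that expands to tcp here (xtrace shows only its value, and the two @747 tests are merged into one conditional):

get_main_ns_ip() {
    local ip
    local -A ip_candidates=()

    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP

    # Bail out if the transport is unset or has no candidate mapping.
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1

    ip=${ip_candidates[$TEST_TRANSPORT]} # name of the env var, e.g. NVMF_INITIATOR_IP
    [[ -z ${!ip} ]] && return 1          # ${!ip} is bash indirection: 10.0.0.1 here
    echo "${!ip}"
}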
00:25:50.777 nvme0n1 00:25:50.777 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.777 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:50.777 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:50.777 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.777 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.777 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:50.777 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:50.777 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:50.777 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:50.777 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.036 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:51.036 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:51.036 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:51.036 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:25:51.036 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:51.036 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:51.036 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:51.036 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:51.036 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTg3N2NlMTZjZGFkZDc3ZmM4MDY5MGQwMDBjNWU1MzhGYBq2: 00:25:51.036 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmRlMjc4MDgxOTYxNDllMGJiNjM3OGRmYzQ5NTg2N2RmZWM4ZGUzMTBlNDgyM2ZkNTM5OTEyYjUzNmE3ZThmZRiuwEg=: 00:25:51.036 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:51.036 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:51.036 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTg3N2NlMTZjZGFkZDc3ZmM4MDY5MGQwMDBjNWU1MzhGYBq2: 00:25:51.036 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmRlMjc4MDgxOTYxNDllMGJiNjM3OGRmYzQ5NTg2N2RmZWM4ZGUzMTBlNDgyM2ZkNTM5OTEyYjUzNmE3ZThmZRiuwEg=: ]] 00:25:51.036 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmRlMjc4MDgxOTYxNDllMGJiNjM3OGRmYzQ5NTg2N2RmZWM4ZGUzMTBlNDgyM2ZkNTM5OTEyYjUzNmE3ZThmZRiuwEg=: 00:25:51.036 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:25:51.036 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:51.036 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:51.036 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:51.036 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:51.036 18:20:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:51.036 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:51.036 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.036 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.036 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:51.036 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:51.036 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:51.036 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:51.036 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:51.036 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:51.036 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:51.036 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:51.036 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:51.036 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:51.036 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:51.036 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:51.036 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:51.036 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.036 18:20:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.295 nvme0n1 00:25:51.295 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:51.295 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:51.295 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:51.295 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.295 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.295 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:51.295 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:51.295 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:51.295 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.295 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.295 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:51.295 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:51.295 18:20:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:25:51.295 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:51.295 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:51.295 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:51.295 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:51.295 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjAzM2Q1OWM0ZTViOWFlNWY4YmU3ZDQ5ZjAyZTAyYTc0M2RhYmRmMjgxNTZhNzkxg4dGkg==: 00:25:51.295 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWUyNzgyNWZjMGZkYjE2N2Y1MWIwODhhZjYzMzhhZTFiZDFiZTJiOWM4OTQ1NmZmSxM3tw==: 00:25:51.295 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:51.295 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:51.295 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjAzM2Q1OWM0ZTViOWFlNWY4YmU3ZDQ5ZjAyZTAyYTc0M2RhYmRmMjgxNTZhNzkxg4dGkg==: 00:25:51.295 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWUyNzgyNWZjMGZkYjE2N2Y1MWIwODhhZjYzMzhhZTFiZDFiZTJiOWM4OTQ1NmZmSxM3tw==: ]] 00:25:51.295 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWUyNzgyNWZjMGZkYjE2N2Y1MWIwODhhZjYzMzhhZTFiZDFiZTJiOWM4OTQ1NmZmSxM3tw==: 00:25:51.295 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:25:51.295 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:51.295 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:51.295 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:51.295 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:51.295 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:51.295 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:51.295 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.295 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.295 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:51.295 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:51.295 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:51.295 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:51.295 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:51.295 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:51.295 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:51.295 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:51.295 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:51.295 18:20:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:51.295 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:51.295 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:51.295 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:51.295 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.295 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.554 nvme0n1 00:25:51.554 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:51.554 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:51.554 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:51.554 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.554 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.554 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:51.554 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:51.554 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:51.554 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.554 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.554 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:51.554 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:51.554 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:25:51.554 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:51.554 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:51.554 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:51.554 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:51.554 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWM4M2FiMGI4NzA1MTYyMTEwYTY0YTFlNWY5ZjFjM2JfH8UA: 00:25:51.554 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODA1NDM0MDQ5OTczMDg4Nzc1ZDAzZWNjZTY4NDgwZWbydunZ: 00:25:51.554 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:51.554 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:51.554 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWM4M2FiMGI4NzA1MTYyMTEwYTY0YTFlNWY5ZjFjM2JfH8UA: 00:25:51.554 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODA1NDM0MDQ5OTczMDg4Nzc1ZDAzZWNjZTY4NDgwZWbydunZ: ]] 00:25:51.554 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODA1NDM0MDQ5OTczMDg4Nzc1ZDAzZWNjZTY4NDgwZWbydunZ: 00:25:51.554 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:25:51.554 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:51.554 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:51.554 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:51.554 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:51.554 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:51.554 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:51.554 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.554 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.554 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:51.554 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:51.554 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:51.554 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:51.554 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:51.554 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:51.554 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:51.554 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:51.554 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:51.554 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:51.554 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:51.554 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:51.554 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:51.554 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.554 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.813 nvme0n1 00:25:51.813 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:51.814 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:51.814 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:51.814 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.814 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.814 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:51.814 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:51.814 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:25:51.814 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.814 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.814 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:51.814 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:51.814 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:25:51.814 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:51.814 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:51.814 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:51.814 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:51.814 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWNlZWRlNTkxYjVjNDhkMTVlNzFhZWZmOGY2ODI5YWQzOTg2ZmRlN2YwNjM3ZTkzDCOlCg==: 00:25:51.814 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTBjNjE5YWY0NjIxZTVlM2JiZmI5OTI0MGIzYzllZGMNO4tj: 00:25:51.814 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:51.814 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:51.814 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWNlZWRlNTkxYjVjNDhkMTVlNzFhZWZmOGY2ODI5YWQzOTg2ZmRlN2YwNjM3ZTkzDCOlCg==: 00:25:51.814 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTBjNjE5YWY0NjIxZTVlM2JiZmI5OTI0MGIzYzllZGMNO4tj: ]] 00:25:51.814 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTBjNjE5YWY0NjIxZTVlM2JiZmI5OTI0MGIzYzllZGMNO4tj: 00:25:51.814 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:25:51.814 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:51.814 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:51.814 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:51.814 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:51.814 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:51.814 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:51.814 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.814 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.814 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:51.814 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:51.814 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:51.814 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:51.814 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:51.814 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:51.814 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:51.814 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:51.814 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:51.814 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:51.814 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:51.814 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:51.814 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:51.814 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:51.814 18:20:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.073 nvme0n1 00:25:52.073 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.073 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:52.073 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:52.073 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.073 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.073 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.073 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:52.073 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:52.073 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.073 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.073 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.073 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:52.073 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:25:52.073 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:52.073 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:52.073 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:52.073 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:52.073 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2U5MDIwYTBhMjE3MDM2NmY5ZmIyMWZjMjQ5MjkzZDZjOTVjZWViZTEwNzBjNDAyYTVjOWJkMjJiODk0Yjc0ZnZ837U=: 00:25:52.073 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:52.073 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:52.073 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:52.073 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:M2U5MDIwYTBhMjE3MDM2NmY5ZmIyMWZjMjQ5MjkzZDZjOTVjZWViZTEwNzBjNDAyYTVjOWJkMjJiODk0Yjc0ZnZ837U=: 00:25:52.073 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:52.073 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:25:52.073 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:52.073 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:52.073 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:52.073 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:52.073 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:52.073 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:52.073 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.073 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.073 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.073 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:52.073 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:52.073 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:52.073 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:52.073 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:52.073 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:52.073 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:52.073 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:52.073 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:52.073 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:52.073 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:52.073 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:52.073 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.073 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.332 nvme0n1 00:25:52.332 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.332 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:52.332 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:52.332 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.332 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.332 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.590 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:52.590 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:52.590 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.590 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.590 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.590 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:52.590 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:52.590 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:25:52.590 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:52.590 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:52.590 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:52.590 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:52.590 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTg3N2NlMTZjZGFkZDc3ZmM4MDY5MGQwMDBjNWU1MzhGYBq2: 00:25:52.590 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmRlMjc4MDgxOTYxNDllMGJiNjM3OGRmYzQ5NTg2N2RmZWM4ZGUzMTBlNDgyM2ZkNTM5OTEyYjUzNmE3ZThmZRiuwEg=: 00:25:52.590 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:52.590 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:52.590 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTg3N2NlMTZjZGFkZDc3ZmM4MDY5MGQwMDBjNWU1MzhGYBq2: 00:25:52.590 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmRlMjc4MDgxOTYxNDllMGJiNjM3OGRmYzQ5NTg2N2RmZWM4ZGUzMTBlNDgyM2ZkNTM5OTEyYjUzNmE3ZThmZRiuwEg=: ]] 00:25:52.591 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmRlMjc4MDgxOTYxNDllMGJiNjM3OGRmYzQ5NTg2N2RmZWM4ZGUzMTBlNDgyM2ZkNTM5OTEyYjUzNmE3ZThmZRiuwEg=: 00:25:52.591 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:25:52.591 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:52.591 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:52.591 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:52.591 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:52.591 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:52.591 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:52.591 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.591 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.591 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.591 18:20:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:52.591 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:52.591 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:52.591 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:52.591 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:52.591 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:52.591 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:52.591 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:52.591 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:52.591 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:52.591 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:52.591 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:52.591 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.591 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.850 nvme0n1 00:25:52.850 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.850 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:52.850 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:52.850 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.850 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.850 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.850 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:52.850 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:52.850 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.850 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.850 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.850 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:52.850 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:25:52.850 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:52.850 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:52.850 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:52.850 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:52.850 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NjAzM2Q1OWM0ZTViOWFlNWY4YmU3ZDQ5ZjAyZTAyYTc0M2RhYmRmMjgxNTZhNzkxg4dGkg==: 00:25:52.850 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWUyNzgyNWZjMGZkYjE2N2Y1MWIwODhhZjYzMzhhZTFiZDFiZTJiOWM4OTQ1NmZmSxM3tw==: 00:25:52.850 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:52.850 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:52.850 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjAzM2Q1OWM0ZTViOWFlNWY4YmU3ZDQ5ZjAyZTAyYTc0M2RhYmRmMjgxNTZhNzkxg4dGkg==: 00:25:52.850 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWUyNzgyNWZjMGZkYjE2N2Y1MWIwODhhZjYzMzhhZTFiZDFiZTJiOWM4OTQ1NmZmSxM3tw==: ]] 00:25:52.850 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWUyNzgyNWZjMGZkYjE2N2Y1MWIwODhhZjYzMzhhZTFiZDFiZTJiOWM4OTQ1NmZmSxM3tw==: 00:25:52.850 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:25:52.850 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:52.850 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:52.850 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:52.850 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:52.850 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:52.850 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:52.850 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.850 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.850 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:52.850 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:52.850 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:52.850 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:52.850 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:52.850 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:52.850 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:52.850 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:52.850 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:52.850 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:52.850 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:52.850 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:52.850 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:52.850 18:20:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:52.850 18:20:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.418 nvme0n1 00:25:53.418 18:20:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:53.418 18:20:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:53.418 18:20:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:53.418 18:20:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.418 18:20:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.418 18:20:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:53.418 18:20:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:53.418 18:20:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:53.418 18:20:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.418 18:20:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.418 18:20:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:53.418 18:20:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:53.418 18:20:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:25:53.418 18:20:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:53.418 18:20:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:53.418 18:20:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:53.419 18:20:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:53.419 18:20:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWM4M2FiMGI4NzA1MTYyMTEwYTY0YTFlNWY5ZjFjM2JfH8UA: 00:25:53.419 18:20:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODA1NDM0MDQ5OTczMDg4Nzc1ZDAzZWNjZTY4NDgwZWbydunZ: 00:25:53.419 18:20:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:53.419 18:20:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:53.419 18:20:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWM4M2FiMGI4NzA1MTYyMTEwYTY0YTFlNWY5ZjFjM2JfH8UA: 00:25:53.419 18:20:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODA1NDM0MDQ5OTczMDg4Nzc1ZDAzZWNjZTY4NDgwZWbydunZ: ]] 00:25:53.419 18:20:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODA1NDM0MDQ5OTczMDg4Nzc1ZDAzZWNjZTY4NDgwZWbydunZ: 00:25:53.419 18:20:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:25:53.419 18:20:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:53.419 18:20:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:53.419 18:20:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:53.419 18:20:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:53.419 18:20:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:53.419 18:20:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:53.419 18:20:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.419 18:20:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.419 18:20:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:53.419 18:20:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:53.419 18:20:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:53.419 18:20:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:53.419 18:20:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:53.419 18:20:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:53.419 18:20:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:53.419 18:20:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:53.419 18:20:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:53.419 18:20:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:53.419 18:20:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:53.419 18:20:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:53.419 18:20:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:53.419 18:20:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.419 18:20:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.678 nvme0n1 00:25:53.678 18:20:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:53.678 18:20:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:53.678 18:20:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.678 18:20:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.678 18:20:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:53.678 18:20:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:53.678 18:20:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:53.678 18:20:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:53.678 18:20:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.678 18:20:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.678 18:20:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:53.678 18:20:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:53.678 18:20:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:25:53.678 18:20:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:53.678 18:20:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:53.678 18:20:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:53.678 18:20:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:53.678 18:20:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWNlZWRlNTkxYjVjNDhkMTVlNzFhZWZmOGY2ODI5YWQzOTg2ZmRlN2YwNjM3ZTkzDCOlCg==: 00:25:53.678 18:20:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTBjNjE5YWY0NjIxZTVlM2JiZmI5OTI0MGIzYzllZGMNO4tj: 00:25:53.678 18:20:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:53.678 18:20:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:53.678 18:20:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWNlZWRlNTkxYjVjNDhkMTVlNzFhZWZmOGY2ODI5YWQzOTg2ZmRlN2YwNjM3ZTkzDCOlCg==: 00:25:53.678 18:20:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTBjNjE5YWY0NjIxZTVlM2JiZmI5OTI0MGIzYzllZGMNO4tj: ]] 00:25:53.678 18:20:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTBjNjE5YWY0NjIxZTVlM2JiZmI5OTI0MGIzYzllZGMNO4tj: 00:25:53.678 18:20:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:25:53.678 18:20:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:53.678 18:20:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:53.678 18:20:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:53.678 18:20:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:53.678 18:20:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:53.678 18:20:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:53.678 18:20:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.678 18:20:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.678 18:20:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:53.678 18:20:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:53.678 18:20:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:53.678 18:20:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:53.678 18:20:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:53.678 18:20:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:53.678 18:20:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:53.678 18:20:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:53.678 18:20:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:53.678 18:20:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:53.678 18:20:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:53.678 18:20:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:53.678 18:20:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:53.678 18:20:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:53.678 18:20:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.246 nvme0n1 00:25:54.246 18:20:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.246 18:20:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:54.246 18:20:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:54.246 18:20:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.246 18:20:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.246 18:20:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.246 18:20:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:54.246 18:20:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:54.246 18:20:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.246 18:20:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.246 18:20:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.246 18:20:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:54.246 18:20:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:25:54.246 18:20:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:54.246 18:20:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:54.246 18:20:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:54.246 18:20:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:54.246 18:20:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2U5MDIwYTBhMjE3MDM2NmY5ZmIyMWZjMjQ5MjkzZDZjOTVjZWViZTEwNzBjNDAyYTVjOWJkMjJiODk0Yjc0ZnZ837U=: 00:25:54.246 18:20:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:54.246 18:20:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:54.246 18:20:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:54.246 18:20:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2U5MDIwYTBhMjE3MDM2NmY5ZmIyMWZjMjQ5MjkzZDZjOTVjZWViZTEwNzBjNDAyYTVjOWJkMjJiODk0Yjc0ZnZ837U=: 00:25:54.246 18:20:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:54.246 18:20:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:25:54.246 18:20:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:54.246 18:20:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:54.246 18:20:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:54.246 18:20:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:54.246 18:20:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:54.246 18:20:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:54.246 18:20:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.246 18:20:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.246 18:20:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.246 18:20:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:54.246 18:20:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:54.246 18:20:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:54.246 18:20:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:54.246 18:20:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:54.246 18:20:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:54.246 18:20:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:54.246 18:20:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:54.246 18:20:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:54.246 18:20:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:54.246 18:20:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:54.246 18:20:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:54.246 18:20:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.246 18:20:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.505 nvme0n1 00:25:54.505 18:20:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.505 18:20:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:54.505 18:20:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.505 18:20:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:54.505 18:20:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.505 18:20:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.505 18:20:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:54.505 18:20:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:54.505 18:20:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.505 18:20:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.764 18:20:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.764 18:20:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:54.764 18:20:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:54.764 18:20:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:25:54.764 18:20:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:54.764 18:20:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:54.764 18:20:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:54.764 18:20:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:54.764 18:20:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTg3N2NlMTZjZGFkZDc3ZmM4MDY5MGQwMDBjNWU1MzhGYBq2: 00:25:54.764 18:20:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MmRlMjc4MDgxOTYxNDllMGJiNjM3OGRmYzQ5NTg2N2RmZWM4ZGUzMTBlNDgyM2ZkNTM5OTEyYjUzNmE3ZThmZRiuwEg=: 00:25:54.764 18:20:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:54.764 18:20:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:54.764 18:20:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTg3N2NlMTZjZGFkZDc3ZmM4MDY5MGQwMDBjNWU1MzhGYBq2: 00:25:54.764 18:20:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MmRlMjc4MDgxOTYxNDllMGJiNjM3OGRmYzQ5NTg2N2RmZWM4ZGUzMTBlNDgyM2ZkNTM5OTEyYjUzNmE3ZThmZRiuwEg=: ]] 00:25:54.764 18:20:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MmRlMjc4MDgxOTYxNDllMGJiNjM3OGRmYzQ5NTg2N2RmZWM4ZGUzMTBlNDgyM2ZkNTM5OTEyYjUzNmE3ZThmZRiuwEg=: 00:25:54.764 18:20:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:25:54.764 18:20:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:54.764 18:20:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:54.764 18:20:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:54.764 18:20:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:54.764 18:20:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:54.764 18:20:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:54.764 18:20:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.764 18:20:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.764 18:20:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:54.764 18:20:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:54.764 18:20:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:54.764 18:20:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:54.764 18:20:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:54.764 18:20:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:54.764 18:20:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:54.764 18:20:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:54.764 18:20:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:54.764 18:20:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:54.764 18:20:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:54.764 18:20:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:54.764 18:20:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:54.764 18:20:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:54.764 18:20:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.331 nvme0n1 00:25:55.331 18:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.331 18:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:55.331 18:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:55.331 18:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.331 18:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.331 18:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.331 18:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:55.331 18:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:55.331 18:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.331 18:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.331 18:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.331 18:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:55.331 18:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:25:55.331 18:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:55.331 18:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:55.331 18:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:55.331 18:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:55.331 18:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjAzM2Q1OWM0ZTViOWFlNWY4YmU3ZDQ5ZjAyZTAyYTc0M2RhYmRmMjgxNTZhNzkxg4dGkg==: 00:25:55.331 18:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWUyNzgyNWZjMGZkYjE2N2Y1MWIwODhhZjYzMzhhZTFiZDFiZTJiOWM4OTQ1NmZmSxM3tw==: 00:25:55.331 18:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:55.331 18:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:55.331 18:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NjAzM2Q1OWM0ZTViOWFlNWY4YmU3ZDQ5ZjAyZTAyYTc0M2RhYmRmMjgxNTZhNzkxg4dGkg==: 00:25:55.331 18:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWUyNzgyNWZjMGZkYjE2N2Y1MWIwODhhZjYzMzhhZTFiZDFiZTJiOWM4OTQ1NmZmSxM3tw==: ]] 00:25:55.331 18:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWUyNzgyNWZjMGZkYjE2N2Y1MWIwODhhZjYzMzhhZTFiZDFiZTJiOWM4OTQ1NmZmSxM3tw==: 00:25:55.331 18:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:25:55.331 18:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:55.331 18:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:55.331 18:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:55.331 18:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:55.331 18:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:55.331 18:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:55.331 18:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.331 18:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.332 18:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.332 18:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:55.332 18:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:55.332 18:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:55.332 18:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:55.332 18:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:55.332 18:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:55.332 18:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:55.332 18:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:55.332 18:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:55.332 18:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:55.332 18:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:55.332 18:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:55.332 18:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.332 18:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.898 nvme0n1 00:25:55.898 18:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.898 18:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:55.898 18:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:55.898 18:20:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.898 18:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.898 18:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.898 18:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:55.898 18:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:55.898 18:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.898 18:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.898 18:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.898 18:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:55.898 18:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:25:55.898 18:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:55.898 18:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:55.898 18:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:55.898 18:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:55.898 18:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NWM4M2FiMGI4NzA1MTYyMTEwYTY0YTFlNWY5ZjFjM2JfH8UA: 00:25:55.898 18:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ODA1NDM0MDQ5OTczMDg4Nzc1ZDAzZWNjZTY4NDgwZWbydunZ: 00:25:55.898 18:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:55.898 18:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:55.898 18:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NWM4M2FiMGI4NzA1MTYyMTEwYTY0YTFlNWY5ZjFjM2JfH8UA: 00:25:55.898 18:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ODA1NDM0MDQ5OTczMDg4Nzc1ZDAzZWNjZTY4NDgwZWbydunZ: ]] 00:25:55.898 18:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ODA1NDM0MDQ5OTczMDg4Nzc1ZDAzZWNjZTY4NDgwZWbydunZ: 00:25:55.898 18:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:25:55.898 18:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:55.898 18:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:55.898 18:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:55.898 18:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:55.898 18:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:55.898 18:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:55.898 18:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.898 18:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.898 18:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:55.898 18:20:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:55.898 18:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:55.898 18:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:55.898 18:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:55.898 18:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:55.898 18:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:55.898 18:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:55.898 18:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:55.898 18:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:55.898 18:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:55.898 18:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:55.898 18:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:55.898 18:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:55.899 18:20:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.465 nvme0n1 00:25:56.466 18:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.466 18:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:56.466 18:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:56.466 18:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.466 18:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.466 18:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.466 18:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:56.466 18:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:56.466 18:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.466 18:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.466 18:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.466 18:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:56.466 18:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:25:56.466 18:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:56.466 18:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:56.466 18:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:56.466 18:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:56.466 18:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NWNlZWRlNTkxYjVjNDhkMTVlNzFhZWZmOGY2ODI5YWQzOTg2ZmRlN2YwNjM3ZTkzDCOlCg==: 00:25:56.466 18:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OTBjNjE5YWY0NjIxZTVlM2JiZmI5OTI0MGIzYzllZGMNO4tj: 00:25:56.466 18:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:56.466 18:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:56.466 18:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWNlZWRlNTkxYjVjNDhkMTVlNzFhZWZmOGY2ODI5YWQzOTg2ZmRlN2YwNjM3ZTkzDCOlCg==: 00:25:56.466 18:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OTBjNjE5YWY0NjIxZTVlM2JiZmI5OTI0MGIzYzllZGMNO4tj: ]] 00:25:56.466 18:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OTBjNjE5YWY0NjIxZTVlM2JiZmI5OTI0MGIzYzllZGMNO4tj: 00:25:56.466 18:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:25:56.466 18:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:56.466 18:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:56.466 18:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:56.466 18:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:56.466 18:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:56.466 18:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:56.466 18:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.466 18:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.466 18:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:56.466 18:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:56.466 18:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:56.466 18:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:56.466 18:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:56.466 18:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:56.466 18:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:56.466 18:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:56.466 18:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:56.466 18:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:56.466 18:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:56.466 18:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:56.466 18:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:56.466 18:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:56.466 
18:20:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.032 nvme0n1 00:25:57.032 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.032 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:57.032 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:57.032 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.032 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.032 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.291 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:57.291 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:57.291 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.291 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.291 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.291 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:57.291 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:25:57.291 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:57.291 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:57.291 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:57.291 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:57.291 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2U5MDIwYTBhMjE3MDM2NmY5ZmIyMWZjMjQ5MjkzZDZjOTVjZWViZTEwNzBjNDAyYTVjOWJkMjJiODk0Yjc0ZnZ837U=: 00:25:57.291 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:57.291 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:57.291 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:57.291 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2U5MDIwYTBhMjE3MDM2NmY5ZmIyMWZjMjQ5MjkzZDZjOTVjZWViZTEwNzBjNDAyYTVjOWJkMjJiODk0Yjc0ZnZ837U=: 00:25:57.291 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:57.291 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:25:57.291 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:57.291 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:57.291 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:57.291 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:57.291 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:57.291 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:57.291 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.291 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.291 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.291 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:57.291 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:57.291 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:57.291 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:57.291 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:57.291 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:57.291 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:57.291 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:57.291 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:57.291 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:57.291 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:57.291 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:57.291 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.291 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.858 nvme0n1 00:25:57.858 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.858 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:57.858 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:57.858 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.858 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.858 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.858 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:57.858 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:57.858 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.858 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.858 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.858 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:57.858 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:57.858 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:57.858 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:57.858 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:25:57.858 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NjAzM2Q1OWM0ZTViOWFlNWY4YmU3ZDQ5ZjAyZTAyYTc0M2RhYmRmMjgxNTZhNzkxg4dGkg==: 00:25:57.858 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OWUyNzgyNWZjMGZkYjE2N2Y1MWIwODhhZjYzMzhhZTFiZDFiZTJiOWM4OTQ1NmZmSxM3tw==: 00:25:57.858 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:57.858 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:57.858 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NjAzM2Q1OWM0ZTViOWFlNWY4YmU3ZDQ5ZjAyZTAyYTc0M2RhYmRmMjgxNTZhNzkxg4dGkg==: 00:25:57.858 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OWUyNzgyNWZjMGZkYjE2N2Y1MWIwODhhZjYzMzhhZTFiZDFiZTJiOWM4OTQ1NmZmSxM3tw==: ]] 00:25:57.859 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OWUyNzgyNWZjMGZkYjE2N2Y1MWIwODhhZjYzMzhhZTFiZDFiZTJiOWM4OTQ1NmZmSxM3tw==: 00:25:57.859 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:57.859 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.859 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.859 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.859 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:25:57.859 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:57.859 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:57.859 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:57.859 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:57.859 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:57.859 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:57.859 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:57.859 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:57.859 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:57.859 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:57.859 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:57.859 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:25:57.859 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:57.859 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:57.859 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:57.859 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:57.859 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:57.859 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:57.859 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.859 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.859 request: 00:25:57.859 { 00:25:57.859 "name": "nvme0", 00:25:57.859 "trtype": "tcp", 00:25:57.859 "traddr": "10.0.0.1", 00:25:57.859 "adrfam": "ipv4", 00:25:57.859 "trsvcid": "4420", 00:25:57.859 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:57.859 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:57.859 "prchk_reftag": false, 00:25:57.859 "prchk_guard": false, 00:25:57.859 "hdgst": false, 00:25:57.859 "ddgst": false, 00:25:57.859 "method": "bdev_nvme_attach_controller", 00:25:57.859 "req_id": 1 00:25:57.859 } 00:25:57.859 Got JSON-RPC error response 00:25:57.859 response: 00:25:57.859 { 00:25:57.859 "code": -5, 00:25:57.859 "message": "Input/output error" 00:25:57.859 } 00:25:57.859 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:57.859 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:25:57.859 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:57.859 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:57.859 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:57.859 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:25:57.859 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:25:57.859 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.859 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.859 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:57.859 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:25:57.859 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:25:57.859 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:57.859 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:57.859 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:57.859 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:57.859 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:57.859 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:57.859 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:57.859 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:57.859 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:57.859 18:20:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:57.859 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:57.859 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:25:57.859 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:57.859 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:57.859 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:57.859 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:57.859 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:57.859 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:57.859 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:57.859 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.119 request: 00:25:58.119 { 00:25:58.119 "name": "nvme0", 00:25:58.119 "trtype": "tcp", 00:25:58.119 "traddr": "10.0.0.1", 00:25:58.119 "adrfam": "ipv4", 00:25:58.119 "trsvcid": "4420", 00:25:58.119 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:58.119 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:58.119 "prchk_reftag": false, 00:25:58.119 "prchk_guard": false, 00:25:58.119 "hdgst": false, 00:25:58.119 "ddgst": false, 00:25:58.119 "dhchap_key": "key2", 00:25:58.119 "method": "bdev_nvme_attach_controller", 00:25:58.119 "req_id": 1 00:25:58.119 } 00:25:58.119 Got JSON-RPC error response 00:25:58.119 response: 00:25:58.119 { 00:25:58.119 "code": -5, 00:25:58.119 "message": "Input/output error" 00:25:58.119 } 00:25:58.119 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:58.119 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:25:58.119 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:58.119 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:58.119 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:58.119 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:25:58.119 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:25:58.119 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.119 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.119 18:20:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:58.119 18:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:25:58.119 18:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@123 -- # get_main_ns_ip 00:25:58.119 18:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:58.119 18:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:58.119 18:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:58.119 18:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:58.119 18:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:58.119 18:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:58.119 18:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:58.119 18:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:58.119 18:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:58.119 18:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:58.119 18:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:58.119 18:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:25:58.119 18:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:58.119 18:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:58.119 18:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:58.119 18:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:58.119 18:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:58.119 18:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:58.119 18:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:58.119 18:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.119 request: 00:25:58.119 { 00:25:58.119 "name": "nvme0", 00:25:58.119 "trtype": "tcp", 00:25:58.119 "traddr": "10.0.0.1", 00:25:58.119 "adrfam": "ipv4", 00:25:58.119 "trsvcid": "4420", 00:25:58.119 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:58.119 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:58.119 "prchk_reftag": false, 00:25:58.119 "prchk_guard": false, 00:25:58.119 "hdgst": false, 00:25:58.119 "ddgst": false, 00:25:58.119 "dhchap_key": "key1", 00:25:58.119 "dhchap_ctrlr_key": "ckey2", 00:25:58.119 "method": "bdev_nvme_attach_controller", 00:25:58.119 "req_id": 1 00:25:58.119 } 00:25:58.119 Got JSON-RPC error response 00:25:58.119 response: 00:25:58.119 { 00:25:58.119 "code": -5, 00:25:58.119 "message": "Input/output error" 00:25:58.119 } 00:25:58.119 18:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:58.119 18:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:25:58.119 18:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:58.119 18:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:58.119 18:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:58.119 18:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:25:58.119 18:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:25:58.119 18:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:25:58.119 18:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:58.119 18:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:25:58.119 18:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:58.119 18:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:25:58.119 18:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:58.119 18:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:58.119 rmmod nvme_tcp 00:25:58.119 rmmod nvme_fabrics 00:25:58.119 18:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:58.119 18:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:25:58.119 18:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:25:58.119 18:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 3533819 ']' 00:25:58.119 18:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 3533819 00:25:58.119 18:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 3533819 ']' 00:25:58.119 18:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 3533819 00:25:58.119 18:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:25:58.119 18:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:58.119 18:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3533819 00:25:58.119 18:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:58.119 18:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:58.119 18:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3533819' 00:25:58.119 killing process with pid 3533819 00:25:58.119 18:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 3533819 00:25:58.119 18:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 3533819 00:25:58.378 18:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:58.378 18:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:58.378 18:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:58.378 18:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:58.378 18:20:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:58.378 18:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:58.378 18:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:58.378 18:20:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:00.911 18:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:00.911 18:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:26:00.911 18:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:00.911 18:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:26:00.911 18:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:26:00.911 18:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:26:00.911 18:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:00.911 18:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:00.911 18:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:00.911 18:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:00.911 18:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:26:00.911 18:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:26:00.911 18:20:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:02.809 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:02.809 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:02.809 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:02.809 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:02.809 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:02.809 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:02.809 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:02.809 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:02.809 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:02.809 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:02.809 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:02.809 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:02.809 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:02.809 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:02.809 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:03.067 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:04.442 0000:5f:00.0 (8086 0a54): nvme -> vfio-pci 00:26:04.442 18:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.5tr /tmp/spdk.key-null.owQ /tmp/spdk.key-sha256.RSx /tmp/spdk.key-sha384.vS5 /tmp/spdk.key-sha512.2Aj /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:26:04.442 18:20:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:06.974 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:26:06.974 0000:5f:00.0 (8086 0a54): Already using the vfio-pci driver 00:26:06.974 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:26:06.974 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:26:06.974 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:26:06.974 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:26:06.974 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:26:06.974 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:26:06.974 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:26:06.974 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:26:06.974 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:26:06.974 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:26:06.974 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:26:06.974 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:26:06.974 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:26:06.974 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:26:06.974 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:26:06.974 00:26:06.974 real 0m49.286s 00:26:06.974 user 0m43.505s 00:26:06.974 sys 0m11.351s 00:26:06.974 18:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:06.974 18:21:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:06.974 ************************************ 00:26:06.974 END TEST nvmf_auth_host 00:26:06.974 ************************************ 00:26:06.974 18:21:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:26:06.974 18:21:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:06.974 18:21:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:06.974 18:21:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:06.974 18:21:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.233 ************************************ 00:26:07.233 START TEST nvmf_digest 00:26:07.233 ************************************ 00:26:07.233 18:21:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:07.233 * Looking for test storage... 
00:26:07.233 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:07.233 18:21:00 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:07.233 18:21:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:26:07.234 18:21:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:07.234 18:21:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:07.234 18:21:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:07.234 18:21:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:07.234 18:21:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:07.234 18:21:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:07.234 18:21:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:07.234 18:21:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:07.234 18:21:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:07.234 18:21:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:07.234 18:21:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:26:07.234 18:21:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:26:07.234 18:21:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:07.234 18:21:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:07.234 18:21:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:07.234 18:21:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:07.234 18:21:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:07.234 18:21:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:07.234 18:21:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:07.234 18:21:00 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:07.234 18:21:00 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:07.234 18:21:00 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:07.234 18:21:00 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:07.234 18:21:00 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:26:07.234 18:21:00 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:07.234 18:21:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:26:07.234 18:21:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:07.234 18:21:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:07.234 18:21:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:07.234 18:21:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:07.234 18:21:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:07.234 18:21:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:07.234 18:21:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:07.234 18:21:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:07.234 18:21:00 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:26:07.234 18:21:00 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:26:07.234 18:21:00 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:26:07.234 18:21:00 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:26:07.234 18:21:00 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:26:07.234 18:21:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:07.234 
18:21:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:07.234 18:21:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:07.234 18:21:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:07.234 18:21:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:07.234 18:21:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:07.234 18:21:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:07.234 18:21:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:07.234 18:21:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:07.234 18:21:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:07.234 18:21:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:26:07.234 18:21:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:13.890 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:13.890 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:26:13.890 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:13.890 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:13.890 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:13.890 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:13.890 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:13.890 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:26:13.890 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:13.890 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:26:13.890 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:26:13.890 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:26:13.890 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:26:13.890 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:26:13.890 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:26:13.890 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:13.890 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:13.890 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:13.890 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:13.890 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:13.890 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:13.890 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:13.891 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:13.891 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:13.891 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:13.891 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:13.891 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:13.891 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:13.891 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:13.891 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:13.891 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:13.891 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:13.891 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:13.891 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:13.891 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:13.891 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:13.891 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:13.891 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:13.891 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:13.891 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:13.891 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:13.891 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:13.891 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:13.891 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:13.891 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:13.891 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:13.891 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:13.891 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:13.891 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:13.891 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:13.891 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:13.891 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:13.891 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:13.891 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:13.891 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:13.891 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:13.891 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:13.891 
18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:13.891 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:13.891 Found net devices under 0000:86:00.0: cvl_0_0 00:26:13.891 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:13.891 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:13.891 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:13.891 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:13.891 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:13.891 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:13.891 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:13.891 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:13.891 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:13.891 Found net devices under 0000:86:00.1: cvl_0_1 00:26:13.891 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:13.891 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:13.891 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:26:13.891 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:13.891 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:13.891 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:13.891 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:13.891 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:13.891 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:13.891 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:13.891 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:13.891 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:13.891 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:13.891 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:13.891 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:13.891 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:13.891 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:13.891 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:13.891 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:13.891 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:13.891 18:21:05 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:13.891 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:13.891 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:13.891 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:13.891 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:13.891 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:13.891 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:13.891 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.160 ms 00:26:13.891 00:26:13.891 --- 10.0.0.2 ping statistics --- 00:26:13.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:13.891 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:26:13.891 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:13.891 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:13.891 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:26:13.891 00:26:13.891 --- 10.0.0.1 ping statistics --- 00:26:13.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:13.891 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:26:13.891 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:13.891 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:26:13.891 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:13.891 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:13.891 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:13.891 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:13.891 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:13.891 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:13.891 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:13.891 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:13.891 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:26:13.891 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:26:13.891 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:13.891 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:13.891 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:13.891 ************************************ 00:26:13.891 START TEST nvmf_digest_clean 00:26:13.891 ************************************ 00:26:13.891 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:26:13.891 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:26:13.891 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:26:13.891 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:26:13.891 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:26:13.891 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:26:13.891 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:13.891 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:13.891 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:13.891 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=3547003 00:26:13.891 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 3547003 00:26:13.892 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:13.892 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 3547003 ']' 00:26:13.892 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:13.892 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:13.892 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:13.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:13.892 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:13.892 18:21:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:13.892 [2024-07-24 18:21:06.009756] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 00:26:13.892 [2024-07-24 18:21:06.009802] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:13.892 EAL: No free 2048 kB hugepages reported on node 1 00:26:13.892 [2024-07-24 18:21:06.069760] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:13.892 [2024-07-24 18:21:06.142470] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:13.892 [2024-07-24 18:21:06.142516] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:13.892 [2024-07-24 18:21:06.142523] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:13.892 [2024-07-24 18:21:06.142528] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:13.892 [2024-07-24 18:21:06.142533] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
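The nvmf_tcp_init trace above reduces to a small network-namespace recipe: the first E810 port (cvl_0_0) is moved into a private namespace and becomes the target side, the second port (cvl_0_1) stays in the root namespace as the initiator, and an iptables rule opens the NVMe/TCP listening port. A condensed sketch of the traced commands (names exactly as discovered above):

  # split target/initiator across a network namespace (per the nvmf/common.sh trace)
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target NIC into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator IP, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP, inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                                   # initiator -> target sanity check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator sanity check

From here on every target-side command is prefixed with ip netns exec cvl_0_0_ns_spdk through NVMF_TARGET_NS_CMD, which is exactly what the NVMF_APP assignment above prepends.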
00:26:13.892 [2024-07-24 18:21:06.142555] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:13.892 18:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:13.892 18:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:26:13.892 18:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:13.892 18:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:13.892 18:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:13.892 18:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:13.892 18:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:26:13.892 18:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:26:13.892 18:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:26:13.892 18:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:13.892 18:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:13.892 null0 00:26:13.892 [2024-07-24 18:21:06.934757] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:13.892 [2024-07-24 18:21:06.958934] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:13.892 18:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:13.892 18:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:26:13.892 18:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:13.892 18:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:13.892 18:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:13.892 18:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:13.892 18:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:13.892 18:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:13.892 18:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3547241 00:26:13.892 18:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3547241 /var/tmp/bperf.sock 00:26:13.892 18:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 3547241 ']' 00:26:13.892 18:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:13.892 18:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:13.892 18:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:26:13.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:13.892 18:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:13.892 18:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:13.892 18:21:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:14.151 [2024-07-24 18:21:07.005929] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 00:26:14.151 [2024-07-24 18:21:07.005975] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3547241 ] 00:26:14.151 EAL: No free 2048 kB hugepages reported on node 1 00:26:14.151 [2024-07-24 18:21:07.058541] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:14.151 [2024-07-24 18:21:07.141563] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:15.087 18:21:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:15.087 18:21:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:26:15.087 18:21:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:15.087 18:21:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:15.087 18:21:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:15.087 18:21:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:15.088 18:21:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:15.655 nvme0n1 00:26:15.655 18:21:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:15.655 18:21:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:15.655 Running I/O for 2 seconds... 
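Each run_bperf invocation in this test repeats the same four-step pattern, all of it visible in the trace above; a condensed sketch, with $SPDK standing in for the /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk checkout:

  # 1. start bdevperf suspended, with its own RPC socket
  $SPDK/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
      -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  # 2. finish subsystem init once the socket answers
  $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
  # 3. attach the target; --ddgst enables the NVMe/TCP data digest (CRC32C over each data PDU)
  $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # 4. kick off the timed run against the resulting nvme0n1 bdev
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

The two "wait" flags do different jobs: --wait-for-rpc defers framework init so accel/digest configuration can land first, while -z makes bdevperf hold the workload until perform_tests is called.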
00:26:17.558 00:26:17.558 Latency(us) 00:26:17.558 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:17.558 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:17.558 nvme0n1 : 2.00 26578.61 103.82 0.00 0.00 4811.69 2527.82 11234.74 00:26:17.558 =================================================================================================================== 00:26:17.558 Total : 26578.61 103.82 0.00 0.00 4811.69 2527.82 11234.74 00:26:17.558 0 00:26:17.558 18:21:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:17.558 18:21:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:17.558 18:21:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:17.558 18:21:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:17.558 | select(.opcode=="crc32c") 00:26:17.558 | "\(.module_name) \(.executed)"' 00:26:17.558 18:21:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:17.817 18:21:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:17.817 18:21:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:17.817 18:21:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:17.817 18:21:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:17.817 18:21:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3547241 00:26:17.817 18:21:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 3547241 ']' 00:26:17.817 18:21:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 3547241 00:26:17.817 18:21:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:26:17.817 18:21:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:17.817 18:21:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3547241 00:26:17.817 18:21:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:17.817 18:21:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:17.817 18:21:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3547241' 00:26:17.817 killing process with pid 3547241 00:26:17.817 18:21:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 3547241 00:26:17.817 Received shutdown signal, test time was about 2.000000 seconds 00:26:17.817 00:26:17.817 Latency(us) 00:26:17.817 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:17.817 =================================================================================================================== 00:26:17.817 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:17.817 18:21:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@974 -- # wait 3547241 00:26:18.077 18:21:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:26:18.077 18:21:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:18.077 18:21:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:18.077 18:21:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:18.077 18:21:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:18.077 18:21:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:18.077 18:21:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:18.077 18:21:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3548325 00:26:18.077 18:21:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3548325 /var/tmp/bperf.sock 00:26:18.077 18:21:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:18.077 18:21:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 3548325 ']' 00:26:18.077 18:21:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:18.077 18:21:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:18.077 18:21:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:18.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:18.077 18:21:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:18.077 18:21:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:18.077 [2024-07-24 18:21:11.010740] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 00:26:18.077 [2024-07-24 18:21:11.010787] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3548325 ] 00:26:18.077 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:18.077 Zero copy mechanism will not be used. 
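The "zero copy threshold" notice above is expected, not a failure: this second pass switches to 128 KiB reads at queue depth 16, and 131072-byte I/Os exceed bdevperf's 65536-byte zero-copy threshold, so it simply falls back to regular buffered I/O for the job. Only the I/O shape changes relative to the 4 KiB run:

  # large-block variant of the same digest-clean test
  $SPDK/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
      -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc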
00:26:18.077 EAL: No free 2048 kB hugepages reported on node 1 00:26:18.077 [2024-07-24 18:21:11.064943] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:18.077 [2024-07-24 18:21:11.131824] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:19.016 18:21:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:19.016 18:21:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:26:19.016 18:21:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:19.016 18:21:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:19.016 18:21:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:19.016 18:21:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:19.016 18:21:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:19.275 nvme0n1 00:26:19.534 18:21:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:19.534 18:21:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:19.534 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:19.534 Zero copy mechanism will not be used. 00:26:19.534 Running I/O for 2 seconds... 
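After every run the harness checks which accel module actually computed the digests. The get_accel_stats helper is an RPC plus a jq filter over the crc32c entry, and its one-line output feeds the read -r acc_module acc_executed seen after each results table:

  # emit "<module_name> <executed>" for the crc32c opcode
  $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
      | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'

With scan_dsa=false the expected module is software, and the check only passes if the executed count is greater than zero, i.e. the digests really went through the accel framework.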
00:26:21.435 00:26:21.435 Latency(us) 00:26:21.435 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:21.435 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:21.435 nvme0n1 : 2.00 5573.47 696.68 0.00 0.00 2868.08 940.13 8051.57 00:26:21.435 =================================================================================================================== 00:26:21.435 Total : 5573.47 696.68 0.00 0.00 2868.08 940.13 8051.57 00:26:21.435 0 00:26:21.435 18:21:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:21.435 18:21:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:21.435 18:21:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:21.435 18:21:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:21.435 | select(.opcode=="crc32c") 00:26:21.435 | "\(.module_name) \(.executed)"' 00:26:21.435 18:21:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:21.694 18:21:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:21.694 18:21:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:21.694 18:21:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:21.694 18:21:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:21.694 18:21:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3548325 00:26:21.694 18:21:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 3548325 ']' 00:26:21.694 18:21:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 3548325 00:26:21.694 18:21:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:26:21.694 18:21:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:21.694 18:21:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3548325 00:26:21.694 18:21:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:21.694 18:21:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:21.694 18:21:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3548325' 00:26:21.694 killing process with pid 3548325 00:26:21.694 18:21:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 3548325 00:26:21.694 Received shutdown signal, test time was about 2.000000 seconds 00:26:21.694 00:26:21.694 Latency(us) 00:26:21.694 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:21.694 =================================================================================================================== 00:26:21.694 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:21.694 18:21:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@974 -- # wait 3548325 00:26:21.953 18:21:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:26:21.953 18:21:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:21.953 18:21:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:21.953 18:21:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:21.953 18:21:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:21.953 18:21:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:21.953 18:21:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:21.953 18:21:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3548955 00:26:21.953 18:21:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3548955 /var/tmp/bperf.sock 00:26:21.953 18:21:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 3548955 ']' 00:26:21.953 18:21:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:21.953 18:21:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:21.953 18:21:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:21.953 18:21:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:21.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:21.953 18:21:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:21.953 18:21:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:21.953 [2024-07-24 18:21:14.903008] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 
00:26:21.953 [2024-07-24 18:21:14.903057] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3548955 ] 00:26:21.953 EAL: No free 2048 kB hugepages reported on node 1 00:26:21.953 [2024-07-24 18:21:14.956737] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:21.953 [2024-07-24 18:21:15.034670] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:22.890 18:21:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:22.890 18:21:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:26:22.890 18:21:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:22.890 18:21:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:22.890 18:21:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:22.890 18:21:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:22.890 18:21:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:23.458 nvme0n1 00:26:23.458 18:21:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:23.458 18:21:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:23.458 Running I/O for 2 seconds... 
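waitforlisten, used for the target pid and every bperf pid, polls until the process is alive and its RPC socket accepts a request. Roughly, as a simplified approximation of the autotest_common.sh helper rather than its verbatim source:

  waitforlisten() {                        # usage: waitforlisten <pid> [rpc_addr]
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
      for ((i = 100; i != 0; i--)); do
          kill -0 "$pid" 2>/dev/null || return 1      # process died during startup
          # probe the UNIX-domain RPC socket with a harmless request
          "$SPDK/scripts/rpc.py" -s "$rpc_addr" -t 1 rpc_get_methods &>/dev/null && return 0
          sleep 0.1
      done
      return 1                                        # timed out
  }

The "Waiting for process to start up and listen on UNIX domain socket ..." lines in this log are that loop announcing itself.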
00:26:25.362 00:26:25.362 Latency(us) 00:26:25.362 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:25.362 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:25.362 nvme0n1 : 2.00 28979.72 113.20 0.00 0.00 4411.27 1560.38 14917.24 00:26:25.362 =================================================================================================================== 00:26:25.362 Total : 28979.72 113.20 0.00 0.00 4411.27 1560.38 14917.24 00:26:25.362 0 00:26:25.621 18:21:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:25.621 18:21:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:25.621 18:21:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:25.621 18:21:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:25.621 | select(.opcode=="crc32c") 00:26:25.621 | "\(.module_name) \(.executed)"' 00:26:25.621 18:21:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:25.621 18:21:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:25.621 18:21:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:25.621 18:21:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:25.621 18:21:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:25.621 18:21:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3548955 00:26:25.621 18:21:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 3548955 ']' 00:26:25.621 18:21:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 3548955 00:26:25.621 18:21:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:26:25.621 18:21:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:25.621 18:21:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3548955 00:26:25.621 18:21:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:25.621 18:21:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:25.621 18:21:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3548955' 00:26:25.621 killing process with pid 3548955 00:26:25.621 18:21:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 3548955 00:26:25.621 Received shutdown signal, test time was about 2.000000 seconds 00:26:25.621 00:26:25.621 Latency(us) 00:26:25.621 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:25.621 =================================================================================================================== 00:26:25.621 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:25.621 18:21:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@974 -- # wait 3548955 00:26:25.880 18:21:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:26:25.880 18:21:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:25.880 18:21:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:25.880 18:21:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:25.880 18:21:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:25.880 18:21:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:25.880 18:21:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:25.880 18:21:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3549506 00:26:25.880 18:21:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3549506 /var/tmp/bperf.sock 00:26:25.880 18:21:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:25.880 18:21:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 3549506 ']' 00:26:25.880 18:21:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:25.880 18:21:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:25.880 18:21:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:25.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:25.880 18:21:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:25.880 18:21:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:25.880 [2024-07-24 18:21:18.903514] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 00:26:25.880 [2024-07-24 18:21:18.903564] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3549506 ] 00:26:25.880 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:25.880 Zero copy mechanism will not be used. 
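Between runs, killprocess tears down the bperf instance, and the traced checks map onto a small guard sequence; a sketch of the visible behaviour, not the helper's verbatim source:

  killprocess() {                          # usage: killprocess <pid>
      local pid=$1 process_name
      [ -n "$pid" ] || return 1            # the '[' -z ... ']' guard in the trace
      kill -0 "$pid" || return 1           # must still be running
      if [ "$(uname)" = Linux ]; then
          process_name=$(ps --no-headers -o comm= "$pid")
          [ "$process_name" != sudo ] || return 1     # never signal a sudo wrapper
      fi
      echo "killing process with pid $pid"
      kill "$pid"                          # SIGTERM; bdevperf prints shutdown stats
      wait "$pid"
  }

The empty "Received shutdown signal" latency table after each kill is normal: the 2-second window has already finished, so no I/O is in flight when the final stats are dumped.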
00:26:25.880 EAL: No free 2048 kB hugepages reported on node 1 00:26:25.880 [2024-07-24 18:21:18.959001] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:26.139 [2024-07-24 18:21:19.036474] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:26.705 18:21:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:26.705 18:21:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:26:26.705 18:21:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:26.705 18:21:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:26.705 18:21:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:26.963 18:21:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:26.963 18:21:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:27.222 nvme0n1 00:26:27.222 18:21:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:27.222 18:21:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:27.222 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:27.222 Zero copy mechanism will not be used. 00:26:27.222 Running I/O for 2 seconds... 
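The per-run tables are easy to sanity-check: the MiB/s column is just IOPS times block size. For the 128 KiB randread pass above, for instance:

  # 5573.47 IOPS * 131072 B / 1048576 B per MiB = 696.68 MiB/s
  awk 'BEGIN { printf "%.2f\n", 5573.47 * 131072 / 1048576 }'

Average/min/max are completion latencies in microseconds, so the large-block runs trade IOPS for bandwidth exactly as expected.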
00:26:29.757 00:26:29.757 Latency(us) 00:26:29.757 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:29.757 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:26:29.757 nvme0n1 : 2.00 6668.96 833.62 0.00 0.00 2394.95 1451.15 4649.94 00:26:29.757 =================================================================================================================== 00:26:29.757 Total : 6668.96 833.62 0.00 0.00 2394.95 1451.15 4649.94 00:26:29.757 0 00:26:29.757 18:21:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:29.757 18:21:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:29.757 18:21:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:29.757 18:21:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:29.757 | select(.opcode=="crc32c") 00:26:29.757 | "\(.module_name) \(.executed)"' 00:26:29.757 18:21:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:29.757 18:21:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:29.757 18:21:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:29.757 18:21:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:29.757 18:21:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:29.757 18:21:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3549506 00:26:29.757 18:21:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 3549506 ']' 00:26:29.757 18:21:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 3549506 00:26:29.757 18:21:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:26:29.757 18:21:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:29.757 18:21:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3549506 00:26:29.757 18:21:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:29.757 18:21:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:29.757 18:21:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3549506' 00:26:29.757 killing process with pid 3549506 00:26:29.757 18:21:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 3549506 00:26:29.757 Received shutdown signal, test time was about 2.000000 seconds 00:26:29.757 00:26:29.757 Latency(us) 00:26:29.757 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:29.757 =================================================================================================================== 00:26:29.757 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:29.757 18:21:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@974 -- # wait 3549506 00:26:29.757 18:21:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 3547003 00:26:29.757 18:21:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 3547003 ']' 00:26:29.757 18:21:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 3547003 00:26:29.757 18:21:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:26:29.757 18:21:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:29.757 18:21:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3547003 00:26:29.757 18:21:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:29.757 18:21:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:29.757 18:21:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3547003' 00:26:29.757 killing process with pid 3547003 00:26:29.757 18:21:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 3547003 00:26:29.757 18:21:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 3547003 00:26:30.016 00:26:30.016 real 0m16.974s 00:26:30.016 user 0m32.414s 00:26:30.016 sys 0m4.555s 00:26:30.016 18:21:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:30.016 18:21:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:30.016 ************************************ 00:26:30.016 END TEST nvmf_digest_clean 00:26:30.016 ************************************ 00:26:30.017 18:21:22 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:26:30.017 18:21:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:30.017 18:21:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:30.017 18:21:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:30.017 ************************************ 00:26:30.017 START TEST nvmf_digest_error 00:26:30.017 ************************************ 00:26:30.017 18:21:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # run_digest_error 00:26:30.017 18:21:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:26:30.017 18:21:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:30.017 18:21:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:30.017 18:21:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:30.017 18:21:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=3550229 00:26:30.017 18:21:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 3550229 00:26:30.017 18:21:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 3550229 ']' 
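nvmfappstart launches the target inside the namespace built earlier: NVMF_TARGET_NS_CMD is prepended to NVMF_APP, so the traced command line expands to the netns-wrapped target below, and waitforlisten then blocks on the default RPC socket:

  # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") from the init trace
  ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
  nvmfpid=$!
  waitforlisten "$nvmfpid"                 # default socket /var/tmp/spdk.sock

-e 0xFFFF enables the full tracepoint group mask, which is what produces the spdk_trace NOTICE lines at startup.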
00:26:30.017 18:21:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:30.017 18:21:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:30.017 18:21:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:30.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:30.017 18:21:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:30.017 18:21:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:30.017 18:21:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:30.017 [2024-07-24 18:21:23.045194] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 00:26:30.017 [2024-07-24 18:21:23.045233] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:30.017 EAL: No free 2048 kB hugepages reported on node 1 00:26:30.275 [2024-07-24 18:21:23.102896] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:30.275 [2024-07-24 18:21:23.180778] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:30.275 [2024-07-24 18:21:23.180813] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:30.275 [2024-07-24 18:21:23.180820] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:30.275 [2024-07-24 18:21:23.180825] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:30.275 [2024-07-24 18:21:23.180831] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
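What separates nvmf_digest_error from the clean tests shows up just below: while the target is still paused by --wait-for-rpc, crc32c is reassigned from the software module to the accel error module, which can then corrupt results on demand. The target-side RPCs, as traced:

  rpc.py accel_assign_opc -o crc32c -m error                    # route crc32c through the error module
  rpc.py accel_error_inject_error -o crc32c -t disable          # start benign
  rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256   # later: corrupt digests for the test case

In the harness these go through rpc_cmd, which wraps rpc.py against the target's /var/tmp/spdk.sock; the rest of the target config, the null0 bdev, the TCP transport, and the 10.0.0.2:4420 listener whose NOTICE lines appear below, is applied the same way.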
00:26:30.275 [2024-07-24 18:21:23.180847] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:30.840 18:21:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:30.840 18:21:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:26:30.840 18:21:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:30.840 18:21:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:30.840 18:21:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:30.840 18:21:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:30.840 18:21:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:26:30.840 18:21:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.840 18:21:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:30.840 [2024-07-24 18:21:23.862835] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:26:30.840 18:21:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.840 18:21:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:26:30.840 18:21:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:26:30.840 18:21:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.840 18:21:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:31.098 null0 00:26:31.098 [2024-07-24 18:21:23.953130] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:31.098 [2024-07-24 18:21:23.977307] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:31.098 18:21:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.098 18:21:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:26:31.098 18:21:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:31.098 18:21:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:26:31.098 18:21:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:26:31.098 18:21:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:26:31.098 18:21:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3550472 00:26:31.098 18:21:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3550472 /var/tmp/bperf.sock 00:26:31.098 18:21:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 3550472 ']' 00:26:31.098 18:21:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:31.098 18:21:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:26:31.098 18:21:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:26:31.098 18:21:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:31.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:31.098 18:21:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:31.098 18:21:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:31.098 [2024-07-24 18:21:24.012889] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 00:26:31.098 [2024-07-24 18:21:24.012929] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3550472 ] 00:26:31.098 EAL: No free 2048 kB hugepages reported on node 1 00:26:31.098 [2024-07-24 18:21:24.067726] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:31.098 [2024-07-24 18:21:24.139242] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:32.032 18:21:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:32.032 18:21:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:26:32.032 18:21:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:32.032 18:21:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:32.032 18:21:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:32.032 18:21:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.032 18:21:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:32.032 18:21:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.032 18:21:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:32.032 18:21:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:32.289 nvme0n1 00:26:32.289 18:21:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:26:32.289 18:21:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.289 18:21:25 
00:26:32.547 18:21:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:32.547 18:21:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:26:32.547 18:21:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:26:32.547 Running I/O for 2 seconds...
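Each injected corruption now surfaces in the host log as a three-record pattern: nvme_tcp.c flags the data digest mismatch on the receive path, nvme_qpair.c prints the READ that carried it, and the command completes as COMMAND TRANSIENT TRANSPORT ERROR (00/22) with dnr:0, i.e. the do-not-retry bit clear, so the bdev layer requeues it under the --bdev-retry-count -1 set earlier. A quick way to tally a saved copy of this console output (file name hypothetical):

  # the two counts should track each other: one transient-transport-error
  # completion per detected digest error
  grep -c 'data digest error' console.log
  grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' console.log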
00:26:32.547 [2024-07-24 18:21:25.480431] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5a4f0)
00:26:32.547 [2024-07-24 18:21:25.480464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:19226 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.547 [2024-07-24 18:21:25.480475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:32.547 [2024-07-24 18:21:25.491425] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5a4f0)
00:26:32.547 [2024-07-24 18:21:25.491449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.547 [2024-07-24 18:21:25.491458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:32.547 [2024-07-24 18:21:25.500062] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5a4f0)
00:26:32.547 [2024-07-24 18:21:25.500083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:336 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.547 [2024-07-24 18:21:25.500092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:32.547 [2024-07-24 18:21:25.511444] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5a4f0)
00:26:32.547 [2024-07-24 18:21:25.511465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.547 [2024-07-24 18:21:25.511472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:32.547 [2024-07-24 18:21:25.521296] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5a4f0)
00:26:32.547 [2024-07-24 18:21:25.521317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:25558 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.547 [2024-07-24 18:21:25.521325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:32.547 [2024-07-24 18:21:25.529679] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5a4f0)
00:26:32.547 [2024-07-24 18:21:25.529698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:9644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.547 [2024-07-24 18:21:25.529706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:32.547 [2024-07-24 18:21:25.541110] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5a4f0)
00:26:32.547 [2024-07-24 18:21:25.541130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:13713 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.547 [2024-07-24 18:21:25.541138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:32.547 [2024-07-24 18:21:25.552485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5a4f0)
00:26:32.548 [2024-07-24 18:21:25.552509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:1924 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.548 [2024-07-24 18:21:25.552517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:32.548 [2024-07-24 18:21:25.560869] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5a4f0)
00:26:32.548 [2024-07-24 18:21:25.560888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:884 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.548 [2024-07-24 18:21:25.560895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:32.548 [2024-07-24 18:21:25.572137] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5a4f0)
00:26:32.548 [2024-07-24 18:21:25.572156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:22717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.548 [2024-07-24 18:21:25.572168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:32.548 [2024-07-24 18:21:25.582685] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5a4f0)
00:26:32.548 [2024-07-24 18:21:25.582703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:9207 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.548 [2024-07-24 18:21:25.582711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:32.548 [2024-07-24 18:21:25.591015] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5a4f0)
00:26:32.548 [2024-07-24 18:21:25.591034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:5223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.548 [2024-07-24 18:21:25.591042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:32.548 [2024-07-24 18:21:25.601153] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5a4f0)
00:26:32.548 [2024-07-24 18:21:25.601173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.548 [2024-07-24 18:21:25.601180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.548 [2024-07-24 18:21:25.612366] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5a4f0) 00:26:32.548 [2024-07-24 18:21:25.612385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:9187 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.548 [2024-07-24 18:21:25.612394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.548 [2024-07-24 18:21:25.620439] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5a4f0) 00:26:32.548 [2024-07-24 18:21:25.620457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:21483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.548 [2024-07-24 18:21:25.620465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.807 [2024-07-24 18:21:25.631102] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5a4f0) 00:26:32.807 [2024-07-24 18:21:25.631121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:24520 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.807 [2024-07-24 18:21:25.631129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.807 [2024-07-24 18:21:25.640138] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5a4f0) 00:26:32.807 [2024-07-24 18:21:25.640156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:16589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.807 [2024-07-24 18:21:25.640163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.807 [2024-07-24 18:21:25.651170] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5a4f0) 00:26:32.807 [2024-07-24 18:21:25.651189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:6108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.807 [2024-07-24 18:21:25.651196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.807 [2024-07-24 18:21:25.659927] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5a4f0) 00:26:32.807 [2024-07-24 18:21:25.659949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:3852 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.807 [2024-07-24 18:21:25.659957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.807 [2024-07-24 18:21:25.671221] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5a4f0) 00:26:32.807 [2024-07-24 18:21:25.671240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 
lba:14846 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.807 [2024-07-24 18:21:25.671247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.807 [2024-07-24 18:21:25.683039] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5a4f0) 00:26:32.807 [2024-07-24 18:21:25.683057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:3392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.807 [2024-07-24 18:21:25.683065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.807 [2024-07-24 18:21:25.691283] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5a4f0) 00:26:32.807 [2024-07-24 18:21:25.691302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:19480 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.807 [2024-07-24 18:21:25.691309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.807 [2024-07-24 18:21:25.700904] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5a4f0) 00:26:32.807 [2024-07-24 18:21:25.700923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:3873 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.807 [2024-07-24 18:21:25.700930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.807 [2024-07-24 18:21:25.710157] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5a4f0) 00:26:32.807 [2024-07-24 18:21:25.710176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:12560 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.807 [2024-07-24 18:21:25.710183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.807 [2024-07-24 18:21:25.719226] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5a4f0) 00:26:32.807 [2024-07-24 18:21:25.719245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:18944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.807 [2024-07-24 18:21:25.719253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.807 [2024-07-24 18:21:25.730790] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5a4f0) 00:26:32.807 [2024-07-24 18:21:25.730809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:11446 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.807 [2024-07-24 18:21:25.730817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.807 [2024-07-24 18:21:25.738979] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5a4f0) 00:26:32.807 [2024-07-24 18:21:25.738998] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:15402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.807 [2024-07-24 18:21:25.739006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.807 [2024-07-24 18:21:25.750871] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5a4f0) 00:26:32.807 [2024-07-24 18:21:25.750890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:1719 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.807 [2024-07-24 18:21:25.750898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.807 [2024-07-24 18:21:25.762976] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5a4f0) 00:26:32.807 [2024-07-24 18:21:25.762996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:10083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.807 [2024-07-24 18:21:25.763004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.807 [2024-07-24 18:21:25.771417] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5a4f0) 00:26:32.807 [2024-07-24 18:21:25.771436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:15495 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.807 [2024-07-24 18:21:25.771444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.807 [2024-07-24 18:21:25.783066] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5a4f0) 00:26:32.807 [2024-07-24 18:21:25.783085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:3663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.807 [2024-07-24 18:21:25.783093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.807 [2024-07-24 18:21:25.795527] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5a4f0) 00:26:32.807 [2024-07-24 18:21:25.795546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:2649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.807 [2024-07-24 18:21:25.795553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.807 [2024-07-24 18:21:25.803166] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5a4f0) 00:26:32.807 [2024-07-24 18:21:25.803184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.807 [2024-07-24 18:21:25.803192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.807 [2024-07-24 18:21:25.814865] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5a4f0) 
00:26:32.807 [2024-07-24 18:21:25.814883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:2825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.807 [2024-07-24 18:21:25.814891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.807 [2024-07-24 18:21:25.826630] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5a4f0) 00:26:32.807 [2024-07-24 18:21:25.826650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:17390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.807 [2024-07-24 18:21:25.826657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.807 [2024-07-24 18:21:25.838116] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5a4f0) 00:26:32.807 [2024-07-24 18:21:25.838138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:18191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.807 [2024-07-24 18:21:25.838146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.807 [2024-07-24 18:21:25.850394] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5a4f0) 00:26:32.807 [2024-07-24 18:21:25.850412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:17227 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.807 [2024-07-24 18:21:25.850420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.807 [2024-07-24 18:21:25.859517] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5a4f0) 00:26:32.808 [2024-07-24 18:21:25.859535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:23149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.808 [2024-07-24 18:21:25.859543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.808 [2024-07-24 18:21:25.869708] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5a4f0) 00:26:32.808 [2024-07-24 18:21:25.869728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:23441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.808 [2024-07-24 18:21:25.869735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.808 [2024-07-24 18:21:25.880249] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5a4f0) 00:26:32.808 [2024-07-24 18:21:25.880267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:2464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.808 [2024-07-24 18:21:25.880275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.066 [2024-07-24 18:21:25.892501] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5a4f0) 00:26:33.066 [2024-07-24 18:21:25.892520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:5735 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.066 [2024-07-24 18:21:25.892528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.066 [2024-07-24 18:21:25.901456] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5a4f0) 00:26:33.066 [2024-07-24 18:21:25.901474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:4497 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.067 [2024-07-24 18:21:25.901481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.067 [2024-07-24 18:21:25.913542] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5a4f0) 00:26:33.067 [2024-07-24 18:21:25.913560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:16884 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.067 [2024-07-24 18:21:25.913568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.067 [2024-07-24 18:21:25.925755] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5a4f0) 00:26:33.067 [2024-07-24 18:21:25.925773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4740 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.067 [2024-07-24 18:21:25.925782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.067 [2024-07-24 18:21:25.937062] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5a4f0) 00:26:33.067 [2024-07-24 18:21:25.937080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:5971 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.067 [2024-07-24 18:21:25.937088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.067 [2024-07-24 18:21:25.945882] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5a4f0) 00:26:33.067 [2024-07-24 18:21:25.945900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:17925 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.067 [2024-07-24 18:21:25.945908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.067 [2024-07-24 18:21:25.958142] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5a4f0) 00:26:33.067 [2024-07-24 18:21:25.958163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:10242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.067 [2024-07-24 18:21:25.958170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:26:33.067 [2024-07-24 18:21:25.970681] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5a4f0) 00:26:33.067 [2024-07-24 18:21:25.970700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:13280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.067 [2024-07-24 18:21:25.970708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.067 [2024-07-24 18:21:25.980374] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5a4f0) 00:26:33.067 [2024-07-24 18:21:25.980394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:23122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.067 [2024-07-24 18:21:25.980403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.067 [2024-07-24 18:21:25.988871] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5a4f0) 00:26:33.067 [2024-07-24 18:21:25.988891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:1545 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.067 [2024-07-24 18:21:25.988898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.067 [2024-07-24 18:21:25.998963] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5a4f0) 00:26:33.067 [2024-07-24 18:21:25.998984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:8407 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.067 [2024-07-24 18:21:25.998992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.067 [2024-07-24 18:21:26.009486] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5a4f0) 00:26:33.067 [2024-07-24 18:21:26.009512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:14222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.067 [2024-07-24 18:21:26.009522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.067 [2024-07-24 18:21:26.018106] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5a4f0) 00:26:33.067 [2024-07-24 18:21:26.018126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:24792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.067 [2024-07-24 18:21:26.018137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.067 [2024-07-24 18:21:26.028171] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5a4f0) 00:26:33.067 [2024-07-24 18:21:26.028191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:16011 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.067 [2024-07-24 18:21:26.028198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.067 [2024-07-24 18:21:26.037422] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5a4f0) 00:26:33.067 [2024-07-24 18:21:26.037441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:11199 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.067 [2024-07-24 18:21:26.037449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.067 [2024-07-24 18:21:26.047175] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5a4f0) 00:26:33.067 [2024-07-24 18:21:26.047194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:12565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.067 [2024-07-24 18:21:26.047202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.067 [2024-07-24 18:21:26.056060] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5a4f0) 00:26:33.067 [2024-07-24 18:21:26.056080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:23432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.067 [2024-07-24 18:21:26.056087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.067 [2024-07-24 18:21:26.067778] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5a4f0) 00:26:33.067 [2024-07-24 18:21:26.067797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:9516 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.067 [2024-07-24 18:21:26.067805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.067 [2024-07-24 18:21:26.077009] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5a4f0) 00:26:33.067 [2024-07-24 18:21:26.077028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:2613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.067 [2024-07-24 18:21:26.077036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.067 [2024-07-24 18:21:26.085809] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5a4f0) 00:26:33.067 [2024-07-24 18:21:26.085829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23468 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.067 [2024-07-24 18:21:26.085836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.067 [2024-07-24 18:21:26.094810] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5a4f0) 00:26:33.067 [2024-07-24 18:21:26.094830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:55 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.067 [2024-07-24 18:21:26.094837] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.067 [2024-07-24 18:21:26.104305] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5a4f0) 00:26:33.067 [2024-07-24 18:21:26.104328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:16955 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.067 [2024-07-24 18:21:26.104336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.067 [2024-07-24 18:21:26.112264] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5a4f0) 00:26:33.067 [2024-07-24 18:21:26.112283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:4580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.067 [2024-07-24 18:21:26.112291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.067 [2024-07-24 18:21:26.123542] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5a4f0) 00:26:33.067 [2024-07-24 18:21:26.123562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:23611 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.067 [2024-07-24 18:21:26.123569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.067 [2024-07-24 18:21:26.133696] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5a4f0) 00:26:33.067 [2024-07-24 18:21:26.133716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:11238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.067 [2024-07-24 18:21:26.133723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.067 [2024-07-24 18:21:26.142599] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5a4f0) 00:26:33.067 [2024-07-24 18:21:26.142619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:6840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.067 [2024-07-24 18:21:26.142627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.376 [2024-07-24 18:21:26.150918] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5a4f0) 00:26:33.376 [2024-07-24 18:21:26.150937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:8872 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.376 [2024-07-24 18:21:26.150945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.377 [2024-07-24 18:21:26.160863] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5a4f0) 00:26:33.377 [2024-07-24 18:21:26.160883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:8025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:33.377 [2024-07-24 18:21:26.160890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.377 [2024-07-24 18:21:26.170672] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5a4f0) 00:26:33.377 [2024-07-24 18:21:26.170691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:3273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.377 [2024-07-24 18:21:26.170699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.377 [2024-07-24 18:21:26.179882] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5a4f0) 00:26:33.377 [2024-07-24 18:21:26.179902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:19025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.377 [2024-07-24 18:21:26.179910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.377 [2024-07-24 18:21:26.188354] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5a4f0) 00:26:33.377 [2024-07-24 18:21:26.188374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:11404 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.377 [2024-07-24 18:21:26.188381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.377 [2024-07-24 18:21:26.200371] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5a4f0) 00:26:33.377 [2024-07-24 18:21:26.200391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:10063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.377 [2024-07-24 18:21:26.200399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.377 [2024-07-24 18:21:26.208234] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5a4f0) 00:26:33.377 [2024-07-24 18:21:26.208254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:69 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.377 [2024-07-24 18:21:26.208262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.377 [2024-07-24 18:21:26.218594] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5a4f0) 00:26:33.377 [2024-07-24 18:21:26.218613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:22888 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.377 [2024-07-24 18:21:26.218620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.377 [2024-07-24 18:21:26.228660] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5a4f0) 00:26:33.377 [2024-07-24 18:21:26.228680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 
lba:15820 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.377 [2024-07-24 18:21:26.228688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.377 [2024-07-24 18:21:26.237345] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5a4f0) 00:26:33.377 [2024-07-24 18:21:26.237365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:22034 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.377 [2024-07-24 18:21:26.237373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.377 [2024-07-24 18:21:26.248051] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5a4f0) 00:26:33.377 [2024-07-24 18:21:26.248070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:22764 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.377 [2024-07-24 18:21:26.248079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.377 [2024-07-24 18:21:26.256673] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5a4f0) 00:26:33.377 [2024-07-24 18:21:26.256693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:14091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.377 [2024-07-24 18:21:26.256700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.377 [2024-07-24 18:21:26.268509] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5a4f0) 00:26:33.377 [2024-07-24 18:21:26.268530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:16596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.377 [2024-07-24 18:21:26.268541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.377 [2024-07-24 18:21:26.276376] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5a4f0) 00:26:33.377 [2024-07-24 18:21:26.276395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:15179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.377 [2024-07-24 18:21:26.276403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.377 [2024-07-24 18:21:26.287032] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5a4f0) 00:26:33.377 [2024-07-24 18:21:26.287051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.377 [2024-07-24 18:21:26.287059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.377 [2024-07-24 18:21:26.295948] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5a4f0) 00:26:33.377 [2024-07-24 18:21:26.295967] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:2809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.377 [2024-07-24 18:21:26.295975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.377 [2024-07-24 18:21:26.305134] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5a4f0) 00:26:33.377 [2024-07-24 18:21:26.305153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:24127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.377 [2024-07-24 18:21:26.305160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.377 [2024-07-24 18:21:26.316340] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5a4f0) 00:26:33.377 [2024-07-24 18:21:26.316360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:18001 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.377 [2024-07-24 18:21:26.316367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.377 [2024-07-24 18:21:26.328388] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5a4f0) 00:26:33.377 [2024-07-24 18:21:26.328408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:23037 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.378 [2024-07-24 18:21:26.328415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.378 [2024-07-24 18:21:26.336228] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5a4f0) 00:26:33.378 [2024-07-24 18:21:26.336247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:19361 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.378 [2024-07-24 18:21:26.336254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.378 [2024-07-24 18:21:26.348258] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5a4f0) 00:26:33.378 [2024-07-24 18:21:26.348278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:23255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.378 [2024-07-24 18:21:26.348286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.378 [2024-07-24 18:21:26.359449] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5a4f0) 00:26:33.378 [2024-07-24 18:21:26.359469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:13357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.378 [2024-07-24 18:21:26.359477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.378 [2024-07-24 18:21:26.368059] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5a4f0) 
00:26:33.378 [2024-07-24 18:21:26.368079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:2940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.378 [2024-07-24 18:21:26.368087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.378 [2024-07-24 18:21:26.379167] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5a4f0) 00:26:33.378 [2024-07-24 18:21:26.379186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:21541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.378 [2024-07-24 18:21:26.379194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.378 [2024-07-24 18:21:26.389677] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5a4f0) 00:26:33.378 [2024-07-24 18:21:26.389696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:8602 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.378 [2024-07-24 18:21:26.389703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.378 [2024-07-24 18:21:26.398918] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5a4f0) 00:26:33.378 [2024-07-24 18:21:26.398938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:24980 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.378 [2024-07-24 18:21:26.398945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.378 [2024-07-24 18:21:26.406849] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5a4f0) 00:26:33.378 [2024-07-24 18:21:26.406867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:6229 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.378 [2024-07-24 18:21:26.406875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.378 [2024-07-24 18:21:26.417053] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5a4f0) 00:26:33.378 [2024-07-24 18:21:26.417071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:11858 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.378 [2024-07-24 18:21:26.417079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.378 [2024-07-24 18:21:26.425466] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5a4f0) 00:26:33.378 [2024-07-24 18:21:26.425485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:19890 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.378 [2024-07-24 18:21:26.425499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.378 [2024-07-24 18:21:26.434512] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1b5a4f0) 00:26:33.378 [2024-07-24 18:21:26.434531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.378 [2024-07-24 18:21:26.434542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.378 [2024-07-24 18:21:26.444634] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5a4f0) 00:26:33.378 [2024-07-24 18:21:26.444653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:1804 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.378 [2024-07-24 18:21:26.444661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.378 [2024-07-24 18:21:26.455237] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5a4f0) 00:26:33.378 [2024-07-24 18:21:26.455257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:11433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.378 [2024-07-24 18:21:26.455265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.637 [2024-07-24 18:21:26.464073] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5a4f0) 00:26:33.637 [2024-07-24 18:21:26.464092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:2171 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.637 [2024-07-24 18:21:26.464099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.637 [2024-07-24 18:21:26.473782] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5a4f0) 00:26:33.637 [2024-07-24 18:21:26.473801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:19542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.637 [2024-07-24 18:21:26.473809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.637 [2024-07-24 18:21:26.484136] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5a4f0) 00:26:33.637 [2024-07-24 18:21:26.484156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:8019 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.637 [2024-07-24 18:21:26.484164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.637 [2024-07-24 18:21:26.492322] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5a4f0) 00:26:33.637 [2024-07-24 18:21:26.492341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:15450 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.637 [2024-07-24 18:21:26.492349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.637 [2024-07-24 18:21:26.503246] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b5a4f0)
00:26:33.637 [2024-07-24 18:21:26.503265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:15233 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.637 [2024-07-24 18:21:26.503273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... roughly a hundred further record triples of the same shape, 18:21:26.511 through 18:21:27.469: a data digest error on tqpair=(0x1b5a4f0) from nvme_tcp.c:1459, the failing READ (qid:1, len:1), and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, differing only in cid and lba ...]
00:26:34.676
00:26:34.676 Latency(us)
00:26:34.676 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:34.676 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:26:34.676 nvme0n1 : 2.04 25496.15 99.59 0.00 0.00 4917.38 2465.40 44689.31
00:26:34.676 ===================================================================================================================
00:26:34.676 Total : 25496.15 99.59 0.00 0.00 4917.38 2465.40 44689.31
00:26:34.676 0
00:26:34.676 18:21:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:26:34.676 18:21:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:26:34.676 18:21:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:26:34.676 | .driver_specific
00:26:34.676 | .nvme_error
00:26:34.676 | .status_code
00:26:34.676 | .command_transient_transport_error'
00:26:34.676 18:21:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:26:34.676 18:21:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 204 > 0 ))
00:26:34.676 18:21:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3550472
00:26:34.676 18:21:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 3550472 ']'
00:26:34.676 18:21:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 3550472
00:26:34.676 18:21:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:26:34.676 18:21:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:26:34.676 18:21:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3550472
00:26:34.676 18:21:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:26:34.676 18:21:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:26:34.676 18:21:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3550472'
killing process with pid 3550472
00:26:34.676 18:21:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 3550472
Received shutdown signal, test time was about 2.000000 seconds
00:26:34.676
00:26:34.676 Latency(us)
00:26:34.676 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:34.676 ===================================================================================================================
00:26:34.676 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:26:34.676 18:21:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 3550472
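
The "(( 204 > 0 ))" check a few lines up is the pass/fail criterion of this pass: get_transient_errcount reads the per-status-code NVMe error counters that bdev_get_iostat exposes once bdev_nvme_set_options has been given --nvme-error-stat, and the run passes only if at least one COMMAND TRANSIENT TRANSPORT ERROR completion was counted. A minimal standalone sketch of the same query, reusing the rpc.py path, socket, bdev name, and jq filter exactly as they appear in the trace (the surrounding shell is illustrative, not part of the harness):

    # Sketch under the assumption that bdevperf is still serving RPCs on
    # /var/tmp/bperf.sock and exposes a bdev named nvme0n1 (both taken from
    # the trace above).
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    count=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
        jq -r '.bdevs[0]
            | .driver_specific
            | .nvme_error
            | .status_code
            | .command_transient_transport_error')
    # Mirror the harness check: succeed only if at least one transient
    # transport error (an injected digest failure) was counted; this run
    # saw 204 of them.
    (( count > 0 ))
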
00:26:34.676 18:21:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:26:34.676 18:21:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:26:34.676 18:21:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:26:34.676 18:21:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:26:34.676 18:21:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:26:34.676 18:21:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3551172
00:26:34.676 18:21:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3551172 /var/tmp/bperf.sock
00:26:34.676 18:21:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 3551172 ']'
00:26:34.676 18:21:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:26:34.935 18:21:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:26:34.935 18:21:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:26:34.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:26:34.935 18:21:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:26:34.935 18:21:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:26:34.935 18:21:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:34.935 [2024-07-24 18:21:27.956669] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization...
00:26:34.935 [2024-07-24 18:21:27.956718] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3551172 ]
00:26:34.935 I/O size of 131072 is greater than zero copy threshold (65536).
00:26:34.935 Zero copy mechanism will not be used.
00:26:34.935 EAL: No free 2048 kB hugepages reported on node 1
00:26:34.935 [2024-07-24 18:21:28.009318] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:35.192 [2024-07-24 18:21:28.081056] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:26:35.855 18:21:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:26:35.855 18:21:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:26:35.855 18:21:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:35.855 18:21:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:35.855 18:21:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:26:35.855 18:21:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:35.855 18:21:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:35.855 18:21:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:35.855 18:21:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:35.855 18:21:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:36.421 nvme0n1
00:26:36.421 18:21:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
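
At this point the error-injection pass is fully wired up: bdevperf runs as the TCP host with per-status-code NVMe error counters (--nvme-error-stat) and unlimited retries (--bdev-retry-count -1), the controller is attached with data digest enabled (--ddgst), and accel_error_inject_error has been asked to corrupt crc32c results at an interval of 32 operations, so the data digests computed for received payloads stop verifying. Condensed into plain RPC calls, with paths, address, and NQN copied from the trace (that rpc_cmd addresses the NVMe-oF target's default RPC socket while bperf_rpc addresses bdevperf on /var/tmp/bperf.sock is an assumption inferred from the harness, not shown explicitly in the log):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # Host (bdevperf) side: count NVMe errors per status code and retry
    # failed I/O indefinitely instead of failing the bdev.
    "$rpc" -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Host (bdevperf) side: connect to the target with TCP data digest on.
    "$rpc" -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Target side (assumed to be rpc_cmd's default socket): corrupt crc32c
    # results every 32 operations so received data digests stop verifying.
    "$rpc" accel_error_inject_error -o crc32c -t corrupt -i 32

Each corrupted digest then surfaces on the host as the nvme_tcp.c:1459 data digest errors and COMMAND TRANSIENT TRANSPORT ERROR (00/22) completions that follow.
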
00:26:36.421 18:21:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:36.421 18:21:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:36.421 18:21:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:36.421 18:21:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:26:36.421 18:21:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:26:36.421 I/O size of 131072 is greater than zero copy threshold (65536).
00:26:36.421 Zero copy mechanism will not be used.
00:26:36.421 Running I/O for 2 seconds...
00:26:36.421 [2024-07-24 18:21:29.471974] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030)
00:26:36.421 [2024-07-24 18:21:29.472007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.421 [2024-07-24 18:21:29.472017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... 17 further record triples of the same shape, 18:21:29.478 through 18:21:29.570, tqpair=(0xd1b030), qid:1, len:32, differing only in cid, lba, and sqhd ...]
00:26:36.681 [2024-07-24 18:21:29.575673] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030)
00:26:36.681 [2024-07-24 18:21:29.575692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:36.681 [2024-07-24 18:21:29.575700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1
cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.681 [2024-07-24 18:21:29.581113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:36.681 [2024-07-24 18:21:29.581133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.681 [2024-07-24 18:21:29.581141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.681 [2024-07-24 18:21:29.586595] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:36.681 [2024-07-24 18:21:29.586615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.681 [2024-07-24 18:21:29.586628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.681 [2024-07-24 18:21:29.592036] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:36.681 [2024-07-24 18:21:29.592055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.681 [2024-07-24 18:21:29.592063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.681 [2024-07-24 18:21:29.597571] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:36.681 [2024-07-24 18:21:29.597590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.681 [2024-07-24 18:21:29.597597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.681 [2024-07-24 18:21:29.602978] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:36.681 [2024-07-24 18:21:29.602998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.681 [2024-07-24 18:21:29.603006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.681 [2024-07-24 18:21:29.608332] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:36.681 [2024-07-24 18:21:29.608352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.681 [2024-07-24 18:21:29.608359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.681 [2024-07-24 18:21:29.613698] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:36.681 [2024-07-24 18:21:29.613718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.681 [2024-07-24 18:21:29.613725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.681 [2024-07-24 18:21:29.619067] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:36.681 [2024-07-24 18:21:29.619087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.681 [2024-07-24 18:21:29.619094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.681 [2024-07-24 18:21:29.624590] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:36.681 [2024-07-24 18:21:29.624610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.681 [2024-07-24 18:21:29.624618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.681 [2024-07-24 18:21:29.630158] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:36.681 [2024-07-24 18:21:29.630177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.681 [2024-07-24 18:21:29.630184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.681 [2024-07-24 18:21:29.635601] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:36.681 [2024-07-24 18:21:29.635623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.681 [2024-07-24 18:21:29.635631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.681 [2024-07-24 18:21:29.640975] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:36.681 [2024-07-24 18:21:29.640994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.681 [2024-07-24 18:21:29.641002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.681 [2024-07-24 18:21:29.646244] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:36.681 [2024-07-24 18:21:29.646262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.681 [2024-07-24 18:21:29.646270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.681 [2024-07-24 18:21:29.651539] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:36.681 [2024-07-24 18:21:29.651558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.681 [2024-07-24 18:21:29.651566] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.681 [2024-07-24 18:21:29.656794] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:36.681 [2024-07-24 18:21:29.656814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.681 [2024-07-24 18:21:29.656822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.681 [2024-07-24 18:21:29.662085] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:36.681 [2024-07-24 18:21:29.662105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.681 [2024-07-24 18:21:29.662113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.681 [2024-07-24 18:21:29.667455] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:36.681 [2024-07-24 18:21:29.667475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.681 [2024-07-24 18:21:29.667482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.681 [2024-07-24 18:21:29.672695] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:36.681 [2024-07-24 18:21:29.672715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.681 [2024-07-24 18:21:29.672722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.681 [2024-07-24 18:21:29.677949] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:36.681 [2024-07-24 18:21:29.677969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.681 [2024-07-24 18:21:29.677976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.681 [2024-07-24 18:21:29.683469] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:36.681 [2024-07-24 18:21:29.683488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.681 [2024-07-24 18:21:29.683502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.681 [2024-07-24 18:21:29.688949] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:36.681 [2024-07-24 18:21:29.688969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.681 
[2024-07-24 18:21:29.688977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.681 [2024-07-24 18:21:29.694223] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:36.681 [2024-07-24 18:21:29.694243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.682 [2024-07-24 18:21:29.694250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.682 [2024-07-24 18:21:29.699457] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:36.682 [2024-07-24 18:21:29.699478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.682 [2024-07-24 18:21:29.699486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.682 [2024-07-24 18:21:29.704716] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:36.682 [2024-07-24 18:21:29.704736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.682 [2024-07-24 18:21:29.704744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.682 [2024-07-24 18:21:29.710034] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:36.682 [2024-07-24 18:21:29.710053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.682 [2024-07-24 18:21:29.710061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.682 [2024-07-24 18:21:29.715556] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:36.682 [2024-07-24 18:21:29.715575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.682 [2024-07-24 18:21:29.715583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.682 [2024-07-24 18:21:29.721150] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:36.682 [2024-07-24 18:21:29.721170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.682 [2024-07-24 18:21:29.721178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.682 [2024-07-24 18:21:29.726861] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:36.682 [2024-07-24 18:21:29.726881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7488 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:26:36.682 [2024-07-24 18:21:29.726891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.682 [2024-07-24 18:21:29.732462] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:36.682 [2024-07-24 18:21:29.732481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.682 [2024-07-24 18:21:29.732489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.682 [2024-07-24 18:21:29.737909] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:36.682 [2024-07-24 18:21:29.737930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.682 [2024-07-24 18:21:29.737938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.682 [2024-07-24 18:21:29.743521] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:36.682 [2024-07-24 18:21:29.743540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.682 [2024-07-24 18:21:29.743548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.682 [2024-07-24 18:21:29.749214] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:36.682 [2024-07-24 18:21:29.749234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.682 [2024-07-24 18:21:29.749241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.682 [2024-07-24 18:21:29.754793] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:36.682 [2024-07-24 18:21:29.754811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.682 [2024-07-24 18:21:29.754819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.682 [2024-07-24 18:21:29.760254] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:36.682 [2024-07-24 18:21:29.760274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.682 [2024-07-24 18:21:29.760282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.942 [2024-07-24 18:21:29.765869] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:36.942 [2024-07-24 18:21:29.765890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 
nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.942 [2024-07-24 18:21:29.765898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.942 [2024-07-24 18:21:29.771690] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:36.942 [2024-07-24 18:21:29.771710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.942 [2024-07-24 18:21:29.771719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.942 [2024-07-24 18:21:29.777757] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:36.942 [2024-07-24 18:21:29.777781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.942 [2024-07-24 18:21:29.777789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.942 [2024-07-24 18:21:29.783519] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:36.942 [2024-07-24 18:21:29.783539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.942 [2024-07-24 18:21:29.783547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.942 [2024-07-24 18:21:29.789118] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:36.942 [2024-07-24 18:21:29.789138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.942 [2024-07-24 18:21:29.789146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.942 [2024-07-24 18:21:29.794721] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:36.942 [2024-07-24 18:21:29.794742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.942 [2024-07-24 18:21:29.794750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.942 [2024-07-24 18:21:29.800169] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:36.942 [2024-07-24 18:21:29.800190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.942 [2024-07-24 18:21:29.800198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.942 [2024-07-24 18:21:29.805472] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:36.942 [2024-07-24 18:21:29.805498] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.942 [2024-07-24 18:21:29.805506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.942 [2024-07-24 18:21:29.810808] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:36.942 [2024-07-24 18:21:29.810828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.942 [2024-07-24 18:21:29.810836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.942 [2024-07-24 18:21:29.816288] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:36.942 [2024-07-24 18:21:29.816307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.942 [2024-07-24 18:21:29.816315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.942 [2024-07-24 18:21:29.822183] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:36.942 [2024-07-24 18:21:29.822204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.942 [2024-07-24 18:21:29.822211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.942 [2024-07-24 18:21:29.829058] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:36.942 [2024-07-24 18:21:29.829078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.942 [2024-07-24 18:21:29.829086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.943 [2024-07-24 18:21:29.835648] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:36.943 [2024-07-24 18:21:29.835668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.943 [2024-07-24 18:21:29.835675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.943 [2024-07-24 18:21:29.842153] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:36.943 [2024-07-24 18:21:29.842174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.943 [2024-07-24 18:21:29.842181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.943 [2024-07-24 18:21:29.848486] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:36.943 
[2024-07-24 18:21:29.848512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.943 [2024-07-24 18:21:29.848520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.943 [2024-07-24 18:21:29.855018] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:36.943 [2024-07-24 18:21:29.855038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.943 [2024-07-24 18:21:29.855046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.943 [2024-07-24 18:21:29.861661] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:36.943 [2024-07-24 18:21:29.861682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.943 [2024-07-24 18:21:29.861690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.943 [2024-07-24 18:21:29.867575] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:36.943 [2024-07-24 18:21:29.867595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.943 [2024-07-24 18:21:29.867603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.943 [2024-07-24 18:21:29.874142] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:36.943 [2024-07-24 18:21:29.874162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.943 [2024-07-24 18:21:29.874169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.943 [2024-07-24 18:21:29.881117] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:36.943 [2024-07-24 18:21:29.881138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.943 [2024-07-24 18:21:29.881149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.943 [2024-07-24 18:21:29.887593] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:36.943 [2024-07-24 18:21:29.887612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.943 [2024-07-24 18:21:29.887620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.943 [2024-07-24 18:21:29.894246] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0xd1b030) 00:26:36.943 [2024-07-24 18:21:29.894266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.943 [2024-07-24 18:21:29.894274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.943 [2024-07-24 18:21:29.900850] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:36.943 [2024-07-24 18:21:29.900870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.943 [2024-07-24 18:21:29.900878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.943 [2024-07-24 18:21:29.907853] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:36.943 [2024-07-24 18:21:29.907873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.943 [2024-07-24 18:21:29.907881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.943 [2024-07-24 18:21:29.914409] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:36.943 [2024-07-24 18:21:29.914429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.943 [2024-07-24 18:21:29.914437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.943 [2024-07-24 18:21:29.920985] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:36.943 [2024-07-24 18:21:29.921005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.943 [2024-07-24 18:21:29.921013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.943 [2024-07-24 18:21:29.927344] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:36.943 [2024-07-24 18:21:29.927364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.943 [2024-07-24 18:21:29.927371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.943 [2024-07-24 18:21:29.933648] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:36.943 [2024-07-24 18:21:29.933668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.943 [2024-07-24 18:21:29.933676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.943 [2024-07-24 18:21:29.939869] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:36.943 [2024-07-24 18:21:29.939891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.943 [2024-07-24 18:21:29.939898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.943 [2024-07-24 18:21:29.945977] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:36.943 [2024-07-24 18:21:29.945997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.943 [2024-07-24 18:21:29.946005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.943 [2024-07-24 18:21:29.952135] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:36.943 [2024-07-24 18:21:29.952156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.943 [2024-07-24 18:21:29.952164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.943 [2024-07-24 18:21:29.957952] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:36.943 [2024-07-24 18:21:29.957972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.943 [2024-07-24 18:21:29.957980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.943 [2024-07-24 18:21:29.963959] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:36.943 [2024-07-24 18:21:29.963977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.943 [2024-07-24 18:21:29.963985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.943 [2024-07-24 18:21:29.969514] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:36.943 [2024-07-24 18:21:29.969534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.943 [2024-07-24 18:21:29.969541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.943 [2024-07-24 18:21:29.975143] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:36.943 [2024-07-24 18:21:29.975163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.943 [2024-07-24 18:21:29.975171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:26:36.943 [2024-07-24 18:21:29.980970] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:36.943 [2024-07-24 18:21:29.980990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.943 [2024-07-24 18:21:29.981006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.943 [2024-07-24 18:21:29.986606] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:36.943 [2024-07-24 18:21:29.986626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.943 [2024-07-24 18:21:29.986634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.943 [2024-07-24 18:21:29.992486] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:36.943 [2024-07-24 18:21:29.992512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.943 [2024-07-24 18:21:29.992537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.943 [2024-07-24 18:21:29.997907] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:36.943 [2024-07-24 18:21:29.997927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.944 [2024-07-24 18:21:29.997934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.944 [2024-07-24 18:21:30.003178] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:36.944 [2024-07-24 18:21:30.003199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.944 [2024-07-24 18:21:30.003207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.944 [2024-07-24 18:21:30.008643] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:36.944 [2024-07-24 18:21:30.008663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.944 [2024-07-24 18:21:30.008670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.944 [2024-07-24 18:21:30.014152] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:36.944 [2024-07-24 18:21:30.014173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.944 [2024-07-24 18:21:30.014181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.944 [2024-07-24 18:21:30.020288] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:36.944 [2024-07-24 18:21:30.020307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.944 [2024-07-24 18:21:30.020315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.204 [2024-07-24 18:21:30.026248] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.204 [2024-07-24 18:21:30.026268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.204 [2024-07-24 18:21:30.026276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:37.204 [2024-07-24 18:21:30.032971] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.204 [2024-07-24 18:21:30.032996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.204 [2024-07-24 18:21:30.033007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:37.204 [2024-07-24 18:21:30.039276] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.204 [2024-07-24 18:21:30.039297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.204 [2024-07-24 18:21:30.039309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:37.204 [2024-07-24 18:21:30.045360] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.204 [2024-07-24 18:21:30.045381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.204 [2024-07-24 18:21:30.045390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.204 [2024-07-24 18:21:30.051421] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.204 [2024-07-24 18:21:30.051442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.204 [2024-07-24 18:21:30.051450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:37.204 [2024-07-24 18:21:30.057331] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.204 [2024-07-24 18:21:30.057352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.204 [2024-07-24 18:21:30.057360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:37.204 [2024-07-24 18:21:30.063160] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.204 [2024-07-24 18:21:30.063182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.204 [2024-07-24 18:21:30.063190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:37.204 [2024-07-24 18:21:30.068890] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.204 [2024-07-24 18:21:30.068910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.204 [2024-07-24 18:21:30.068917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.204 [2024-07-24 18:21:30.074388] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.204 [2024-07-24 18:21:30.074408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.204 [2024-07-24 18:21:30.074416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:37.204 [2024-07-24 18:21:30.079924] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.204 [2024-07-24 18:21:30.079944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.204 [2024-07-24 18:21:30.079952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:37.204 [2024-07-24 18:21:30.085524] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.204 [2024-07-24 18:21:30.085543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.204 [2024-07-24 18:21:30.085551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:37.204 [2024-07-24 18:21:30.091177] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.204 [2024-07-24 18:21:30.091198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.204 [2024-07-24 18:21:30.091206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.204 [2024-07-24 18:21:30.096752] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.204 [2024-07-24 18:21:30.096772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.204 [2024-07-24 18:21:30.096780] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:37.204 [2024-07-24 18:21:30.102252] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.204 [2024-07-24 18:21:30.102273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.204 [2024-07-24 18:21:30.102280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:37.205 [2024-07-24 18:21:30.108000] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.205 [2024-07-24 18:21:30.108020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.205 [2024-07-24 18:21:30.108028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:37.205 [2024-07-24 18:21:30.113611] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.205 [2024-07-24 18:21:30.113630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.205 [2024-07-24 18:21:30.113638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.205 [2024-07-24 18:21:30.119537] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.205 [2024-07-24 18:21:30.119557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.205 [2024-07-24 18:21:30.119564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:37.205 [2024-07-24 18:21:30.125337] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.205 [2024-07-24 18:21:30.125357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.205 [2024-07-24 18:21:30.125365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:37.205 [2024-07-24 18:21:30.131384] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.205 [2024-07-24 18:21:30.131405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.205 [2024-07-24 18:21:30.131413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:37.205 [2024-07-24 18:21:30.137294] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.205 [2024-07-24 18:21:30.137313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.205 
[2024-07-24 18:21:30.137323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.205 [2024-07-24 18:21:30.142820] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.205 [2024-07-24 18:21:30.142840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.205 [2024-07-24 18:21:30.142848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:37.205 [2024-07-24 18:21:30.149704] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.205 [2024-07-24 18:21:30.149725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.205 [2024-07-24 18:21:30.149732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:37.205 [2024-07-24 18:21:30.156570] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.205 [2024-07-24 18:21:30.156590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.205 [2024-07-24 18:21:30.156598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:37.205 [2024-07-24 18:21:30.163318] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.205 [2024-07-24 18:21:30.163341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.205 [2024-07-24 18:21:30.163348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.205 [2024-07-24 18:21:30.169936] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.205 [2024-07-24 18:21:30.169957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.205 [2024-07-24 18:21:30.169965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:37.205 [2024-07-24 18:21:30.177320] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.205 [2024-07-24 18:21:30.177341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.205 [2024-07-24 18:21:30.177348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:37.205 [2024-07-24 18:21:30.185905] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.205 [2024-07-24 18:21:30.185925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:37.205 [2024-07-24 18:21:30.185933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:37.205 [2024-07-24 18:21:30.193674] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.205 [2024-07-24 18:21:30.193694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.205 [2024-07-24 18:21:30.193702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.205 [2024-07-24 18:21:30.201407] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.205 [2024-07-24 18:21:30.201431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.205 [2024-07-24 18:21:30.201439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:37.205 [2024-07-24 18:21:30.209534] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.205 [2024-07-24 18:21:30.209554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.205 [2024-07-24 18:21:30.209562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:37.205 [2024-07-24 18:21:30.218303] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.205 [2024-07-24 18:21:30.218324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.205 [2024-07-24 18:21:30.218332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:37.205 [2024-07-24 18:21:30.226199] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.205 [2024-07-24 18:21:30.226220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.205 [2024-07-24 18:21:30.226228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.205 [2024-07-24 18:21:30.232972] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.205 [2024-07-24 18:21:30.232992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.205 [2024-07-24 18:21:30.233000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:37.205 [2024-07-24 18:21:30.241901] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.205 [2024-07-24 18:21:30.241923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 
lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.205 [2024-07-24 18:21:30.241931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:37.205 [2024-07-24 18:21:30.250643] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.205 [2024-07-24 18:21:30.250664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.205 [2024-07-24 18:21:30.250672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:37.205 [2024-07-24 18:21:30.258998] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.205 [2024-07-24 18:21:30.259019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.205 [2024-07-24 18:21:30.259027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.205 [2024-07-24 18:21:30.267860] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.205 [2024-07-24 18:21:30.267882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.205 [2024-07-24 18:21:30.267890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:37.205 [2024-07-24 18:21:30.277607] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.205 [2024-07-24 18:21:30.277629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.205 [2024-07-24 18:21:30.277638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:37.205 [2024-07-24 18:21:30.285561] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.205 [2024-07-24 18:21:30.285585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.205 [2024-07-24 18:21:30.285594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:37.465 [2024-07-24 18:21:30.293934] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.465 [2024-07-24 18:21:30.293957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.465 [2024-07-24 18:21:30.293965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.465 [2024-07-24 18:21:30.302987] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.466 [2024-07-24 18:21:30.303010] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:1 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.466 [2024-07-24 18:21:30.303018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:37.466 [2024-07-24 18:21:30.311438] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.466 [2024-07-24 18:21:30.311460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.466 [2024-07-24 18:21:30.311469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:37.466 [2024-07-24 18:21:30.319423] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.466 [2024-07-24 18:21:30.319445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.466 [2024-07-24 18:21:30.319454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:37.466 [2024-07-24 18:21:30.327815] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.466 [2024-07-24 18:21:30.327836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.466 [2024-07-24 18:21:30.327845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.466 [2024-07-24 18:21:30.336701] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.466 [2024-07-24 18:21:30.336722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.466 [2024-07-24 18:21:30.336730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:37.466 [2024-07-24 18:21:30.345619] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.466 [2024-07-24 18:21:30.345641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.466 [2024-07-24 18:21:30.345653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:37.466 [2024-07-24 18:21:30.354841] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.466 [2024-07-24 18:21:30.354862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.466 [2024-07-24 18:21:30.354870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:37.466 [2024-07-24 18:21:30.362957] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.466 [2024-07-24 18:21:30.362979] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.466 [2024-07-24 18:21:30.362988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.466 [2024-07-24 18:21:30.371251] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.466 [2024-07-24 18:21:30.371271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.466 [2024-07-24 18:21:30.371279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:37.466 [2024-07-24 18:21:30.378608] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.466 [2024-07-24 18:21:30.378629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.466 [2024-07-24 18:21:30.378636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:37.466 [2024-07-24 18:21:30.385379] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.466 [2024-07-24 18:21:30.385400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.466 [2024-07-24 18:21:30.385407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:37.466 [2024-07-24 18:21:30.391818] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.466 [2024-07-24 18:21:30.391840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.466 [2024-07-24 18:21:30.391848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.466 [2024-07-24 18:21:30.398188] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.466 [2024-07-24 18:21:30.398208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.466 [2024-07-24 18:21:30.398216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:37.466 [2024-07-24 18:21:30.404590] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.466 [2024-07-24 18:21:30.404611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.466 [2024-07-24 18:21:30.404618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:37.466 [2024-07-24 18:21:30.410997] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.466 
[2024-07-24 18:21:30.411025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.466 [2024-07-24 18:21:30.411034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:37.466 [2024-07-24 18:21:30.417328] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.466 [2024-07-24 18:21:30.417351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.466 [2024-07-24 18:21:30.417358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.466 [2024-07-24 18:21:30.424025] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.466 [2024-07-24 18:21:30.424047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.466 [2024-07-24 18:21:30.424056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:37.466 [2024-07-24 18:21:30.430468] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.466 [2024-07-24 18:21:30.430489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.466 [2024-07-24 18:21:30.430505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:37.466 [2024-07-24 18:21:30.436837] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.466 [2024-07-24 18:21:30.436858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.466 [2024-07-24 18:21:30.436866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:37.466 [2024-07-24 18:21:30.443389] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.466 [2024-07-24 18:21:30.443411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.466 [2024-07-24 18:21:30.443418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.466 [2024-07-24 18:21:30.449527] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.466 [2024-07-24 18:21:30.449548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.466 [2024-07-24 18:21:30.449557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:37.466 [2024-07-24 18:21:30.455797] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xd1b030) 00:26:37.466 [2024-07-24 18:21:30.455817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.466 [2024-07-24 18:21:30.455825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:37.466 [2024-07-24 18:21:30.461817] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.466 [2024-07-24 18:21:30.461837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.466 [2024-07-24 18:21:30.461845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:37.466 [2024-07-24 18:21:30.468265] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.466 [2024-07-24 18:21:30.468287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.466 [2024-07-24 18:21:30.468295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.466 [2024-07-24 18:21:30.474587] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.466 [2024-07-24 18:21:30.474608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.466 [2024-07-24 18:21:30.474616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:37.466 [2024-07-24 18:21:30.480850] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.466 [2024-07-24 18:21:30.480873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.466 [2024-07-24 18:21:30.480881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:37.467 [2024-07-24 18:21:30.487050] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.467 [2024-07-24 18:21:30.487072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.467 [2024-07-24 18:21:30.487080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:37.467 [2024-07-24 18:21:30.492934] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.467 [2024-07-24 18:21:30.492955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.467 [2024-07-24 18:21:30.492963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.467 [2024-07-24 18:21:30.498594] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.467 [2024-07-24 18:21:30.498615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.467 [2024-07-24 18:21:30.498622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:37.467 [2024-07-24 18:21:30.504265] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.467 [2024-07-24 18:21:30.504286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.467 [2024-07-24 18:21:30.504294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:37.467 [2024-07-24 18:21:30.510154] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.467 [2024-07-24 18:21:30.510176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.467 [2024-07-24 18:21:30.510183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:37.467 [2024-07-24 18:21:30.515824] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.467 [2024-07-24 18:21:30.515845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.467 [2024-07-24 18:21:30.515857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.467 [2024-07-24 18:21:30.521758] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.467 [2024-07-24 18:21:30.521779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.467 [2024-07-24 18:21:30.521786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:37.467 [2024-07-24 18:21:30.527719] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.467 [2024-07-24 18:21:30.527741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.467 [2024-07-24 18:21:30.527749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:37.467 [2024-07-24 18:21:30.533629] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.467 [2024-07-24 18:21:30.533650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.467 [2024-07-24 18:21:30.533658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:26:37.467 [2024-07-24 18:21:30.539629] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.467 [2024-07-24 18:21:30.539650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.467 [2024-07-24 18:21:30.539657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.467 [2024-07-24 18:21:30.545507] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.467 [2024-07-24 18:21:30.545528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.467 [2024-07-24 18:21:30.545535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:37.727 [2024-07-24 18:21:30.551550] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.727 [2024-07-24 18:21:30.551572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.727 [2024-07-24 18:21:30.551580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:37.727 [2024-07-24 18:21:30.557574] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.727 [2024-07-24 18:21:30.557594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.727 [2024-07-24 18:21:30.557601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:37.727 [2024-07-24 18:21:30.563453] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.727 [2024-07-24 18:21:30.563474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.727 [2024-07-24 18:21:30.563482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.727 [2024-07-24 18:21:30.569640] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.727 [2024-07-24 18:21:30.569665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.727 [2024-07-24 18:21:30.569673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:37.727 [2024-07-24 18:21:30.575302] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.727 [2024-07-24 18:21:30.575323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.727 [2024-07-24 18:21:30.575331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:37.727 [2024-07-24 18:21:30.581300] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.727 [2024-07-24 18:21:30.581322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.727 [2024-07-24 18:21:30.581330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:37.727 [2024-07-24 18:21:30.587117] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.727 [2024-07-24 18:21:30.587138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.727 [2024-07-24 18:21:30.587145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.727 [2024-07-24 18:21:30.593173] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.727 [2024-07-24 18:21:30.593194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.727 [2024-07-24 18:21:30.593202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:37.727 [2024-07-24 18:21:30.598788] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.727 [2024-07-24 18:21:30.598809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.727 [2024-07-24 18:21:30.598816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:37.727 [2024-07-24 18:21:30.604307] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.727 [2024-07-24 18:21:30.604328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.727 [2024-07-24 18:21:30.604336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:37.727 [2024-07-24 18:21:30.610116] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.727 [2024-07-24 18:21:30.610136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.727 [2024-07-24 18:21:30.610144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.728 [2024-07-24 18:21:30.615618] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.728 [2024-07-24 18:21:30.615638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.728 [2024-07-24 18:21:30.615646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:37.728 [2024-07-24 18:21:30.619230] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.728 [2024-07-24 18:21:30.619250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.728 [2024-07-24 18:21:30.619258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:37.728 [2024-07-24 18:21:30.623325] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.728 [2024-07-24 18:21:30.623346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.728 [2024-07-24 18:21:30.623354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:37.728 [2024-07-24 18:21:30.628812] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.728 [2024-07-24 18:21:30.628833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.728 [2024-07-24 18:21:30.628841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.728 [2024-07-24 18:21:30.634384] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.728 [2024-07-24 18:21:30.634404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.728 [2024-07-24 18:21:30.634411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:37.728 [2024-07-24 18:21:30.639113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.728 [2024-07-24 18:21:30.639134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.728 [2024-07-24 18:21:30.639141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:37.728 [2024-07-24 18:21:30.644393] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.728 [2024-07-24 18:21:30.644413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.728 [2024-07-24 18:21:30.644421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:37.728 [2024-07-24 18:21:30.649645] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.728 [2024-07-24 18:21:30.649665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.728 [2024-07-24 18:21:30.649673] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.728 [2024-07-24 18:21:30.655084] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.728 [2024-07-24 18:21:30.655105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.728 [2024-07-24 18:21:30.655112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:37.728 [2024-07-24 18:21:30.660390] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.728 [2024-07-24 18:21:30.660411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.728 [2024-07-24 18:21:30.660422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:37.728 [2024-07-24 18:21:30.665936] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.728 [2024-07-24 18:21:30.665956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.728 [2024-07-24 18:21:30.665965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:37.728 [2024-07-24 18:21:30.671472] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.728 [2024-07-24 18:21:30.671498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.728 [2024-07-24 18:21:30.671506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.728 [2024-07-24 18:21:30.677034] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.728 [2024-07-24 18:21:30.677055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.728 [2024-07-24 18:21:30.677062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:37.728 [2024-07-24 18:21:30.682466] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.728 [2024-07-24 18:21:30.682486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.728 [2024-07-24 18:21:30.682499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:37.728 [2024-07-24 18:21:30.688361] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.728 [2024-07-24 18:21:30.688382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.728 
[2024-07-24 18:21:30.688389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:37.728 [2024-07-24 18:21:30.694155] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.728 [2024-07-24 18:21:30.694176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.728 [2024-07-24 18:21:30.694183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.728 [2024-07-24 18:21:30.699511] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.728 [2024-07-24 18:21:30.699531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.728 [2024-07-24 18:21:30.699538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:37.728 [2024-07-24 18:21:30.705212] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.728 [2024-07-24 18:21:30.705232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.728 [2024-07-24 18:21:30.705240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:37.728 [2024-07-24 18:21:30.710783] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.728 [2024-07-24 18:21:30.710806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.728 [2024-07-24 18:21:30.710814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:37.728 [2024-07-24 18:21:30.716689] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.728 [2024-07-24 18:21:30.716709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.728 [2024-07-24 18:21:30.716717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.728 [2024-07-24 18:21:30.722736] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.728 [2024-07-24 18:21:30.722756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.728 [2024-07-24 18:21:30.722764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:37.728 [2024-07-24 18:21:30.729314] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.728 [2024-07-24 18:21:30.729334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16768 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:26:37.728 [2024-07-24 18:21:30.729341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:37.728 [2024-07-24 18:21:30.735193] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.728 [2024-07-24 18:21:30.735213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.728 [2024-07-24 18:21:30.735220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:37.728 [2024-07-24 18:21:30.741157] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.728 [2024-07-24 18:21:30.741177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.728 [2024-07-24 18:21:30.741184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.728 [2024-07-24 18:21:30.746789] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.728 [2024-07-24 18:21:30.746808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.728 [2024-07-24 18:21:30.746816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:37.728 [2024-07-24 18:21:30.752192] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.728 [2024-07-24 18:21:30.752212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.728 [2024-07-24 18:21:30.752219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:37.728 [2024-07-24 18:21:30.757602] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.728 [2024-07-24 18:21:30.757622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.728 [2024-07-24 18:21:30.757629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:37.729 [2024-07-24 18:21:30.763731] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.729 [2024-07-24 18:21:30.763751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.729 [2024-07-24 18:21:30.763758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.729 [2024-07-24 18:21:30.769806] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.729 [2024-07-24 18:21:30.769827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 
nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.729 [2024-07-24 18:21:30.769835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:37.729 [2024-07-24 18:21:30.775447] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.729 [2024-07-24 18:21:30.775467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.729 [2024-07-24 18:21:30.775475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:37.729 [2024-07-24 18:21:30.778446] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.729 [2024-07-24 18:21:30.778464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.729 [2024-07-24 18:21:30.778471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:37.729 [2024-07-24 18:21:30.783882] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.729 [2024-07-24 18:21:30.783902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.729 [2024-07-24 18:21:30.783909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.729 [2024-07-24 18:21:30.789440] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.729 [2024-07-24 18:21:30.789459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.729 [2024-07-24 18:21:30.789467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:37.729 [2024-07-24 18:21:30.794984] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.729 [2024-07-24 18:21:30.795003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.729 [2024-07-24 18:21:30.795010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:37.729 [2024-07-24 18:21:30.800419] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.729 [2024-07-24 18:21:30.800439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.729 [2024-07-24 18:21:30.800446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:37.729 [2024-07-24 18:21:30.805960] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.729 [2024-07-24 18:21:30.805980] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.729 [2024-07-24 18:21:30.805991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.989 [2024-07-24 18:21:30.811505] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.989 [2024-07-24 18:21:30.811524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.989 [2024-07-24 18:21:30.811532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:37.989 [2024-07-24 18:21:30.817441] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.989 [2024-07-24 18:21:30.817460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.989 [2024-07-24 18:21:30.817468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:37.989 [2024-07-24 18:21:30.823073] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.990 [2024-07-24 18:21:30.823092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.990 [2024-07-24 18:21:30.823100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:37.990 [2024-07-24 18:21:30.828667] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.990 [2024-07-24 18:21:30.828687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.990 [2024-07-24 18:21:30.828695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.990 [2024-07-24 18:21:30.834433] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.990 [2024-07-24 18:21:30.834452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.990 [2024-07-24 18:21:30.834460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:37.990 [2024-07-24 18:21:30.840007] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.990 [2024-07-24 18:21:30.840027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.990 [2024-07-24 18:21:30.840035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:37.990 [2024-07-24 18:21:30.845401] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.990 
[2024-07-24 18:21:30.845420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.990 [2024-07-24 18:21:30.845428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:37.990 [2024-07-24 18:21:30.850741] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.990 [2024-07-24 18:21:30.850760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.990 [2024-07-24 18:21:30.850768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.990 [2024-07-24 18:21:30.855957] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.990 [2024-07-24 18:21:30.855979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.990 [2024-07-24 18:21:30.855987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:37.990 [2024-07-24 18:21:30.861144] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.990 [2024-07-24 18:21:30.861164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.990 [2024-07-24 18:21:30.861172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:37.990 [2024-07-24 18:21:30.866290] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.990 [2024-07-24 18:21:30.866309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.990 [2024-07-24 18:21:30.866317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:37.990 [2024-07-24 18:21:30.871453] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.990 [2024-07-24 18:21:30.871472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.990 [2024-07-24 18:21:30.871480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.990 [2024-07-24 18:21:30.876692] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.990 [2024-07-24 18:21:30.876712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.990 [2024-07-24 18:21:30.876720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:37.990 [2024-07-24 18:21:30.882004] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0xd1b030) 00:26:37.990 [2024-07-24 18:21:30.882023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.990 [2024-07-24 18:21:30.882031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:37.990 [2024-07-24 18:21:30.887381] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.990 [2024-07-24 18:21:30.887400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.990 [2024-07-24 18:21:30.887408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:37.990 [2024-07-24 18:21:30.892877] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.990 [2024-07-24 18:21:30.892896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.990 [2024-07-24 18:21:30.892903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.990 [2024-07-24 18:21:30.898387] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.990 [2024-07-24 18:21:30.898405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.990 [2024-07-24 18:21:30.898416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:37.990 [2024-07-24 18:21:30.903826] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.990 [2024-07-24 18:21:30.903846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.990 [2024-07-24 18:21:30.903853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:37.990 [2024-07-24 18:21:30.909087] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.990 [2024-07-24 18:21:30.909106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.990 [2024-07-24 18:21:30.909116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:37.990 [2024-07-24 18:21:30.914453] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.990 [2024-07-24 18:21:30.914472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.990 [2024-07-24 18:21:30.914480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.990 [2024-07-24 18:21:30.919703] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.990 [2024-07-24 18:21:30.919722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.990 [2024-07-24 18:21:30.919730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:37.990 [2024-07-24 18:21:30.924963] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.990 [2024-07-24 18:21:30.924981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.990 [2024-07-24 18:21:30.924989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:37.990 [2024-07-24 18:21:30.930226] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.990 [2024-07-24 18:21:30.930244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.990 [2024-07-24 18:21:30.930252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:37.990 [2024-07-24 18:21:30.935499] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.990 [2024-07-24 18:21:30.935517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.990 [2024-07-24 18:21:30.935525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.990 [2024-07-24 18:21:30.940989] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.990 [2024-07-24 18:21:30.941007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.990 [2024-07-24 18:21:30.941015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:37.990 [2024-07-24 18:21:30.946564] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.990 [2024-07-24 18:21:30.946585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.990 [2024-07-24 18:21:30.946593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:37.990 [2024-07-24 18:21:30.952196] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.990 [2024-07-24 18:21:30.952214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.990 [2024-07-24 18:21:30.952221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:26:37.990 [2024-07-24 18:21:30.957749] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.990 [2024-07-24 18:21:30.957767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.990 [2024-07-24 18:21:30.957775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.990 [2024-07-24 18:21:30.963147] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.990 [2024-07-24 18:21:30.963166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.990 [2024-07-24 18:21:30.963174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:37.990 [2024-07-24 18:21:30.968665] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.991 [2024-07-24 18:21:30.968684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.991 [2024-07-24 18:21:30.968692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:37.991 [2024-07-24 18:21:30.974165] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.991 [2024-07-24 18:21:30.974184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.991 [2024-07-24 18:21:30.974191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:37.991 [2024-07-24 18:21:30.979761] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.991 [2024-07-24 18:21:30.979780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.991 [2024-07-24 18:21:30.979787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.991 [2024-07-24 18:21:30.985432] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.991 [2024-07-24 18:21:30.985451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.991 [2024-07-24 18:21:30.985458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:37.991 [2024-07-24 18:21:30.990915] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.991 [2024-07-24 18:21:30.990934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.991 [2024-07-24 18:21:30.990941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:37.991 [2024-07-24 18:21:30.996420] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.991 [2024-07-24 18:21:30.996439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.991 [2024-07-24 18:21:30.996447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:37.991 [2024-07-24 18:21:31.001957] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.991 [2024-07-24 18:21:31.001976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.991 [2024-07-24 18:21:31.001984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.991 [2024-07-24 18:21:31.007428] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.991 [2024-07-24 18:21:31.007447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.991 [2024-07-24 18:21:31.007454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:37.991 [2024-07-24 18:21:31.012877] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.991 [2024-07-24 18:21:31.012896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.991 [2024-07-24 18:21:31.012903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:37.991 [2024-07-24 18:21:31.018245] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.991 [2024-07-24 18:21:31.018264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.991 [2024-07-24 18:21:31.018271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:37.991 [2024-07-24 18:21:31.023677] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.991 [2024-07-24 18:21:31.023697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.991 [2024-07-24 18:21:31.023705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.991 [2024-07-24 18:21:31.029041] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.991 [2024-07-24 18:21:31.029059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.991 [2024-07-24 18:21:31.029067] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:37.991 [2024-07-24 18:21:31.034538] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.991 [2024-07-24 18:21:31.034557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.991 [2024-07-24 18:21:31.034564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:37.991 [2024-07-24 18:21:31.040160] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.991 [2024-07-24 18:21:31.040179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.991 [2024-07-24 18:21:31.040189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:37.991 [2024-07-24 18:21:31.045800] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.991 [2024-07-24 18:21:31.045819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.991 [2024-07-24 18:21:31.045827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.991 [2024-07-24 18:21:31.051351] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.991 [2024-07-24 18:21:31.051369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.991 [2024-07-24 18:21:31.051377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:37.991 [2024-07-24 18:21:31.056796] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.991 [2024-07-24 18:21:31.056815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.991 [2024-07-24 18:21:31.056822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:37.991 [2024-07-24 18:21:31.062233] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.991 [2024-07-24 18:21:31.062252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.991 [2024-07-24 18:21:31.062260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:37.991 [2024-07-24 18:21:31.067721] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:37.991 [2024-07-24 18:21:31.067740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:37.991 [2024-07-24 18:21:31.067747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.251 [2024-07-24 18:21:31.073185] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:38.251 [2024-07-24 18:21:31.073204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.251 [2024-07-24 18:21:31.073211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:38.251 [2024-07-24 18:21:31.078645] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:38.251 [2024-07-24 18:21:31.078663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.251 [2024-07-24 18:21:31.078671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:38.251 [2024-07-24 18:21:31.083991] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:38.251 [2024-07-24 18:21:31.084010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.251 [2024-07-24 18:21:31.084018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:38.251 [2024-07-24 18:21:31.089259] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:38.251 [2024-07-24 18:21:31.089281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.251 [2024-07-24 18:21:31.089289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.251 [2024-07-24 18:21:31.094563] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:38.251 [2024-07-24 18:21:31.094582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.251 [2024-07-24 18:21:31.094589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:38.251 [2024-07-24 18:21:31.099898] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:38.251 [2024-07-24 18:21:31.099917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.251 [2024-07-24 18:21:31.099924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:38.251 [2024-07-24 18:21:31.105194] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:38.251 [2024-07-24 18:21:31.105213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 
lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.251 [2024-07-24 18:21:31.105221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:38.251 [2024-07-24 18:21:31.110432] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:38.251 [2024-07-24 18:21:31.110451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.251 [2024-07-24 18:21:31.110459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.251 [2024-07-24 18:21:31.115637] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:38.251 [2024-07-24 18:21:31.115656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.251 [2024-07-24 18:21:31.115664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:38.251 [2024-07-24 18:21:31.120884] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:38.251 [2024-07-24 18:21:31.120904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.251 [2024-07-24 18:21:31.120911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:38.251 [2024-07-24 18:21:31.126167] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:38.251 [2024-07-24 18:21:31.126186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.251 [2024-07-24 18:21:31.126194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:38.251 [2024-07-24 18:21:31.131421] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:38.252 [2024-07-24 18:21:31.131440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.252 [2024-07-24 18:21:31.131448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.252 [2024-07-24 18:21:31.137591] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:38.252 [2024-07-24 18:21:31.137610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.252 [2024-07-24 18:21:31.137618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:38.252 [2024-07-24 18:21:31.144823] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:38.252 [2024-07-24 18:21:31.144843] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.252 [2024-07-24 18:21:31.144851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:38.252 [2024-07-24 18:21:31.151692] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:38.252 [2024-07-24 18:21:31.151711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.252 [2024-07-24 18:21:31.151719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:38.252 [2024-07-24 18:21:31.158251] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:38.252 [2024-07-24 18:21:31.158270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.252 [2024-07-24 18:21:31.158277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.252 [2024-07-24 18:21:31.164631] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:38.252 [2024-07-24 18:21:31.164651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.252 [2024-07-24 18:21:31.164658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:38.252 [2024-07-24 18:21:31.172349] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:38.252 [2024-07-24 18:21:31.172370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.252 [2024-07-24 18:21:31.172378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:38.252 [2024-07-24 18:21:31.179877] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:38.252 [2024-07-24 18:21:31.179898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.252 [2024-07-24 18:21:31.179906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:38.252 [2024-07-24 18:21:31.187568] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:38.252 [2024-07-24 18:21:31.187588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.252 [2024-07-24 18:21:31.187596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.252 [2024-07-24 18:21:31.194684] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 
00:26:38.252 [2024-07-24 18:21:31.194704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.252 [2024-07-24 18:21:31.194716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:38.252 [2024-07-24 18:21:31.201926] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:38.252 [2024-07-24 18:21:31.201947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.252 [2024-07-24 18:21:31.201955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:38.252 [2024-07-24 18:21:31.207663] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:38.252 [2024-07-24 18:21:31.207684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.252 [2024-07-24 18:21:31.207691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:38.252 [2024-07-24 18:21:31.214217] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:38.252 [2024-07-24 18:21:31.214238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.252 [2024-07-24 18:21:31.214245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.252 [2024-07-24 18:21:31.221776] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:38.252 [2024-07-24 18:21:31.221797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.252 [2024-07-24 18:21:31.221805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:38.252 [2024-07-24 18:21:31.228446] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:38.252 [2024-07-24 18:21:31.228467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.252 [2024-07-24 18:21:31.228475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:38.252 [2024-07-24 18:21:31.235977] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:38.252 [2024-07-24 18:21:31.235997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.252 [2024-07-24 18:21:31.236005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:38.252 [2024-07-24 18:21:31.243814] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xd1b030) 00:26:38.252 [2024-07-24 18:21:31.243836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.252 [2024-07-24 18:21:31.243843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.252 [2024-07-24 18:21:31.251520] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:38.252 [2024-07-24 18:21:31.251540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.252 [2024-07-24 18:21:31.251548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:38.252 [2024-07-24 18:21:31.258560] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:38.252 [2024-07-24 18:21:31.258581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.252 [2024-07-24 18:21:31.258588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:38.252 [2024-07-24 18:21:31.265523] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:38.252 [2024-07-24 18:21:31.265542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.252 [2024-07-24 18:21:31.265551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:38.252 [2024-07-24 18:21:31.271862] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:38.252 [2024-07-24 18:21:31.271883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.252 [2024-07-24 18:21:31.271890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.252 [2024-07-24 18:21:31.277966] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:38.252 [2024-07-24 18:21:31.277986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.252 [2024-07-24 18:21:31.277994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:38.252 [2024-07-24 18:21:31.283842] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:38.252 [2024-07-24 18:21:31.283862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.252 [2024-07-24 18:21:31.283870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:38.252 [2024-07-24 18:21:31.289709] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:38.252 [2024-07-24 18:21:31.289728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.252 [2024-07-24 18:21:31.289736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:38.252 [2024-07-24 18:21:31.295553] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:38.252 [2024-07-24 18:21:31.295573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.252 [2024-07-24 18:21:31.295580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.252 [2024-07-24 18:21:31.301334] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:38.252 [2024-07-24 18:21:31.301354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.252 [2024-07-24 18:21:31.301362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:38.252 [2024-07-24 18:21:31.307021] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:38.252 [2024-07-24 18:21:31.307041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.252 [2024-07-24 18:21:31.307052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:38.253 [2024-07-24 18:21:31.312708] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:38.253 [2024-07-24 18:21:31.312727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.253 [2024-07-24 18:21:31.312735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:38.253 [2024-07-24 18:21:31.318367] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:38.253 [2024-07-24 18:21:31.318387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.253 [2024-07-24 18:21:31.318395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.253 [2024-07-24 18:21:31.324040] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:38.253 [2024-07-24 18:21:31.324060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.253 [2024-07-24 18:21:31.324067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
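Every failure in this burst follows the same three-record shape: nvme_tcp.c:1459 reports that the CRC32C data digest computed over a received payload does not match the digest carried in the PDU, nvme_qpair.c: 243 prints the READ command the payload belonged to, and nvme_qpair.c: 474 shows that command being completed with COMMAND TRANSIENT TRANSPORT ERROR, i.e. NVMe generic status 0x22 (the "00/22" in each record). dnr:0 means the do-not-retry bit is clear, so every one of these completions is retryable and the bdev layer's retry policy (set via bdev_nvme_set_options in this test) decides what happens next. When eyeballing a capture like this, a rough tally of the injected failures can be taken with a one-liner such as the following (illustrative only; the capture file name is hypothetical):

    # count digest-induced transient-transport-error completions in a saved capture
    grep -c 'COMMAND TRANSIENT TRANSPORT ERROR' bperf_randread.log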
00:26:38.253 [2024-07-24 18:21:31.329794] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:38.253 [2024-07-24 18:21:31.329814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.253 [2024-07-24 18:21:31.329822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:38.512 [2024-07-24 18:21:31.335339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:38.512 [2024-07-24 18:21:31.335359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.512 [2024-07-24 18:21:31.335367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:38.512 [2024-07-24 18:21:31.340951] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:38.512 [2024-07-24 18:21:31.340971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.512 [2024-07-24 18:21:31.340979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.512 [2024-07-24 18:21:31.346644] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:38.512 [2024-07-24 18:21:31.346664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.512 [2024-07-24 18:21:31.346672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:38.512 [2024-07-24 18:21:31.352203] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:38.512 [2024-07-24 18:21:31.352224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.512 [2024-07-24 18:21:31.352231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:38.512 [2024-07-24 18:21:31.357681] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:38.513 [2024-07-24 18:21:31.357705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.513 [2024-07-24 18:21:31.357712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:38.513 [2024-07-24 18:21:31.363215] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:38.513 [2024-07-24 18:21:31.363235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.513 [2024-07-24 18:21:31.363242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.513 [2024-07-24 18:21:31.368749] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:38.513 [2024-07-24 18:21:31.368769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.513 [2024-07-24 18:21:31.368776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:38.513 [2024-07-24 18:21:31.374284] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:38.513 [2024-07-24 18:21:31.374304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.513 [2024-07-24 18:21:31.374312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:38.513 [2024-07-24 18:21:31.379821] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:38.513 [2024-07-24 18:21:31.379840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.513 [2024-07-24 18:21:31.379847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:38.513 [2024-07-24 18:21:31.385142] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:38.513 [2024-07-24 18:21:31.385162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.513 [2024-07-24 18:21:31.385169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.513 [2024-07-24 18:21:31.390453] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:38.513 [2024-07-24 18:21:31.390473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.513 [2024-07-24 18:21:31.390481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:38.513 [2024-07-24 18:21:31.395783] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:38.513 [2024-07-24 18:21:31.395803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.513 [2024-07-24 18:21:31.395810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:38.513 [2024-07-24 18:21:31.401159] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:38.513 [2024-07-24 18:21:31.401179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.513 [2024-07-24 18:21:31.401186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:38.513 [2024-07-24 18:21:31.406649] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:38.513 [2024-07-24 18:21:31.406669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.513 [2024-07-24 18:21:31.406677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.513 [2024-07-24 18:21:31.412182] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:38.513 [2024-07-24 18:21:31.412202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.513 [2024-07-24 18:21:31.412210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:38.513 [2024-07-24 18:21:31.417659] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:38.513 [2024-07-24 18:21:31.417679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.513 [2024-07-24 18:21:31.417687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:38.513 [2024-07-24 18:21:31.422983] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:38.513 [2024-07-24 18:21:31.423003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.513 [2024-07-24 18:21:31.423010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:38.513 [2024-07-24 18:21:31.428261] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:38.513 [2024-07-24 18:21:31.428281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.513 [2024-07-24 18:21:31.428288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.513 [2024-07-24 18:21:31.433576] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:38.513 [2024-07-24 18:21:31.433596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.513 [2024-07-24 18:21:31.433604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:38.513 [2024-07-24 18:21:31.438809] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030) 00:26:38.513 [2024-07-24 18:21:31.438831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.513 [2024-07-24 18:21:31.438838] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:38.513 [2024-07-24 18:21:31.444082] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030)
00:26:38.513 [2024-07-24 18:21:31.444103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:38.513 [2024-07-24 18:21:31.444113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:38.513 [2024-07-24 18:21:31.449448] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030)
00:26:38.513 [2024-07-24 18:21:31.449468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:38.513 [2024-07-24 18:21:31.449479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:38.513 [2024-07-24 18:21:31.454777] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030)
00:26:38.513 [2024-07-24 18:21:31.454798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:38.513 [2024-07-24 18:21:31.454806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:38.513 [2024-07-24 18:21:31.460045] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030)
00:26:38.513 [2024-07-24 18:21:31.460064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:38.513 [2024-07-24 18:21:31.460072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:38.513 [2024-07-24 18:21:31.465254] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd1b030)
00:26:38.513 [2024-07-24 18:21:31.465273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:38.513 [2024-07-24 18:21:31.465281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:38.513
00:26:38.513                                              Latency(us)
00:26:38.513 Device Information : runtime(s)      IOPS     MiB/s    Fail/s     TO/s   Average       min       max
00:26:38.513 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:26:38.513 nvme0n1            :       2.00   5194.67    649.33      0.00     0.00   3076.92    655.36   9611.95
00:26:38.513 ===================================================================================================================
00:26:38.513 Total              :              5194.67    649.33      0.00     0.00   3076.92    655.36   9611.95
00:26:38.513 0
00:26:38.513 18:21:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
18:21:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
18:21:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:26:38.513 | .driver_specific
00:26:38.513 | .nvme_error
00:26:38.513 | .status_code
00:26:38.513 | .command_transient_transport_error'
00:26:38.513 18:21:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:26:38.773 18:21:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 335 > 0 ))
00:26:38.773 18:21:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3551172
00:26:38.773 18:21:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 3551172 ']'
00:26:38.773 18:21:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 3551172
00:26:38.773 18:21:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:26:38.773 18:21:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:26:38.773 18:21:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3551172
00:26:38.773 18:21:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:26:38.773 18:21:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:26:38.773 18:21:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3551172'
00:26:38.773 killing process with pid 3551172
18:21:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 3551172
Received shutdown signal, test time was about 2.000000 seconds
00:26:38.773
00:26:38.773                                              Latency(us)
00:26:38.773 Device Information : runtime(s)      IOPS     MiB/s    Fail/s     TO/s   Average       min       max
00:26:38.773 ===================================================================================================================
00:26:38.773 Total              :                 0.00      0.00      0.00     0.00      0.00      0.00      0.00
00:26:38.773 18:21:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 3551172
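That closes out the randread error pass, and the trace just above shows the whole verification in miniature: digest.sh queries the I/O statistics that bdevperf kept for nvme0n1 over the bperf.sock RPC socket, digs the transient-transport-error counter out of the JSON with jq, checks that it is positive (335 digest errors were counted here), and only then kills the bdevperf process. Written out as a single standalone pipeline, the same check is (a sketch built only from the commands visible in the trace):

    # read back the accumulated COMMAND TRANSIENT TRANSPORT ERROR count for nvme0n1
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'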
00:26:38.773 18:21:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:26:39.032 18:21:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:26:39.032 18:21:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:26:39.032 18:21:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:26:39.032 18:21:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:26:39.032 18:21:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3551746
00:26:39.032 18:21:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3551746 /var/tmp/bperf.sock
00:26:39.032 18:21:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:26:39.032 18:21:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 3551746 ']'
00:26:39.032 18:21:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:26:39.032 18:21:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:26:39.032 18:21:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:26:39.032 18:21:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:26:39.032 18:21:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:39.032 [2024-07-24 18:21:31.936147] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization...
00:26:39.032 [2024-07-24 18:21:31.936197] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3551746 ]
00:26:39.032 EAL: No free 2048 kB hugepages reported on node 1
00:26:39.032 [2024-07-24 18:21:31.991972] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:39.032 [2024-07-24 18:21:32.062666] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:26:39.966 18:21:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:26:39.966 18:21:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:26:39.966 18:21:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:39.966 18:21:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:39.966 18:21:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:26:39.966 18:21:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:39.966 18:21:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:39.966 18:21:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:39.966 18:21:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:39.966 18:21:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:40.226 nvme0n1
00:26:40.226 18:21:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
18:21:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
18:21:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:40.226 18:21:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
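Setup for the write-direction pass is now complete and mirrors the read pass step for step: launch bdevperf with -w randwrite -o 4096 -q 128 -t 2 listening on /var/tmp/bperf.sock, enable per-controller NVMe error statistics with an unbounded bdev retry count, clear any stale injection, attach the controller with the data digest enabled (--ddgst), and only then arm CRC32C corruption in the accel layer. Condensed into plain RPC calls, the sequence is (a sketch of the calls visible in the trace; pointing the inject call at bperf.sock is an assumption, since digest.sh issues it through rpc_cmd and the harness picks that socket):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    $RPC -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # arm the injection: corrupt crc32c results so computed data digests stop matching (socket assumed)
    $RPC -s /var/tmp/bperf.sock accel_error_inject_error -o crc32c -t corrupt -i 256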
18:21:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
18:21:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:26:40.226 Running I/O for 2 seconds...
00:26:40.226 [2024-07-24 18:21:33.290163] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78
00:26:40.226 [2024-07-24 18:21:33.290352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:16040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:40.226 [2024-07-24 18:21:33.290382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:26:40.226 [2024-07-24 18:21:33.299667] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78
00:26:40.226 [2024-07-24 18:21:33.299829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:40.226 [2024-07-24 18:21:33.299850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:26:40.483 [2024-07-24 18:21:33.309395] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78
00:26:40.483 [2024-07-24 18:21:33.309569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:40.483 [2024-07-24 18:21:33.309588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:26:40.483 [2024-07-24 18:21:33.318861] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78
00:26:40.483 [2024-07-24 18:21:33.319020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:19810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:40.483 [2024-07-24 18:21:33.319038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:26:40.483 [2024-07-24 18:21:33.328156] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78
00:26:40.483 [2024-07-24 18:21:33.328317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:40.483 [2024-07-24 18:21:33.328334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:26:40.483 [2024-07-24 18:21:33.337458] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78
00:26:40.483 [2024-07-24 18:21:33.337640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:40.483 [2024-07-24 18:21:33.337662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR
(00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:40.483 [2024-07-24 18:21:33.346764] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:40.484 [2024-07-24 18:21:33.346919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:21955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.484 [2024-07-24 18:21:33.346938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:40.484 [2024-07-24 18:21:33.355981] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:40.484 [2024-07-24 18:21:33.356138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.484 [2024-07-24 18:21:33.356155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:40.484 [2024-07-24 18:21:33.365196] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:40.484 [2024-07-24 18:21:33.365364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.484 [2024-07-24 18:21:33.365381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:40.484 [2024-07-24 18:21:33.374576] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:40.484 [2024-07-24 18:21:33.374751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:14568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.484 [2024-07-24 18:21:33.374769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:40.484 [2024-07-24 18:21:33.383856] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:40.484 [2024-07-24 18:21:33.384011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.484 [2024-07-24 18:21:33.384028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:40.484 [2024-07-24 18:21:33.393150] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:40.484 [2024-07-24 18:21:33.393326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.484 [2024-07-24 18:21:33.393344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:40.484 [2024-07-24 18:21:33.402456] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:40.484 [2024-07-24 18:21:33.402635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:14084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.484 [2024-07-24 18:21:33.402653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:40.484 [2024-07-24 18:21:33.411795] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:40.484 [2024-07-24 18:21:33.411950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.484 [2024-07-24 18:21:33.411966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:40.484 [2024-07-24 18:21:33.421079] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:40.484 [2024-07-24 18:21:33.421242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1764 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.484 [2024-07-24 18:21:33.421260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:40.484 [2024-07-24 18:21:33.430416] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:40.484 [2024-07-24 18:21:33.430596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:13130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.484 [2024-07-24 18:21:33.430613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:40.484 [2024-07-24 18:21:33.439701] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:40.484 [2024-07-24 18:21:33.439875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.484 [2024-07-24 18:21:33.439892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:40.484 [2024-07-24 18:21:33.448956] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:40.484 [2024-07-24 18:21:33.449129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.484 [2024-07-24 18:21:33.449146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:40.484 [2024-07-24 18:21:33.458211] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:40.484 [2024-07-24 18:21:33.458385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:11157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.484 [2024-07-24 18:21:33.458401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:40.484 [2024-07-24 18:21:33.467502] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:40.484 [2024-07-24 18:21:33.467657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.484 [2024-07-24 18:21:33.467675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:40.484 [2024-07-24 18:21:33.476785] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:40.484 [2024-07-24 18:21:33.476942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:96 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.484 [2024-07-24 18:21:33.476958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:40.484 [2024-07-24 18:21:33.485997] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:40.484 [2024-07-24 18:21:33.486153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:11109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.484 [2024-07-24 18:21:33.486169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:40.484 [2024-07-24 18:21:33.495253] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:40.484 [2024-07-24 18:21:33.495424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8338 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.484 [2024-07-24 18:21:33.495441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:40.484 [2024-07-24 18:21:33.504526] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:40.484 [2024-07-24 18:21:33.504684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.484 [2024-07-24 18:21:33.504700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:40.484 [2024-07-24 18:21:33.513736] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:40.484 [2024-07-24 18:21:33.513890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:23781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.484 [2024-07-24 18:21:33.513906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:40.484 [2024-07-24 18:21:33.522991] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:40.484 [2024-07-24 18:21:33.523165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:602 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.484 [2024-07-24 18:21:33.523182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:40.484 [2024-07-24 18:21:33.532251] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:40.484 [2024-07-24 18:21:33.532422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.484 [2024-07-24 18:21:33.532439] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:40.484 [2024-07-24 18:21:33.541540] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:40.484 [2024-07-24 18:21:33.541698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:22912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.484 [2024-07-24 18:21:33.541715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:40.484 [2024-07-24 18:21:33.550955] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:40.484 [2024-07-24 18:21:33.551122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.484 [2024-07-24 18:21:33.551139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:40.484 [2024-07-24 18:21:33.560168] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:40.484 [2024-07-24 18:21:33.560322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.484 [2024-07-24 18:21:33.560338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:40.742 [2024-07-24 18:21:33.569846] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:40.742 [2024-07-24 18:21:33.570007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:22203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.742 [2024-07-24 18:21:33.570023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:40.742 [2024-07-24 18:21:33.579168] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:40.742 [2024-07-24 18:21:33.579324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.742 [2024-07-24 18:21:33.579343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:40.742 [2024-07-24 18:21:33.588409] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:40.742 [2024-07-24 18:21:33.588589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.742 [2024-07-24 18:21:33.588606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:40.742 [2024-07-24 18:21:33.597718] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:40.742 [2024-07-24 18:21:33.597872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:13030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.742 [2024-07-24 18:21:33.597889] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:40.742 [2024-07-24 18:21:33.606968] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:40.742 [2024-07-24 18:21:33.607122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.742 [2024-07-24 18:21:33.607139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:40.742 [2024-07-24 18:21:33.616260] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:40.742 [2024-07-24 18:21:33.616415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.742 [2024-07-24 18:21:33.616431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:40.742 [2024-07-24 18:21:33.625505] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:40.742 [2024-07-24 18:21:33.625680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:4167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.742 [2024-07-24 18:21:33.625697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:40.742 [2024-07-24 18:21:33.634817] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:40.742 [2024-07-24 18:21:33.634972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.742 [2024-07-24 18:21:33.634989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:40.742 [2024-07-24 18:21:33.643978] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:40.742 [2024-07-24 18:21:33.644135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11338 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.742 [2024-07-24 18:21:33.644153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:40.742 [2024-07-24 18:21:33.653254] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:40.742 [2024-07-24 18:21:33.653427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:24684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.742 [2024-07-24 18:21:33.653444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:40.743 [2024-07-24 18:21:33.662471] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:40.743 [2024-07-24 18:21:33.662637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19675 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.743 [2024-07-24 18:21:33.662654] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:40.743 [2024-07-24 18:21:33.671703] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:40.743 [2024-07-24 18:21:33.671858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.743 [2024-07-24 18:21:33.671874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:40.743 [2024-07-24 18:21:33.680940] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:40.743 [2024-07-24 18:21:33.681111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:11308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.743 [2024-07-24 18:21:33.681128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:40.743 [2024-07-24 18:21:33.690176] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:40.743 [2024-07-24 18:21:33.690330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.743 [2024-07-24 18:21:33.690347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:40.743 [2024-07-24 18:21:33.699368] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:40.743 [2024-07-24 18:21:33.699521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.743 [2024-07-24 18:21:33.699538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:40.743 [2024-07-24 18:21:33.708622] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:40.743 [2024-07-24 18:21:33.708797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:12298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.743 [2024-07-24 18:21:33.708814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:40.743 [2024-07-24 18:21:33.717915] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:40.743 [2024-07-24 18:21:33.718088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.743 [2024-07-24 18:21:33.718105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:40.743 [2024-07-24 18:21:33.727163] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:40.743 [2024-07-24 18:21:33.727317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:14647 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.743 [2024-07-24 
18:21:33.727333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:40.743 [2024-07-24 18:21:33.736362] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:40.743 [2024-07-24 18:21:33.736516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.743 [2024-07-24 18:21:33.736532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:40.743 [2024-07-24 18:21:33.745602] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:40.743 [2024-07-24 18:21:33.745777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.743 [2024-07-24 18:21:33.745794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:40.743 [2024-07-24 18:21:33.754856] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:40.743 [2024-07-24 18:21:33.755011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.743 [2024-07-24 18:21:33.755027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:40.743 [2024-07-24 18:21:33.764053] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:40.743 [2024-07-24 18:21:33.764205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:18790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.743 [2024-07-24 18:21:33.764221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:40.743 [2024-07-24 18:21:33.773255] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:40.743 [2024-07-24 18:21:33.773425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.743 [2024-07-24 18:21:33.773441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:40.743 [2024-07-24 18:21:33.782540] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:40.743 [2024-07-24 18:21:33.782713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.743 [2024-07-24 18:21:33.782730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:40.743 [2024-07-24 18:21:33.791755] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:40.743 [2024-07-24 18:21:33.791927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:12362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.743 
[2024-07-24 18:21:33.791944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:40.743 [2024-07-24 18:21:33.801192] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:40.743 [2024-07-24 18:21:33.801346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.743 [2024-07-24 18:21:33.801362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:40.743 [2024-07-24 18:21:33.810457] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:40.743 [2024-07-24 18:21:33.810637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.743 [2024-07-24 18:21:33.810654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:40.743 [2024-07-24 18:21:33.819686] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:40.743 [2024-07-24 18:21:33.819840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:6500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.743 [2024-07-24 18:21:33.819860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:41.002 [2024-07-24 18:21:33.829293] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:41.002 [2024-07-24 18:21:33.829455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.002 [2024-07-24 18:21:33.829471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:41.002 [2024-07-24 18:21:33.838588] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:41.002 [2024-07-24 18:21:33.838769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.002 [2024-07-24 18:21:33.838785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:41.002 [2024-07-24 18:21:33.847862] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:41.002 [2024-07-24 18:21:33.848015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:23984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.002 [2024-07-24 18:21:33.848031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:41.002 [2024-07-24 18:21:33.857062] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:41.002 [2024-07-24 18:21:33.857217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:41.002 [2024-07-24 18:21:33.857233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:41.002 [2024-07-24 18:21:33.866304] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:41.002 [2024-07-24 18:21:33.866476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.002 [2024-07-24 18:21:33.866497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:41.002 [2024-07-24 18:21:33.875545] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:41.002 [2024-07-24 18:21:33.875698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:15543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.002 [2024-07-24 18:21:33.875714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:41.002 [2024-07-24 18:21:33.884753] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:41.002 [2024-07-24 18:21:33.884908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.002 [2024-07-24 18:21:33.884923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:41.002 [2024-07-24 18:21:33.894000] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:41.002 [2024-07-24 18:21:33.894173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.002 [2024-07-24 18:21:33.894189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:41.002 [2024-07-24 18:21:33.903448] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:41.002 [2024-07-24 18:21:33.903611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:11499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.002 [2024-07-24 18:21:33.903631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:41.002 [2024-07-24 18:21:33.912659] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:41.002 [2024-07-24 18:21:33.912813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3085 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.002 [2024-07-24 18:21:33.912830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:41.002 [2024-07-24 18:21:33.921894] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:41.003 [2024-07-24 18:21:33.922067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3753 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:26:41.003 [2024-07-24 18:21:33.922083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:41.003 [2024-07-24 18:21:33.931190] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:41.003 [2024-07-24 18:21:33.931344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.003 [2024-07-24 18:21:33.931360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:41.003 [2024-07-24 18:21:33.940394] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:41.003 [2024-07-24 18:21:33.940571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.003 [2024-07-24 18:21:33.940588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:41.003 [2024-07-24 18:21:33.949672] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:41.003 [2024-07-24 18:21:33.949845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12034 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.003 [2024-07-24 18:21:33.949861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:41.003 [2024-07-24 18:21:33.958916] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:41.003 [2024-07-24 18:21:33.959069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:10209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.003 [2024-07-24 18:21:33.959085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:41.003 [2024-07-24 18:21:33.968142] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:41.003 [2024-07-24 18:21:33.968294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.003 [2024-07-24 18:21:33.968311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:41.003 [2024-07-24 18:21:33.977364] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:41.003 [2024-07-24 18:21:33.977518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12956 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.003 [2024-07-24 18:21:33.977551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:41.003 [2024-07-24 18:21:33.986616] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:41.003 [2024-07-24 18:21:33.986802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:22422 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:26:41.003 [2024-07-24 18:21:33.986818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:41.003 [2024-07-24 18:21:33.995868] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:41.003 [2024-07-24 18:21:33.996022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.003 [2024-07-24 18:21:33.996038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:41.003 [2024-07-24 18:21:34.005065] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:41.003 [2024-07-24 18:21:34.005240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.003 [2024-07-24 18:21:34.005258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:41.003 [2024-07-24 18:21:34.014333] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:41.003 [2024-07-24 18:21:34.014508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:5970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.003 [2024-07-24 18:21:34.014525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:41.003 [2024-07-24 18:21:34.023565] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:41.003 [2024-07-24 18:21:34.023723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.003 [2024-07-24 18:21:34.023740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:41.003 [2024-07-24 18:21:34.032874] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:41.003 [2024-07-24 18:21:34.033029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.003 [2024-07-24 18:21:34.033045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:41.003 [2024-07-24 18:21:34.042098] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:41.003 [2024-07-24 18:21:34.042272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.003 [2024-07-24 18:21:34.042289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:41.003 [2024-07-24 18:21:34.051560] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:41.003 [2024-07-24 18:21:34.051717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8935 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.003 [2024-07-24 18:21:34.051733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:41.003 [2024-07-24 18:21:34.060869] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:41.003 [2024-07-24 18:21:34.061024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.003 [2024-07-24 18:21:34.061041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:41.003 [2024-07-24 18:21:34.070260] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:41.003 [2024-07-24 18:21:34.070433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:23524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.003 [2024-07-24 18:21:34.070450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:41.003 [2024-07-24 18:21:34.079532] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:41.003 [2024-07-24 18:21:34.079687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.003 [2024-07-24 18:21:34.079704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:41.262 [2024-07-24 18:21:34.089153] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:41.262 [2024-07-24 18:21:34.089314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.262 [2024-07-24 18:21:34.089331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:41.262 [2024-07-24 18:21:34.098443] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:41.262 [2024-07-24 18:21:34.098624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:16941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.262 [2024-07-24 18:21:34.098642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:41.262 [2024-07-24 18:21:34.107689] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:41.262 [2024-07-24 18:21:34.107844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6578 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.262 [2024-07-24 18:21:34.107860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:41.262 [2024-07-24 18:21:34.116889] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:41.262 [2024-07-24 18:21:34.117056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5236 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.262 [2024-07-24 18:21:34.117072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:41.262 [2024-07-24 18:21:34.126125] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:41.262 [2024-07-24 18:21:34.126298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:18996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.262 [2024-07-24 18:21:34.126315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:41.262 [2024-07-24 18:21:34.135533] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:41.262 [2024-07-24 18:21:34.135707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.262 [2024-07-24 18:21:34.135723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:41.262 [2024-07-24 18:21:34.145016] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:41.262 [2024-07-24 18:21:34.145189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.262 [2024-07-24 18:21:34.145209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:41.262 [2024-07-24 18:21:34.154294] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:41.262 [2024-07-24 18:21:34.154465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:13151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.262 [2024-07-24 18:21:34.154482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:41.262 [2024-07-24 18:21:34.163549] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:41.262 [2024-07-24 18:21:34.163706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.262 [2024-07-24 18:21:34.163722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:41.262 [2024-07-24 18:21:34.172831] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:41.262 [2024-07-24 18:21:34.172984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.262 [2024-07-24 18:21:34.173001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:41.262 [2024-07-24 18:21:34.182079] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:41.262 [2024-07-24 18:21:34.182252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 
nsid:1 lba:16871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.262 [2024-07-24 18:21:34.182269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:41.262 [2024-07-24 18:21:34.191304] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:41.262 [2024-07-24 18:21:34.191459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7987 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.262 [2024-07-24 18:21:34.191475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:41.263 [2024-07-24 18:21:34.200597] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:41.263 [2024-07-24 18:21:34.200752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.263 [2024-07-24 18:21:34.200769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:41.263 [2024-07-24 18:21:34.209837] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:41.263 [2024-07-24 18:21:34.210009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:18838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.263 [2024-07-24 18:21:34.210025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:41.263 [2024-07-24 18:21:34.219079] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:41.263 [2024-07-24 18:21:34.219235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.263 [2024-07-24 18:21:34.219251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:41.263 [2024-07-24 18:21:34.228313] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:41.263 [2024-07-24 18:21:34.228473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.263 [2024-07-24 18:21:34.228488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:41.263 [2024-07-24 18:21:34.237586] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:41.263 [2024-07-24 18:21:34.237745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:2998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.263 [2024-07-24 18:21:34.237761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:41.263 [2024-07-24 18:21:34.246902] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:41.263 [2024-07-24 18:21:34.247058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:9 nsid:1 lba:3182 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.263 [2024-07-24 18:21:34.247075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:41.263 [2024-07-24 18:21:34.256101] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:41.263 [2024-07-24 18:21:34.256257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.263 [2024-07-24 18:21:34.256274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:41.263 [2024-07-24 18:21:34.265330] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:41.263 [2024-07-24 18:21:34.265487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:19438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.263 [2024-07-24 18:21:34.265507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:41.263 [2024-07-24 18:21:34.274576] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:41.263 [2024-07-24 18:21:34.274759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.263 [2024-07-24 18:21:34.274776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:41.263 [2024-07-24 18:21:34.283814] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:41.263 [2024-07-24 18:21:34.283969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.263 [2024-07-24 18:21:34.283987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:41.263 [2024-07-24 18:21:34.293026] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:41.263 [2024-07-24 18:21:34.293181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:19078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.263 [2024-07-24 18:21:34.293197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:41.263 [2024-07-24 18:21:34.302450] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:41.263 [2024-07-24 18:21:34.302614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.263 [2024-07-24 18:21:34.302632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:41.263 [2024-07-24 18:21:34.311660] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:41.263 [2024-07-24 18:21:34.311814] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.263 [2024-07-24 18:21:34.311831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:41.263 [2024-07-24 18:21:34.320840] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:41.263 [2024-07-24 18:21:34.321007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:11734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.263 [2024-07-24 18:21:34.321024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:41.263 [2024-07-24 18:21:34.330082] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:41.263 [2024-07-24 18:21:34.330253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.263 [2024-07-24 18:21:34.330270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:41.263 [2024-07-24 18:21:34.339336] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:41.263 [2024-07-24 18:21:34.339496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.263 [2024-07-24 18:21:34.339513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:41.522 [2024-07-24 18:21:34.348967] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:41.522 [2024-07-24 18:21:34.349127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:15214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.522 [2024-07-24 18:21:34.349144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:41.522 [2024-07-24 18:21:34.358321] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:41.522 [2024-07-24 18:21:34.358481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.522 [2024-07-24 18:21:34.358502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:41.522 [2024-07-24 18:21:34.367707] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:41.522 [2024-07-24 18:21:34.367879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.522 [2024-07-24 18:21:34.367896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:41.522 [2024-07-24 18:21:34.377069] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78 00:26:41.522 [2024-07-24 18:21:34.377242] nvme_qpair.c: 
00:26:41.522 [... repeated output condensed: from 18:21:34.377 to 18:21:35.278 (elapsed 00:26:41.522 through 00:26:42.303) the 4 KiB randwrite phase loops through the same three-line pattern on tqpair=(0x22da420), pdu=0x2000190fda78:
00:26:41.522 tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22da420) with pdu=0x2000190fda78
00:26:41.522 nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:<8|9|10> nsid:1 lba:<varies> len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:41.522 nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:<8|9|10> cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:26:42.303 ... roughly one hundred such iterations elided; every injected digest error is followed by a 4096-byte WRITE on qid:1 and a transient transport error completion ...]
00:26:42.303
00:26:42.303                                                 Latency(us)
00:26:42.303 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:42.303 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:26:42.303 nvme0n1 : 2.00 27450.68 107.23 0.00 0.00 4654.77 4493.90 9861.61
00:26:42.303 ===================================================================================================================
00:26:42.303 Total : 27450.68 107.23 0.00 0.00 4654.77 4493.90 9861.61
00:26:42.303 0
00:26:42.303 18:21:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
18:21:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
18:21:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:26:42.303 | .driver_specific
00:26:42.303 | .nvme_error
00:26:42.303 | .status_code
00:26:42.303 | .command_transient_transport_error'
18:21:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:26:42.561 18:21:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 215 > 0 ))
18:21:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3551746
18:21:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 3551746 ']'
18:21:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 3551746
18:21:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
18:21:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
18:21:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3551746
18:21:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
18:21:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
18:21:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3551746'
killing process with pid 3551746
18:21:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 3551746
Received shutdown signal, test time was about 2.000000 seconds
00:26:42.561
00:26:42.561                                                 Latency(us)
00:26:42.561 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:42.561 ===================================================================================================================
00:26:42.561 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:26:42.561 18:21:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 3551746
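The pass/fail check traced above is just a counter read over bdevperf's RPC socket: `bdev_get_iostat` reports per-status-code NVMe error counts (enabled earlier via `--nvme-error-stat`), and the harness requires the transient-transport-error count to be positive. Below is a minimal standalone sketch of that extraction, run against a canned JSON document instead of a live socket; only the jq field path is taken from the trace, while the sample value 215 is illustrative.

```bash
#!/usr/bin/env bash
# Sketch only: replays the host/digest.sh@27-71 counter check against a
# canned bdev_get_iostat response. The 215 here is illustrative.
sample='{"bdevs":[{"name":"nvme0n1","driver_specific":{"nvme_error":
  {"status_code":{"command_transient_transport_error":215}}}}]}'

# Same filter the test traces at host/digest.sh@28.
errcount=$(jq -r '.bdevs[0]
  | .driver_specific
  | .nvme_error
  | .status_code
  | .command_transient_transport_error' <<< "$sample")

# The test passes when at least one injected digest error surfaced as a
# transient transport error completion.
(( errcount > 0 )) && echo "OK: $errcount transient transport errors"
```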
00:26:42.820 18:21:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
18:21:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
18:21:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:26:42.820 18:21:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:26:42.820 18:21:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:26:42.820 18:21:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3552355
00:26:42.820 18:21:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3552355 /var/tmp/bperf.sock
00:26:42.820 18:21:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:26:42.820 18:21:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 3552355 ']'
00:26:42.820 18:21:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:26:42.820 18:21:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:26:42.820 18:21:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:26:42.820 18:21:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:26:42.820 18:21:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:42.820 [2024-07-24 18:21:35.758399] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization...
00:26:42.820 [2024-07-24 18:21:35.758448] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3552355 ]
00:26:42.820 I/O size of 131072 is greater than zero copy threshold (65536).
00:26:42.820 Zero copy mechanism will not be used.
00:26:42.820 EAL: No free 2048 kB hugepages reported on node 1
00:26:42.820 [2024-07-24 18:21:35.814205] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:42.820 [2024-07-24 18:21:35.892418] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:26:43.753 18:21:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:26:43.753 18:21:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:26:43.753 18:21:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:43.753 18:21:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:43.753 18:21:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:26:43.753 18:21:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:43.753 18:21:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:43.753 18:21:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:43.753 18:21:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:43.753 18:21:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:44.011 nvme0n1
00:26:44.011 18:21:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:26:44.011 18:21:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:44.011 18:21:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:44.011 18:21:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:44.011 18:21:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:26:44.011 18:21:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:26:44.271 I/O size of 131072 is greater than zero copy threshold (65536).
00:26:44.271 Zero copy mechanism will not be used.
00:26:44.271 Running I/O for 2 seconds...
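Condensing the xtrace above: the 128 KiB error run is wired up with a handful of RPCs before `perform_tests` starts the clock. The following is a hedged sketch of the same sequence, not the harness itself; paths, addresses, and flags are copied from the trace, and the assumption that `rpc_cmd` targets the target application's default RPC socket (versus `bperf_rpc` targeting bdevperf's socket) is mine.

```bash
#!/usr/bin/env bash
# Sketch of the setup traced above; assumes bdevperf is already listening
# on /var/tmp/bperf.sock and the nvmf target on its default RPC socket.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
BPERF_RPC="$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock"   # harness' bperf_rpc
TGT_RPC="$SPDK/scripts/rpc.py"                            # harness' rpc_cmd (assumed default socket)

# Count NVMe errors per status code and retry failed I/O indefinitely, so
# injected digest errors show up as counters instead of failing the job.
$BPERF_RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Start from a clean injection state, then attach the target with the
# host-side data digest (--ddgst) enabled.
$TGT_RPC accel_error_inject_error -o crc32c -t disable
$BPERF_RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Corrupt crc32c results in the accel layer (-i 32 per the trace), then
# start the timed bdevperf run.
$TGT_RPC accel_error_inject_error -o crc32c -t corrupt -i 32
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests
```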
00:26:44.271 [2024-07-24 18:21:37.168000] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90
00:26:44.271 [2024-07-24 18:21:37.168373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.271 [2024-07-24 18:21:37.168398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:44.271 [... repeated output condensed: from 18:21:37.173 onward the 128 KiB phase loops through the same three-line pattern on tqpair=(0x22dc0a0), pdu=0x2000190fef90 — an injected data digest error, a WRITE on qid:1 cid:15 (len:32, varying LBAs), and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion with sqhd cycling 0001/0021/0041/0061; the run continues beyond this excerpt ...]
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.271 [2024-07-24 18:21:37.266922] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:44.271 [2024-07-24 18:21:37.267282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.271 [2024-07-24 18:21:37.267300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.271 [2024-07-24 18:21:37.271461] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:44.271 [2024-07-24 18:21:37.271827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.272 [2024-07-24 18:21:37.271845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.272 [2024-07-24 18:21:37.276706] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:44.272 [2024-07-24 18:21:37.277081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.272 [2024-07-24 18:21:37.277100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.272 [2024-07-24 18:21:37.281927] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:44.272 [2024-07-24 18:21:37.282270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.272 [2024-07-24 18:21:37.282289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.272 [2024-07-24 18:21:37.286601] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:44.272 [2024-07-24 18:21:37.286971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.272 [2024-07-24 18:21:37.286990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.272 [2024-07-24 18:21:37.291291] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:44.272 [2024-07-24 18:21:37.291650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.272 [2024-07-24 18:21:37.291669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.272 [2024-07-24 18:21:37.296470] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:44.272 [2024-07-24 18:21:37.296842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.272 
[2024-07-24 18:21:37.296861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.272 [2024-07-24 18:21:37.302627] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:44.272 [2024-07-24 18:21:37.302999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.272 [2024-07-24 18:21:37.303017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.272 [2024-07-24 18:21:37.309472] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:44.272 [2024-07-24 18:21:37.309838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.272 [2024-07-24 18:21:37.309856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.272 [2024-07-24 18:21:37.317003] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:44.272 [2024-07-24 18:21:37.317385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.272 [2024-07-24 18:21:37.317403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.272 [2024-07-24 18:21:37.324627] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:44.272 [2024-07-24 18:21:37.324985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.272 [2024-07-24 18:21:37.325003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.272 [2024-07-24 18:21:37.332128] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:44.272 [2024-07-24 18:21:37.332495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.272 [2024-07-24 18:21:37.332514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.272 [2024-07-24 18:21:37.339633] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:44.272 [2024-07-24 18:21:37.339904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.272 [2024-07-24 18:21:37.339923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.272 [2024-07-24 18:21:37.347418] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:44.272 [2024-07-24 18:21:37.347803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.272 [2024-07-24 18:21:37.347822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.531 [2024-07-24 18:21:37.354913] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:44.531 [2024-07-24 18:21:37.355286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.531 [2024-07-24 18:21:37.355304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.531 [2024-07-24 18:21:37.362288] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:44.531 [2024-07-24 18:21:37.362672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.531 [2024-07-24 18:21:37.362691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.531 [2024-07-24 18:21:37.370358] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:44.531 [2024-07-24 18:21:37.370725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.531 [2024-07-24 18:21:37.370743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.531 [2024-07-24 18:21:37.377761] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:44.531 [2024-07-24 18:21:37.378147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.531 [2024-07-24 18:21:37.378165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.531 [2024-07-24 18:21:37.385424] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:44.531 [2024-07-24 18:21:37.385791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.531 [2024-07-24 18:21:37.385809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.531 [2024-07-24 18:21:37.393062] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:44.531 [2024-07-24 18:21:37.393424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.531 [2024-07-24 18:21:37.393443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.531 [2024-07-24 18:21:37.399823] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:44.531 [2024-07-24 18:21:37.400184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.531 [2024-07-24 18:21:37.400202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.531 [2024-07-24 18:21:37.405773] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:44.531 [2024-07-24 18:21:37.406131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.532 [2024-07-24 18:21:37.406152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.532 [2024-07-24 18:21:37.412771] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:44.532 [2024-07-24 18:21:37.413157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.532 [2024-07-24 18:21:37.413175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.532 [2024-07-24 18:21:37.419956] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:44.532 [2024-07-24 18:21:37.420344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.532 [2024-07-24 18:21:37.420364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.532 [2024-07-24 18:21:37.428107] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:44.532 [2024-07-24 18:21:37.428541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.532 [2024-07-24 18:21:37.428559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.532 [2024-07-24 18:21:37.435997] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:44.532 [2024-07-24 18:21:37.436468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.532 [2024-07-24 18:21:37.436487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.532 [2024-07-24 18:21:37.443948] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:44.532 [2024-07-24 18:21:37.444382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.532 [2024-07-24 18:21:37.444400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.532 [2024-07-24 18:21:37.451942] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:44.532 [2024-07-24 18:21:37.452404] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.532 [2024-07-24 18:21:37.452423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.532 [2024-07-24 18:21:37.458924] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:44.532 [2024-07-24 18:21:37.459361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.532 [2024-07-24 18:21:37.459378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.532 [2024-07-24 18:21:37.466551] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:44.532 [2024-07-24 18:21:37.467003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.532 [2024-07-24 18:21:37.467021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.532 [2024-07-24 18:21:37.473214] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:44.532 [2024-07-24 18:21:37.473566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.532 [2024-07-24 18:21:37.473584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.532 [2024-07-24 18:21:37.479140] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:44.532 [2024-07-24 18:21:37.479506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.532 [2024-07-24 18:21:37.479525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.532 [2024-07-24 18:21:37.484631] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:44.532 [2024-07-24 18:21:37.484977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.532 [2024-07-24 18:21:37.484995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.532 [2024-07-24 18:21:37.490209] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:44.532 [2024-07-24 18:21:37.490552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.532 [2024-07-24 18:21:37.490570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.532 [2024-07-24 18:21:37.495567] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:44.532 
[2024-07-24 18:21:37.495906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.532 [2024-07-24 18:21:37.495924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.532 [2024-07-24 18:21:37.501938] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:44.532 [2024-07-24 18:21:37.502281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.532 [2024-07-24 18:21:37.502299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.532 [2024-07-24 18:21:37.507220] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:44.532 [2024-07-24 18:21:37.507552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.532 [2024-07-24 18:21:37.507570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.532 [2024-07-24 18:21:37.512382] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:44.532 [2024-07-24 18:21:37.512732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.532 [2024-07-24 18:21:37.512760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.532 [2024-07-24 18:21:37.517361] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:44.532 [2024-07-24 18:21:37.517705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.532 [2024-07-24 18:21:37.517724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.532 [2024-07-24 18:21:37.522313] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:44.532 [2024-07-24 18:21:37.522647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.532 [2024-07-24 18:21:37.522665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.532 [2024-07-24 18:21:37.528667] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:44.532 [2024-07-24 18:21:37.529135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.532 [2024-07-24 18:21:37.529153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.532 [2024-07-24 18:21:37.536687] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:44.532 [2024-07-24 18:21:37.537022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.532 [2024-07-24 18:21:37.537040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.532 [2024-07-24 18:21:37.543182] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:44.532 [2024-07-24 18:21:37.543607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.532 [2024-07-24 18:21:37.543625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.532 [2024-07-24 18:21:37.549458] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:44.532 [2024-07-24 18:21:37.549837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.532 [2024-07-24 18:21:37.549855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.532 [2024-07-24 18:21:37.556470] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:44.532 [2024-07-24 18:21:37.556845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.532 [2024-07-24 18:21:37.556864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.532 [2024-07-24 18:21:37.562890] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:44.532 [2024-07-24 18:21:37.563238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.532 [2024-07-24 18:21:37.563257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.532 [2024-07-24 18:21:37.568616] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:44.532 [2024-07-24 18:21:37.568959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.532 [2024-07-24 18:21:37.568978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.532 [2024-07-24 18:21:37.573996] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:44.533 [2024-07-24 18:21:37.574337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.533 [2024-07-24 18:21:37.574360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.533 [2024-07-24 18:21:37.578973] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:44.533 [2024-07-24 18:21:37.579308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.533 [2024-07-24 18:21:37.579326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.533 [2024-07-24 18:21:37.583575] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:44.533 [2024-07-24 18:21:37.583929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.533 [2024-07-24 18:21:37.583947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.533 [2024-07-24 18:21:37.588130] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:44.533 [2024-07-24 18:21:37.588457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.533 [2024-07-24 18:21:37.588475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.533 [2024-07-24 18:21:37.592644] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:44.533 [2024-07-24 18:21:37.593000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.533 [2024-07-24 18:21:37.593017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.533 [2024-07-24 18:21:37.597159] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:44.533 [2024-07-24 18:21:37.597504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.533 [2024-07-24 18:21:37.597522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.533 [2024-07-24 18:21:37.601681] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:44.533 [2024-07-24 18:21:37.602020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.533 [2024-07-24 18:21:37.602037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.533 [2024-07-24 18:21:37.606183] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:44.533 [2024-07-24 18:21:37.606514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.533 [2024-07-24 18:21:37.606549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
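The repeating triplets above are the expected shape of a data-digest error-injection pass: the transport's data_crc32_calc_done() rejects each WRITE PDU whose payload fails the CRC32C data-digest check, and the host side reports it as COMMAND TRANSIENT TRANSPORT ERROR (00/22), i.e. status code type 0 (generic) with status code 0x22, with dnr:0 so the command remains retryable. The following is a minimal standalone sketch of the two pieces involved — the CRC32C digest comparison and the completion-status decoding behind the "(00/22) p:0 m:0 dnr:0" rendering. It is illustrative only (bitwise CRC32C with the conventional 0xFFFFFFFF seed and final XOR; the buffer and helper names are hypothetical), not SPDK's actual implementation:

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* NVMe/TCP data digests are CRC32C (Castagnoli). This is the plain bitwise,
 * reflected form; real implementations use table- or instruction-accelerated
 * variants, but the result is the same. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
	uint32_t crc = 0xFFFFFFFFu;
	for (size_t i = 0; i < len; i++) {
		crc ^= buf[i];
		for (int b = 0; b < 8; b++)
			crc = (crc >> 1) ^ ((crc & 1) ? 0x82F63B78u : 0);
	}
	return crc ^ 0xFFFFFFFFu;
}

/* Decode the 16-bit NVMe completion status field into the fields the log
 * prints: phase tag, status code type / status code, more, do-not-retry. */
static void print_status(uint16_t sf)
{
	unsigned p   = sf & 0x1;          /* phase tag */
	unsigned sc  = (sf >> 1) & 0xFF;  /* status code: 0x22 = transient transport error */
	unsigned sct = (sf >> 9) & 0x7;   /* status code type: 0 = generic command status */
	unsigned m   = (sf >> 14) & 0x1;  /* more */
	unsigned dnr = (sf >> 15) & 0x1;  /* do not retry */
	printf("(%02x/%02x) p:%u m:%u dnr:%u\n", sct, sc, p, m, dnr);
}

int main(void)
{
	uint8_t payload[512] = {0};              /* hypothetical PDU payload */
	uint32_t received_digest = 0xDEADBEEFu;  /* deliberately wrong, as the injector sends */

	if (crc32c(payload, sizeof(payload)) != received_digest)
		fprintf(stderr, "Data digest error\n");

	/* SCT=0, SC=0x22, m=0, dnr=0 -> prints "(00/22) p:0 m:0 dnr:0". */
	print_status((uint16_t)(0x22 << 1));
	return 0;
}

Because dnr:0 marks each failure as retryable, the initiator keeps the queue pair up and the sequence continues record after record, rather than tearing the connection down on the first digest mismatch.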
00:26:44.533 [2024-07-24 18:21:37.610725] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:44.533 [2024-07-24 18:21:37.611062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.533 [2024-07-24 18:21:37.611081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.792 [2024-07-24 18:21:37.615316] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:44.792 [2024-07-24 18:21:37.615669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.792 [2024-07-24 18:21:37.615688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.792 [2024-07-24 18:21:37.619900] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:44.792 [2024-07-24 18:21:37.620239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.792 [2024-07-24 18:21:37.620257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.792 [2024-07-24 18:21:37.624381] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:44.792 [2024-07-24 18:21:37.624716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.792 [2024-07-24 18:21:37.624734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.792 [2024-07-24 18:21:37.628884] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:44.792 [2024-07-24 18:21:37.629223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.792 [2024-07-24 18:21:37.629241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.793 [2024-07-24 18:21:37.633421] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:44.793 [2024-07-24 18:21:37.633765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.793 [2024-07-24 18:21:37.633784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.793 [2024-07-24 18:21:37.638003] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:44.793 [2024-07-24 18:21:37.638338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.793 [2024-07-24 18:21:37.638356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.793 [2024-07-24 18:21:37.642510] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:44.793 [2024-07-24 18:21:37.642857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.793 [2024-07-24 18:21:37.642875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.793 [2024-07-24 18:21:37.647046] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:44.793 [2024-07-24 18:21:37.647389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.793 [2024-07-24 18:21:37.647408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.793 [2024-07-24 18:21:37.651526] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:44.793 [2024-07-24 18:21:37.651881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.793 [2024-07-24 18:21:37.651903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.793 [2024-07-24 18:21:37.655989] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:44.793 [2024-07-24 18:21:37.656328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.793 [2024-07-24 18:21:37.656346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.793 [2024-07-24 18:21:37.660474] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:44.793 [2024-07-24 18:21:37.660823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.793 [2024-07-24 18:21:37.660840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.793 [2024-07-24 18:21:37.664958] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:44.793 [2024-07-24 18:21:37.665291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.793 [2024-07-24 18:21:37.665310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.793 [2024-07-24 18:21:37.669443] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:44.793 [2024-07-24 18:21:37.669781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.793 [2024-07-24 18:21:37.669799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.793 [2024-07-24 18:21:37.673953] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:44.793 [2024-07-24 18:21:37.674289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.793 [2024-07-24 18:21:37.674307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.793 [2024-07-24 18:21:37.678503] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:44.793 [2024-07-24 18:21:37.678848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.793 [2024-07-24 18:21:37.678866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.793 [2024-07-24 18:21:37.683036] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:44.793 [2024-07-24 18:21:37.683372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.793 [2024-07-24 18:21:37.683390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.793 [2024-07-24 18:21:37.687505] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:44.793 [2024-07-24 18:21:37.687859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.793 [2024-07-24 18:21:37.687877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.793 [2024-07-24 18:21:37.692007] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:44.793 [2024-07-24 18:21:37.692349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.793 [2024-07-24 18:21:37.692368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.793 [2024-07-24 18:21:37.696463] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:44.793 [2024-07-24 18:21:37.696827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.793 [2024-07-24 18:21:37.696845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.793 [2024-07-24 18:21:37.700932] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:44.793 [2024-07-24 18:21:37.701260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.793 [2024-07-24 18:21:37.701279] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.793 [2024-07-24 18:21:37.705401] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:44.793 [2024-07-24 18:21:37.705747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.793 [2024-07-24 18:21:37.705765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.793 [2024-07-24 18:21:37.709921] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:44.793 [2024-07-24 18:21:37.710250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.793 [2024-07-24 18:21:37.710268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.793 [2024-07-24 18:21:37.714386] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:44.793 [2024-07-24 18:21:37.714726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.793 [2024-07-24 18:21:37.714744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.793 [2024-07-24 18:21:37.718899] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:44.793 [2024-07-24 18:21:37.719238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.793 [2024-07-24 18:21:37.719256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.793 [2024-07-24 18:21:37.723952] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:44.793 [2024-07-24 18:21:37.724286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.793 [2024-07-24 18:21:37.724305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.793 [2024-07-24 18:21:37.729335] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:44.793 [2024-07-24 18:21:37.729663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.793 [2024-07-24 18:21:37.729681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.793 [2024-07-24 18:21:37.733975] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:44.793 [2024-07-24 18:21:37.734313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.793 
[2024-07-24 18:21:37.734331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.793 [2024-07-24 18:21:37.738559] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:44.793 [2024-07-24 18:21:37.738902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.793 [2024-07-24 18:21:37.738919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.793 [2024-07-24 18:21:37.743169] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:44.793 [2024-07-24 18:21:37.743512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.793 [2024-07-24 18:21:37.743530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.793 [2024-07-24 18:21:37.747744] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:44.793 [2024-07-24 18:21:37.748085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.793 [2024-07-24 18:21:37.748103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.793 [2024-07-24 18:21:37.752476] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:44.793 [2024-07-24 18:21:37.752817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.793 [2024-07-24 18:21:37.752835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.794 [2024-07-24 18:21:37.756961] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:44.794 [2024-07-24 18:21:37.757283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.794 [2024-07-24 18:21:37.757302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.794 [2024-07-24 18:21:37.761458] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:44.794 [2024-07-24 18:21:37.761807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.794 [2024-07-24 18:21:37.761825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.794 [2024-07-24 18:21:37.765947] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:44.794 [2024-07-24 18:21:37.766283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.794 [2024-07-24 18:21:37.766302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.794 [2024-07-24 18:21:37.770486] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:44.794 [2024-07-24 18:21:37.770833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.794 [2024-07-24 18:21:37.770855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.794 [2024-07-24 18:21:37.775007] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:44.794 [2024-07-24 18:21:37.775339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.794 [2024-07-24 18:21:37.775358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.794 [2024-07-24 18:21:37.779564] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:44.794 [2024-07-24 18:21:37.779907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.794 [2024-07-24 18:21:37.779926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.794 [2024-07-24 18:21:37.784105] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:44.794 [2024-07-24 18:21:37.784450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.794 [2024-07-24 18:21:37.784468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.794 [2024-07-24 18:21:37.788602] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:44.794 [2024-07-24 18:21:37.788950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.794 [2024-07-24 18:21:37.788968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.794 [2024-07-24 18:21:37.793128] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:44.794 [2024-07-24 18:21:37.793473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.794 [2024-07-24 18:21:37.793497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.794 [2024-07-24 18:21:37.797642] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:44.794 [2024-07-24 18:21:37.797982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.794 [2024-07-24 18:21:37.798000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:44.794 [2024-07-24 18:21:37.802200] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90
00:26:44.794 [2024-07-24 18:21:37.802536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.794 [2024-07-24 18:21:37.802554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:44.794 [2024-07-24 18:21:37.806718] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90
00:26:44.794 [2024-07-24 18:21:37.807056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:44.794 [2024-07-24 18:21:37.807074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... more than a hundred further WRITE commands (sqid:1 cid:15 nsid:1, len:32, lba varying) fail identically between 18:21:37.811 and 18:21:38.467: each triggers tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 and completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 p:0 m:0 dnr:0 ...]
00:26:45.581 [2024-07-24 18:21:38.471977] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90
00:26:45.581 [2024-07-24 18:21:38.472224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.581 [2024-07-24 18:21:38.472242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:45.581 [2024-07-24 18:21:38.477236] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:45.581 [2024-07-24 18:21:38.477479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.581 [2024-07-24 18:21:38.477502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.581 [2024-07-24 18:21:38.482769] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:45.581 [2024-07-24 18:21:38.483014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.581 [2024-07-24 18:21:38.483032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:45.581 [2024-07-24 18:21:38.488052] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:45.581 [2024-07-24 18:21:38.488315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.581 [2024-07-24 18:21:38.488332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:45.581 [2024-07-24 18:21:38.493522] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:45.581 [2024-07-24 18:21:38.493795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.581 [2024-07-24 18:21:38.493813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:45.581 [2024-07-24 18:21:38.498136] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:45.581 [2024-07-24 18:21:38.498411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.581 [2024-07-24 18:21:38.498429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.581 [2024-07-24 18:21:38.502331] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:45.581 [2024-07-24 18:21:38.502588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.581 [2024-07-24 18:21:38.502606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:45.581 [2024-07-24 18:21:38.506194] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:45.581 [2024-07-24 18:21:38.506441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.581 [2024-07-24 18:21:38.506459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:45.581 [2024-07-24 18:21:38.510167] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:45.581 [2024-07-24 18:21:38.510422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.581 [2024-07-24 18:21:38.510440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:45.581 [2024-07-24 18:21:38.514051] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:45.581 [2024-07-24 18:21:38.514301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.581 [2024-07-24 18:21:38.514319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.581 [2024-07-24 18:21:38.517942] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:45.581 [2024-07-24 18:21:38.518202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.581 [2024-07-24 18:21:38.518219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:45.581 [2024-07-24 18:21:38.522440] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:45.581 [2024-07-24 18:21:38.522701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.581 [2024-07-24 18:21:38.522719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:45.581 [2024-07-24 18:21:38.526598] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:45.581 [2024-07-24 18:21:38.526841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.581 [2024-07-24 18:21:38.526859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:45.581 [2024-07-24 18:21:38.530459] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:45.581 [2024-07-24 18:21:38.530698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.581 [2024-07-24 18:21:38.530719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.581 [2024-07-24 18:21:38.534265] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:45.581 [2024-07-24 18:21:38.534511] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.581 [2024-07-24 18:21:38.534529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:45.581 [2024-07-24 18:21:38.538011] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:45.581 [2024-07-24 18:21:38.538233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.581 [2024-07-24 18:21:38.538251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:45.581 [2024-07-24 18:21:38.542123] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:45.581 [2024-07-24 18:21:38.542360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.581 [2024-07-24 18:21:38.542378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:45.581 [2024-07-24 18:21:38.547128] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:45.581 [2024-07-24 18:21:38.547351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.582 [2024-07-24 18:21:38.547369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.582 [2024-07-24 18:21:38.552399] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:45.582 [2024-07-24 18:21:38.552670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.582 [2024-07-24 18:21:38.552688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:45.582 [2024-07-24 18:21:38.556820] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:45.582 [2024-07-24 18:21:38.557079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.582 [2024-07-24 18:21:38.557097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:45.582 [2024-07-24 18:21:38.561123] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:45.582 [2024-07-24 18:21:38.561369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.582 [2024-07-24 18:21:38.561387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:45.582 [2024-07-24 18:21:38.565263] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:45.582 
[2024-07-24 18:21:38.565519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.582 [2024-07-24 18:21:38.565537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.582 [2024-07-24 18:21:38.569391] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:45.582 [2024-07-24 18:21:38.569651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.582 [2024-07-24 18:21:38.569669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:45.582 [2024-07-24 18:21:38.574011] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:45.582 [2024-07-24 18:21:38.574249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.582 [2024-07-24 18:21:38.574268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:45.582 [2024-07-24 18:21:38.578085] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:45.582 [2024-07-24 18:21:38.578338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.582 [2024-07-24 18:21:38.578356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:45.582 [2024-07-24 18:21:38.581987] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:45.582 [2024-07-24 18:21:38.582225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.582 [2024-07-24 18:21:38.582243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.582 [2024-07-24 18:21:38.585808] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:45.582 [2024-07-24 18:21:38.586047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.582 [2024-07-24 18:21:38.586065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:45.582 [2024-07-24 18:21:38.590078] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:45.582 [2024-07-24 18:21:38.590305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.582 [2024-07-24 18:21:38.590322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:45.582 [2024-07-24 18:21:38.593959] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:45.582 [2024-07-24 18:21:38.594180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.582 [2024-07-24 18:21:38.594198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:45.582 [2024-07-24 18:21:38.597758] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:45.582 [2024-07-24 18:21:38.598017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.582 [2024-07-24 18:21:38.598035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.582 [2024-07-24 18:21:38.601570] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:45.582 [2024-07-24 18:21:38.601807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.582 [2024-07-24 18:21:38.601825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:45.582 [2024-07-24 18:21:38.605610] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:45.582 [2024-07-24 18:21:38.605860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.582 [2024-07-24 18:21:38.605879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:45.582 [2024-07-24 18:21:38.610360] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:45.582 [2024-07-24 18:21:38.610676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.582 [2024-07-24 18:21:38.610694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:45.582 [2024-07-24 18:21:38.615656] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:45.582 [2024-07-24 18:21:38.615925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.582 [2024-07-24 18:21:38.615943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.582 [2024-07-24 18:21:38.619994] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:45.582 [2024-07-24 18:21:38.620293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.582 [2024-07-24 18:21:38.620311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:45.582 [2024-07-24 18:21:38.625030] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:45.582 [2024-07-24 18:21:38.625284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.582 [2024-07-24 18:21:38.625303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:45.582 [2024-07-24 18:21:38.629950] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:45.582 [2024-07-24 18:21:38.630186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.582 [2024-07-24 18:21:38.630204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:45.582 [2024-07-24 18:21:38.635640] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:45.582 [2024-07-24 18:21:38.635925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.582 [2024-07-24 18:21:38.635943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.582 [2024-07-24 18:21:38.641663] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:45.582 [2024-07-24 18:21:38.641909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.582 [2024-07-24 18:21:38.641927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:45.582 [2024-07-24 18:21:38.646149] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:45.582 [2024-07-24 18:21:38.646375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.582 [2024-07-24 18:21:38.646396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:45.582 [2024-07-24 18:21:38.650657] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:45.582 [2024-07-24 18:21:38.650890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.582 [2024-07-24 18:21:38.650908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:45.582 [2024-07-24 18:21:38.655957] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:45.582 [2024-07-24 18:21:38.656181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.582 [2024-07-24 18:21:38.656199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:26:45.582 [2024-07-24 18:21:38.660261] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:45.582 [2024-07-24 18:21:38.660504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.582 [2024-07-24 18:21:38.660522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:45.842 [2024-07-24 18:21:38.664246] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:45.842 [2024-07-24 18:21:38.664488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.842 [2024-07-24 18:21:38.664512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:45.842 [2024-07-24 18:21:38.668203] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:45.842 [2024-07-24 18:21:38.668455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.842 [2024-07-24 18:21:38.668473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:45.842 [2024-07-24 18:21:38.672103] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:45.842 [2024-07-24 18:21:38.672351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.842 [2024-07-24 18:21:38.672370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.842 [2024-07-24 18:21:38.675931] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:45.842 [2024-07-24 18:21:38.676189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.842 [2024-07-24 18:21:38.676207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:45.842 [2024-07-24 18:21:38.679861] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:45.842 [2024-07-24 18:21:38.680111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.842 [2024-07-24 18:21:38.680129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:45.842 [2024-07-24 18:21:38.683987] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:45.842 [2024-07-24 18:21:38.684228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.842 [2024-07-24 18:21:38.684246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:45.842 [2024-07-24 18:21:38.688025] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:45.842 [2024-07-24 18:21:38.688255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.842 [2024-07-24 18:21:38.688273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.842 [2024-07-24 18:21:38.692520] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:45.842 [2024-07-24 18:21:38.692773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.842 [2024-07-24 18:21:38.692791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:45.842 [2024-07-24 18:21:38.697651] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:45.843 [2024-07-24 18:21:38.697923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.843 [2024-07-24 18:21:38.697941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:45.843 [2024-07-24 18:21:38.702582] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:45.843 [2024-07-24 18:21:38.702822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.843 [2024-07-24 18:21:38.702840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:45.843 [2024-07-24 18:21:38.707162] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:45.843 [2024-07-24 18:21:38.707385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.843 [2024-07-24 18:21:38.707402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.843 [2024-07-24 18:21:38.711434] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:45.843 [2024-07-24 18:21:38.711668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.843 [2024-07-24 18:21:38.711688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:45.843 [2024-07-24 18:21:38.715853] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:45.843 [2024-07-24 18:21:38.716111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.843 [2024-07-24 18:21:38.716129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:45.843 [2024-07-24 18:21:38.719653] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:45.843 [2024-07-24 18:21:38.719904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.843 [2024-07-24 18:21:38.719925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:45.843 [2024-07-24 18:21:38.723509] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:45.843 [2024-07-24 18:21:38.723734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.843 [2024-07-24 18:21:38.723752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.843 [2024-07-24 18:21:38.727317] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:45.843 [2024-07-24 18:21:38.727551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.843 [2024-07-24 18:21:38.727570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:45.843 [2024-07-24 18:21:38.731133] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:45.843 [2024-07-24 18:21:38.731356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.843 [2024-07-24 18:21:38.731374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:45.843 [2024-07-24 18:21:38.735093] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:45.843 [2024-07-24 18:21:38.735308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.843 [2024-07-24 18:21:38.735326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:45.843 [2024-07-24 18:21:38.739484] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:45.843 [2024-07-24 18:21:38.739717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.843 [2024-07-24 18:21:38.739734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.843 [2024-07-24 18:21:38.745279] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:45.843 [2024-07-24 18:21:38.745513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.843 [2024-07-24 18:21:38.745531] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:45.843 [2024-07-24 18:21:38.749682] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:45.843 [2024-07-24 18:21:38.749934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.843 [2024-07-24 18:21:38.749952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:45.843 [2024-07-24 18:21:38.753843] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:45.843 [2024-07-24 18:21:38.754071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.843 [2024-07-24 18:21:38.754089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:45.843 [2024-07-24 18:21:38.757972] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:45.843 [2024-07-24 18:21:38.758199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.843 [2024-07-24 18:21:38.758217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.843 [2024-07-24 18:21:38.762348] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:45.843 [2024-07-24 18:21:38.762652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.843 [2024-07-24 18:21:38.762670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:45.843 [2024-07-24 18:21:38.768317] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:45.843 [2024-07-24 18:21:38.768671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.843 [2024-07-24 18:21:38.768689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:45.843 [2024-07-24 18:21:38.774366] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:45.843 [2024-07-24 18:21:38.774635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.843 [2024-07-24 18:21:38.774653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:45.843 [2024-07-24 18:21:38.779261] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:45.843 [2024-07-24 18:21:38.779530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.843 
[2024-07-24 18:21:38.779548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.843 [2024-07-24 18:21:38.783786] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:45.843 [2024-07-24 18:21:38.784054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.843 [2024-07-24 18:21:38.784072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:45.843 [2024-07-24 18:21:38.788927] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:45.843 [2024-07-24 18:21:38.789247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.843 [2024-07-24 18:21:38.789265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:45.843 [2024-07-24 18:21:38.795036] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:45.843 [2024-07-24 18:21:38.795344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.843 [2024-07-24 18:21:38.795362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:45.843 [2024-07-24 18:21:38.800920] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:45.843 [2024-07-24 18:21:38.801181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.843 [2024-07-24 18:21:38.801199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.843 [2024-07-24 18:21:38.808145] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:45.843 [2024-07-24 18:21:38.808425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.843 [2024-07-24 18:21:38.808443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:45.843 [2024-07-24 18:21:38.815114] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:45.843 [2024-07-24 18:21:38.815290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.843 [2024-07-24 18:21:38.815307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:45.843 [2024-07-24 18:21:38.822005] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:45.843 [2024-07-24 18:21:38.822280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.843 [2024-07-24 18:21:38.822298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:45.843 [2024-07-24 18:21:38.827816] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:45.843 [2024-07-24 18:21:38.828072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.843 [2024-07-24 18:21:38.828091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.843 [2024-07-24 18:21:38.833698] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:45.844 [2024-07-24 18:21:38.833985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.844 [2024-07-24 18:21:38.834003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:45.844 [2024-07-24 18:21:38.840216] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:45.844 [2024-07-24 18:21:38.840518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.844 [2024-07-24 18:21:38.840536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:45.844 [2024-07-24 18:21:38.844718] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:45.844 [2024-07-24 18:21:38.844962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.844 [2024-07-24 18:21:38.844980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:45.844 [2024-07-24 18:21:38.848685] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:45.844 [2024-07-24 18:21:38.848944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.844 [2024-07-24 18:21:38.848962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.844 [2024-07-24 18:21:38.852639] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:45.844 [2024-07-24 18:21:38.852888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.844 [2024-07-24 18:21:38.852909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:45.844 [2024-07-24 18:21:38.856427] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:45.844 [2024-07-24 18:21:38.856685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.844 [2024-07-24 18:21:38.856703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:45.844 [2024-07-24 18:21:38.860289] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:45.844 [2024-07-24 18:21:38.860557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.844 [2024-07-24 18:21:38.860575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:45.844 [2024-07-24 18:21:38.864128] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:45.844 [2024-07-24 18:21:38.864373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.844 [2024-07-24 18:21:38.864391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.844 [2024-07-24 18:21:38.867951] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:45.844 [2024-07-24 18:21:38.868177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.844 [2024-07-24 18:21:38.868195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:45.844 [2024-07-24 18:21:38.871784] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:45.844 [2024-07-24 18:21:38.872032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.844 [2024-07-24 18:21:38.872051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:45.844 [2024-07-24 18:21:38.875597] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:45.844 [2024-07-24 18:21:38.875837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.844 [2024-07-24 18:21:38.875855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:45.844 [2024-07-24 18:21:38.879428] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:45.844 [2024-07-24 18:21:38.879671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.844 [2024-07-24 18:21:38.879689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.844 [2024-07-24 18:21:38.883257] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90 00:26:45.844 [2024-07-24 18:21:38.883488] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.844 [2024-07-24 18:21:38.883512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:45.844 [2024-07-24 18:21:38.887068] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90
00:26:45.844 [2024-07-24 18:21:38.887308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.844 [2024-07-24 18:21:38.887326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... the same data-digest-error / WRITE / COMMAND TRANSIENT TRANSPORT ERROR triplet repeats for roughly sixty further writes (varying lba, sqhd cycling 0001/0021/0041/0061) between 18:21:38.890 and 18:21:39.161; individual entries omitted ...]
00:26:46.106 [2024-07-24 18:21:39.165885] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x22dc0a0) with pdu=0x2000190fef90
00:26:46.106 [2024-07-24 18:21:39.166114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:46.106 [2024-07-24 18:21:39.166133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:46.106
00:26:46.106                                           Latency(us)
00:26:46.106 Device Information             : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:26:46.106 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:26:46.106 nvme0n1                        :       2.00    6498.07     812.26       0.00       0.00    2458.07    1419.95    8301.23
00:26:46.106 ===================================================================================================================
00:26:46.106 Total                          :               6498.07     812.26       0.00       0.00    2458.07    1419.95    8301.23
00:26:46.106 0
00:26:46.364 18:21:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:26:46.364 18:21:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:26:46.364 18:21:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:26:46.364 | .driver_specific
00:26:46.364 | .nvme_error
00:26:46.364 | .status_code
00:26:46.364 | .command_transient_transport_error'
00:26:46.364 18:21:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
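The assertion that follows is the point of the whole nvmf_digest_error pass: the NVMe bdev keeps per-status error counters, and the test requires that at least one COMMAND TRANSIENT TRANSPORT ERROR (status 00/22) was recorded, proving the deliberately corrupted data digests were caught end to end. A minimal sketch of that check, built from the rpc.py invocation and jq filter in the trace above (paths shortened; get_transient_errcount is the helper name host/digest.sh uses):

    # Sketch: count transient transport errors recorded against the nvme0n1 bdev
    errcount=$(scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    (( errcount > 0 ))   # this run counted 419, so the assertion passes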
00:26:46.364 18:21:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 419 > 0 ))
00:26:46.364 18:21:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3552355
00:26:46.364 18:21:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 3552355 ']'
00:26:46.364 18:21:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 3552355
00:26:46.364 18:21:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:26:46.364 18:21:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:26:46.364 18:21:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3552355
00:26:46.364 18:21:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:26:46.364 18:21:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:26:46.364 18:21:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3552355'
00:26:46.364 killing process with pid 3552355
00:26:46.364 18:21:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 3552355
00:26:46.364 Received shutdown signal, test time was about 2.000000 seconds
00:26:46.364
00:26:46.364                                           Latency(us)
00:26:46.364 Device Information             : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:26:46.364 ===================================================================================================================
00:26:46.364 Total                          :       0.00       0.00       0.00       0.00       0.00       0.00       0.00
00:26:46.364 18:21:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 3552355
00:26:46.623 18:21:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 3550229
00:26:46.623 18:21:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 3550229 ']'
00:26:46.623 18:21:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 3550229
00:26:46.623 18:21:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:26:46.623 18:21:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:26:46.623 18:21:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3550229
00:26:46.623 18:21:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:26:46.623 18:21:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:26:46.623 18:21:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3550229'
00:26:46.623 killing process with pid 3550229
00:26:46.623 18:21:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 3550229
00:26:46.623 18:21:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 3550229
00:26:46.881
00:26:46.881 real	0m16.829s
00:26:46.881 user	0m31.959s
00:26:46.881 sys	0m4.694s
00:26:46.881 18:21:39
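killprocess() from autotest_common.sh, replayed twice above (first for the bperf process, then for a digest helper), follows a defensive pattern: confirm the pid is alive, inspect the command name so a sudo wrapper is never signalled by mistake, then kill and reap. Roughly, for some $pid:

    kill -0 "$pid"                                  # liveness probe; errors out if the pid is gone
    process_name=$(ps --no-headers -o comm= "$pid") # reactor_1 / reactor_0 in the runs above
    if [ "$process_name" = sudo ]; then
        kill "$(pgrep -P "$pid")"                   # hypothetical branch: signal the real child, not sudo
    else
        kill "$pid"
    fi
    wait "$pid"                                     # reap and collect the exit status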
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:46.881 18:21:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:46.881 ************************************ 00:26:46.881 END TEST nvmf_digest_error 00:26:46.881 ************************************ 00:26:46.881 18:21:39 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:26:46.881 18:21:39 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:26:46.881 18:21:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:46.881 18:21:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:26:46.881 18:21:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:46.881 18:21:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:26:46.881 18:21:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:46.881 18:21:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:46.881 rmmod nvme_tcp 00:26:46.881 rmmod nvme_fabrics 00:26:46.881 rmmod nvme_keyring 00:26:46.881 18:21:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:46.881 18:21:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:26:46.881 18:21:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:26:46.881 18:21:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 3550229 ']' 00:26:46.881 18:21:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 3550229 00:26:46.881 18:21:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 3550229 ']' 00:26:46.882 18:21:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 3550229 00:26:46.882 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3550229) - No such process 00:26:46.882 18:21:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 3550229 is not found' 00:26:46.882 Process with pid 3550229 is not found 00:26:46.882 18:21:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:46.882 18:21:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:46.882 18:21:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:46.882 18:21:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:46.882 18:21:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:46.882 18:21:39 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:46.882 18:21:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:46.882 18:21:39 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:49.412 18:21:41 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:49.412 00:26:49.412 real 0m41.912s 00:26:49.412 user 1m5.992s 00:26:49.412 sys 0m13.721s 00:26:49.412 18:21:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:49.412 18:21:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:49.412 
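nvmftestfini, traced above, unwinds the host stack in a fixed order: flush dirty data, unload the kernel initiator modules (the bare rmmod lines are modprobe's verbose output, with nvme_tcp pulling nvme_fabrics and nvme_keyring out behind it), kill the target if it is still up, and only then dismantle the network namespace. Condensed into a sketch using the names nvmf/common.sh uses:

    sync
    modprobe -v -r nvme-tcp        # cascades: rmmod nvme_tcp, nvme_fabrics, nvme_keyring
    modprobe -v -r nvme-fabrics
    killprocess "$nvmfpid"         # may fail harmlessly, as the 'No such process' line shows
    remove_spdk_ns                 # deletes cvl_0_0_ns_spdk; the stray initiator IP is flushed next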
************************************ 00:26:49.412 END TEST nvmf_digest 00:26:49.412 ************************************ 00:26:49.412 18:21:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:26:49.412 18:21:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:26:49.412 18:21:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:26:49.412 18:21:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:26:49.412 18:21:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:49.412 18:21:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:49.412 18:21:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.412 ************************************ 00:26:49.412 START TEST nvmf_bdevperf 00:26:49.412 ************************************ 00:26:49.412 18:21:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:26:49.412 * Looking for test storage... 00:26:49.412 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:49.412 18:21:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:49.412 18:21:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:26:49.412 18:21:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:49.412 18:21:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:49.412 18:21:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:49.412 18:21:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:49.412 18:21:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:49.412 18:21:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:49.412 18:21:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:49.412 18:21:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:49.412 18:21:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:49.412 18:21:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:49.412 18:21:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:26:49.412 18:21:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:26:49.412 18:21:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:49.412 18:21:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:49.412 18:21:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:49.412 18:21:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:49.412 18:21:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:49.412 18:21:42 
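Sourcing nvmf/common.sh at the top of the bdevperf run mints a fresh host identity: nvme gen-hostnqn emits a UUID-flavoured NQN that becomes NVME_HOSTNQN, and the UUID portion doubles as NVME_HOSTID. A sketch (the parameter expansion is illustrative; common.sh derives the ID its own way):

    NVME_HOSTNQN=$(nvme gen-hostnqn)       # nqn.2014-08.org.nvmexpress:uuid:<uuid>, as logged above
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}    # 803833e2-2ada-e911-906e-0017a4403562 on this node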
nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:49.412 18:21:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:49.412 18:21:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:49.412 18:21:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.412 18:21:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.413 18:21:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.413 18:21:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:26:49.413 18:21:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.413 18:21:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:26:49.413 18:21:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:49.413 18:21:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:49.413 18:21:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:49.413 18:21:42 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:49.413 18:21:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:49.413 18:21:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:49.413 18:21:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:49.413 18:21:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:49.413 18:21:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:49.413 18:21:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:49.413 18:21:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:26:49.413 18:21:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:49.413 18:21:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:49.413 18:21:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:49.413 18:21:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:49.413 18:21:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:49.413 18:21:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:49.413 18:21:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:49.413 18:21:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:49.413 18:21:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:49.413 18:21:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:49.413 18:21:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:26:49.413 18:21:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:54.674 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:54.674 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:26:54.674 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:54.674 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:54.674 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:54.674 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:54.674 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:54.674 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:26:54.674 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:54.674 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:26:54.674 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:26:54.674 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:26:54.674 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:26:54.674 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:26:54.674 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # 
local -ga mlx 00:26:54.674 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:54.674 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:54.674 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:54.674 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:54.674 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:54.674 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:54.674 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:54.675 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:54.675 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:54.675 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:54.675 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:54.675 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:54.675 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:54.675 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:54.675 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:54.675 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:54.675 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:54.675 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:54.675 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:54.675 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:54.675 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:54.675 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:54.675 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:54.675 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:54.675 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:54.675 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:54.675 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:54.675 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:54.675 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:54.675 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:54.675 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:54.675 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:54.675 18:21:47 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:54.675 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:54.675 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:54.675 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:54.675 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:54.675 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:54.675 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:54.675 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:54.675 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:54.675 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:54.675 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:54.675 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:54.675 Found net devices under 0000:86:00.0: cvl_0_0 00:26:54.675 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:54.675 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:54.675 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:54.675 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:54.675 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:54.675 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:54.675 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:54.675 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:54.675 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:54.675 Found net devices under 0000:86:00.1: cvl_0_1 00:26:54.675 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:54.675 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:54.675 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:26:54.675 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:54.675 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:54.675 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:54.675 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:54.675 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:54.675 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:54.675 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:54.675 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@236 -- # 
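The device scan above is driven by SPDK_TEST_NVMF_NICS=e810 from the job config: gather_supported_nvmf_pci_devs matches the PCI bus against a table of NIC IDs, keeps only Intel E810 parts (vendor 0x8086, device 0x159b), and resolves each function to its net device through sysfs. Outside the harness the same two lookups could be done by hand; this is only an illustrative equivalent, not what common.sh actually runs:

    lspci -d 8086:159b                            # 0000:86:00.0 and 0000:86:00.1 on this rig
    ls /sys/bus/pci/devices/0000:86:00.0/net/     # -> cvl_0_0 (and :00.1 -> cvl_0_1)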
NVMF_TARGET_INTERFACE=cvl_0_0 00:26:54.675 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:54.675 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:54.675 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:54.675 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:54.675 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:54.675 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:54.675 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:54.675 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:54.675 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:54.675 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:54.675 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:54.675 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:54.675 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:54.675 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:54.675 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:54.675 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:54.675 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:26:54.675 00:26:54.675 --- 10.0.0.2 ping statistics --- 00:26:54.675 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:54.675 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:26:54.675 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:54.675 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:54.675 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.061 ms 00:26:54.675 00:26:54.675 --- 10.0.0.1 ping statistics --- 00:26:54.675 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:54.675 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:26:54.675 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:54.675 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:26:54.675 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:54.675 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:54.675 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:54.675 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:54.675 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:54.675 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:54.675 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:54.933 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:26:54.933 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:26:54.933 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:54.933 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:54.933 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:54.933 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=3556500 00:26:54.933 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 3556500 00:26:54.933 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:54.933 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 3556500 ']' 00:26:54.933 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:54.933 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:54.933 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:54.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:54.933 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:54.933 18:21:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:54.933 [2024-07-24 18:21:47.817077] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 
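With NET_TYPE=phy both E810 ports sit in one machine, so nvmf_tcp_init fakes a two-host topology: the target port cvl_0_0 is moved into its own network namespace with 10.0.0.2, the initiator port cvl_0_1 stays in the root namespace as 10.0.0.1, and both directions are pinged before any NVMe traffic is attempted. The wiring, condensed from the trace above:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2    # 0.181 ms initiator to target; the 0.061 ms reply above is the reverse path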
00:26:54.933 [2024-07-24 18:21:47.817124] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:26:54.933 EAL: No free 2048 kB hugepages reported on node 1
00:26:54.933 [2024-07-24 18:21:47.875063] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:26:54.933 [2024-07-24 18:21:47.953500] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:26:54.933 [2024-07-24 18:21:47.953535] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:26:54.933 [2024-07-24 18:21:47.953542] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:26:54.933 [2024-07-24 18:21:47.953548] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:26:54.933 [2024-07-24 18:21:47.953553] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:26:54.933 [2024-07-24 18:21:47.953652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:26:54.933 [2024-07-24 18:21:47.953758] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:26:54.933 [2024-07-24 18:21:47.953759] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:26:55.866 18:21:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:26:55.866 18:21:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0
00:26:55.866 18:21:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:26:55.866 18:21:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable
00:26:55.866 18:21:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:26:55.866 18:21:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:26:55.866 18:21:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:26:55.866 18:21:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:55.866 18:21:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:26:55.866 [2024-07-24 18:21:48.660302] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:26:55.866 18:21:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:55.866 18:21:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:26:55.866 18:21:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:55.866 18:21:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:26:55.866 Malloc0
00:26:55.866 18:21:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:55.866 18:21:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:26:55.866 18:21:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:55.866 18:21:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:26:55.866 18:21:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
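One detail worth decoding from the startup above: nvmf_tgt was launched with -m 0xE, and 0xE is binary 1110, which selects cores 1, 2, and 3. That is exactly why three reactors report in on those cores, and why bit 0 (core 0) is left free for the bdevperf host process started later with -c 0x1. A throwaway helper along these lines makes such masks readable; it is illustrative only, not part of the test suite:

    #!/usr/bin/env bash
    # Sketch: decode an SPDK core mask (e.g. 0xE) into the cores its reactors run on.
    mask=${1:-0xE}
    for core in {0..63}; do
        if (( (mask >> core) & 1 )); then
            echo "reactor would start on core $core"
        fi
    done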
00:26:55.866 18:21:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:26:55.866 18:21:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:55.866 18:21:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:26:55.866 18:21:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:55.866 18:21:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:26:55.866 18:21:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:55.866 18:21:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:26:55.866 [2024-07-24 18:21:48.731207] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:26:55.866 18:21:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:55.866 18:21:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1
00:26:55.866 18:21:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json
00:26:55.866 18:21:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=()
00:26:55.866 18:21:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config
00:26:55.866 18:21:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:26:55.866 18:21:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:26:55.866 {
00:26:55.866   "params": {
00:26:55.866     "name": "Nvme$subsystem",
00:26:55.866     "trtype": "$TEST_TRANSPORT",
00:26:55.866     "traddr": "$NVMF_FIRST_TARGET_IP",
00:26:55.866     "adrfam": "ipv4",
00:26:55.866     "trsvcid": "$NVMF_PORT",
00:26:55.866     "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:26:55.866     "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:26:55.866     "hdgst": ${hdgst:-false},
00:26:55.866     "ddgst": ${ddgst:-false}
00:26:55.866   },
00:26:55.866   "method": "bdev_nvme_attach_controller"
00:26:55.866 }
00:26:55.866 EOF
00:26:55.866 )")
00:26:55.866 18:21:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat
00:26:55.866 18:21:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq .
00:26:55.866 18:21:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=,
00:26:55.866 18:21:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:26:55.866   "params": {
00:26:55.866     "name": "Nvme1",
00:26:55.866     "trtype": "tcp",
00:26:55.866     "traddr": "10.0.0.2",
00:26:55.866     "adrfam": "ipv4",
00:26:55.866     "trsvcid": "4420",
00:26:55.866     "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:26:55.866     "hostnqn": "nqn.2016-06.io.spdk:host1",
00:26:55.866     "hdgst": false,
00:26:55.866     "ddgst": false
00:26:55.866   },
00:26:55.866   "method": "bdev_nvme_attach_controller"
00:26:55.866 }'
00:26:55.866 [2024-07-24 18:21:48.779833] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization...
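The four rpc_cmd calls above are the entire target-side provisioning: create the TCP transport, back it with a 64 MiB malloc bdev, expose that bdev as a namespace of cnode1 (a subsystem that accepts any host, serial SPDK00000000000001), and open a listener on 10.0.0.2:4420. Outside the harness the same sequence can be replayed against a running nvmf_tgt with SPDK's scripts/rpc.py; the arguments below are copied from the log, while the rpc.py path is an assumption about where your SPDK checkout lives:

    #!/usr/bin/env bash
    # Sketch: replay the target-side provisioning against a running nvmf_tgt.
    set -e
    RPC="./scripts/rpc.py"   # assumption: run from the root of an SPDK checkout
    $RPC nvmf_create_transport -t tcp -o -u 8192   # same opts as NVMF_TRANSPORT_OPTS above
    $RPC bdev_malloc_create 64 512 -b Malloc0      # 64 MiB bdev with 512-byte blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001  # -a: allow any host
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420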
00:26:55.866 [2024-07-24 18:21:48.779879] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3556603 ]
00:26:55.866 EAL: No free 2048 kB hugepages reported on node 1
00:26:55.866 [2024-07-24 18:21:48.833871] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:55.866 [2024-07-24 18:21:48.907080] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:26:56.429 Running I/O for 1 seconds...
00:26:57.386
00:26:57.386                                              Latency(us)
00:26:57.386 Device Information          : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:26:57.386 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:26:57.386 Verification LBA range: start 0x0 length 0x4000
00:26:57.386 Nvme1n1                     :       1.01   10947.61      42.76      0.00     0.00   11650.19    2496.61   13044.78
00:26:57.386 ===================================================================================================================
00:26:57.386 Total                       :              10947.61      42.76      0.00     0.00   11650.19    2496.61   13044.78
00:26:57.386 18:21:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3556864
00:26:57.386 18:21:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3
00:26:57.386 18:21:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f
00:26:57.386 18:21:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json
00:26:57.386 18:21:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=()
00:26:57.386 18:21:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config
00:26:57.386 18:21:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:26:57.386 18:21:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:26:57.386 {
00:26:57.386   "params": {
00:26:57.386     "name": "Nvme$subsystem",
00:26:57.386     "trtype": "$TEST_TRANSPORT",
00:26:57.386     "traddr": "$NVMF_FIRST_TARGET_IP",
00:26:57.386     "adrfam": "ipv4",
00:26:57.386     "trsvcid": "$NVMF_PORT",
00:26:57.386     "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:26:57.386     "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:26:57.386     "hdgst": ${hdgst:-false},
00:26:57.386     "ddgst": ${ddgst:-false}
00:26:57.386   },
00:26:57.386   "method": "bdev_nvme_attach_controller"
00:26:57.386 }
00:26:57.386 EOF
00:26:57.386 )")
00:26:57.386 18:21:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat
00:26:57.386 18:21:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq .
00:26:57.386 18:21:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=,
00:26:57.386 18:21:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:26:57.386   "params": {
00:26:57.386     "name": "Nvme1",
00:26:57.386     "trtype": "tcp",
00:26:57.386     "traddr": "10.0.0.2",
00:26:57.386     "adrfam": "ipv4",
00:26:57.386     "trsvcid": "4420",
00:26:57.386     "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:26:57.386     "hostnqn": "nqn.2016-06.io.spdk:host1",
00:26:57.386     "hdgst": false,
00:26:57.386     "ddgst": false
00:26:57.386   },
00:26:57.386   "method": "bdev_nvme_attach_controller"
00:26:57.386 }'
00:26:57.386 [2024-07-24 18:21:50.462847] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization...
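Two things are worth pulling out of this first one-second pass. The numbers are self-consistent: 10947.61 IOPS at 4096 bytes per I/O is 10947.61 x 4096 / 2^20 ≈ 42.76 MiB/s, matching the MiB/s column. And --json /dev/fd/62 (then /dev/fd/63 for the second run) is a file-descriptor trick: the output of gen_nvmf_target_json reaches bdevperf through process substitution instead of a temp file. A standalone sketch of an equivalent invocation follows; the outer subsystems/bdev wrapper is the standard SPDK JSON-config shape that, to the best of this editor's knowledge, gen_nvmf_target_json emits around the fragment printed above, and the paths assume an SPDK build tree:

    #!/usr/bin/env bash
    # Sketch: hand bdevperf its NVMe-oF controller as an inline JSON config,
    # mirroring the --json /dev/fd/NN invocation in the trace.
    config='{
      "subsystems": [ {
        "subsystem": "bdev",
        "config": [ {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
            "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false, "ddgst": false
          }
        } ]
      } ]
    }'
    # Process substitution surfaces the config as a /dev/fd/NN path, as in the trace.
    ./build/examples/bdevperf --json <(printf '%s\n' "$config") -q 128 -o 4096 -w verify -t 1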
00:26:57.386 [2024-07-24 18:21:50.462895] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3556864 ] 00:26:57.645 EAL: No free 2048 kB hugepages reported on node 1 00:26:57.645 [2024-07-24 18:21:50.517846] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:57.645 [2024-07-24 18:21:50.590474] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:57.904 Running I/O for 15 seconds... 00:27:00.439 18:21:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3556500 00:27:00.439 18:21:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:27:00.439 [2024-07-24 18:21:53.433317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:104248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.439 [2024-07-24 18:21:53.433356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.439 [2024-07-24 18:21:53.433373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:104256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.439 [2024-07-24 18:21:53.433381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.439 [2024-07-24 18:21:53.433390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:104264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.439 [2024-07-24 18:21:53.433397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.439 [2024-07-24 18:21:53.433407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:104272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.439 [2024-07-24 18:21:53.433414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.439 [2024-07-24 18:21:53.433422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:104280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.439 [2024-07-24 18:21:53.433429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.439 [2024-07-24 18:21:53.433437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:104288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.439 [2024-07-24 18:21:53.433444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.439 [2024-07-24 18:21:53.433452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:104296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.439 [2024-07-24 18:21:53.433458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.439 [2024-07-24 18:21:53.433467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:104304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.439 [2024-07-24 18:21:53.433473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.439 [2024-07-24 18:21:53.433481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:104312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.439 [2024-07-24 18:21:53.433487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.439 [2024-07-24 18:21:53.433500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:104320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.439 [2024-07-24 18:21:53.433506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.439 [2024-07-24 18:21:53.433515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:104328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.439 [2024-07-24 18:21:53.433521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.439 [2024-07-24 18:21:53.433529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:104336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.439 [2024-07-24 18:21:53.433536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.439 [2024-07-24 18:21:53.433546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:104344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.439 [2024-07-24 18:21:53.433554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.439 [2024-07-24 18:21:53.433563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:104352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.439 [2024-07-24 18:21:53.433570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.439 [2024-07-24 18:21:53.433578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:104360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.439 [2024-07-24 18:21:53.433585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.439 [2024-07-24 18:21:53.433593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:105136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.439 [2024-07-24 18:21:53.433600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.439 [2024-07-24 18:21:53.433609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:105144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.439 [2024-07-24 18:21:53.433615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.439 [2024-07-24 18:21:53.433625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:105152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.439 [2024-07-24 18:21:53.433634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.439 [2024-07-24 18:21:53.433643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:105160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.439 [2024-07-24 18:21:53.433650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.439 [2024-07-24 18:21:53.433660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:105168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.439 [2024-07-24 18:21:53.433667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.439 [2024-07-24 18:21:53.433676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:105176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.439 [2024-07-24 18:21:53.433685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.439 [2024-07-24 18:21:53.433695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:105184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.439 [2024-07-24 18:21:53.433702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.439 [2024-07-24 18:21:53.433711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:105192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.439 [2024-07-24 18:21:53.433719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.439 [2024-07-24 18:21:53.433727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:105200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.439 [2024-07-24 18:21:53.433734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.439 [2024-07-24 18:21:53.433742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:104368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.439 [2024-07-24 18:21:53.433750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.439 [2024-07-24 18:21:53.433758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:105208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.439 [2024-07-24 18:21:53.433766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.439 [2024-07-24 18:21:53.433774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:105216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.439 [2024-07-24 18:21:53.433780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.439 [2024-07-24 18:21:53.433788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:105224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.439 [2024-07-24 18:21:53.433794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:00.439 [2024-07-24 18:21:53.433801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:105232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.439 [2024-07-24 18:21:53.433807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.439 [2024-07-24 18:21:53.433815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:105240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.439 [2024-07-24 18:21:53.433821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.439 [2024-07-24 18:21:53.433829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:105248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.439 [2024-07-24 18:21:53.433835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.439 [2024-07-24 18:21:53.433843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:104376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.439 [2024-07-24 18:21:53.433849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.439 [2024-07-24 18:21:53.433857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:104384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.439 [2024-07-24 18:21:53.433863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.439 [2024-07-24 18:21:53.433871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:104392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.439 [2024-07-24 18:21:53.433878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.439 [2024-07-24 18:21:53.433886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:104400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.439 [2024-07-24 18:21:53.433892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.439 [2024-07-24 18:21:53.433900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:104408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.439 [2024-07-24 18:21:53.433906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.439 [2024-07-24 18:21:53.433913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:104416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.439 [2024-07-24 18:21:53.433919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.439 [2024-07-24 18:21:53.433927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:104424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.439 [2024-07-24 18:21:53.433933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.439 [2024-07-24 
18:21:53.433942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:104432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.439 [2024-07-24 18:21:53.433948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.439 [2024-07-24 18:21:53.433955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:104440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.439 [2024-07-24 18:21:53.433961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.439 [2024-07-24 18:21:53.433969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:104448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.439 [2024-07-24 18:21:53.433975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.439 [2024-07-24 18:21:53.433983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:104456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.439 [2024-07-24 18:21:53.433989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.439 [2024-07-24 18:21:53.433996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:104464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.439 [2024-07-24 18:21:53.434002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.439 [2024-07-24 18:21:53.434010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:104472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.439 [2024-07-24 18:21:53.434016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.439 [2024-07-24 18:21:53.434023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:104480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.439 [2024-07-24 18:21:53.434030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.439 [2024-07-24 18:21:53.434038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:104488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.439 [2024-07-24 18:21:53.434044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.439 [2024-07-24 18:21:53.434052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:104496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.439 [2024-07-24 18:21:53.434058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.439 [2024-07-24 18:21:53.434065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:104504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.439 [2024-07-24 18:21:53.434071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.439 [2024-07-24 18:21:53.434079] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:104512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.439 [2024-07-24 18:21:53.434085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.439 [2024-07-24 18:21:53.434092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:104520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.439 [2024-07-24 18:21:53.434099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.439 [2024-07-24 18:21:53.434107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:104528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.439 [2024-07-24 18:21:53.434115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.439 [2024-07-24 18:21:53.434123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:104536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.439 [2024-07-24 18:21:53.434129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.439 [2024-07-24 18:21:53.434137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:104544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.439 [2024-07-24 18:21:53.434144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.439 [2024-07-24 18:21:53.434151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:104552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.439 [2024-07-24 18:21:53.434157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.439 [2024-07-24 18:21:53.434165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:104560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.439 [2024-07-24 18:21:53.434171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.439 [2024-07-24 18:21:53.434179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:104568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.439 [2024-07-24 18:21:53.434185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.439 [2024-07-24 18:21:53.434192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:104576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.439 [2024-07-24 18:21:53.434198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.439 [2024-07-24 18:21:53.434206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:104584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.439 [2024-07-24 18:21:53.434212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.439 [2024-07-24 18:21:53.434220] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:26 nsid:1 lba:104592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.439 [2024-07-24 18:21:53.434226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.439 [2024-07-24 18:21:53.434233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:104600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.439 [2024-07-24 18:21:53.434239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.439 [2024-07-24 18:21:53.434247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:104608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.439 [2024-07-24 18:21:53.434253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.439 [2024-07-24 18:21:53.434261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:104616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.439 [2024-07-24 18:21:53.434267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.439 [2024-07-24 18:21:53.434275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:104624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.439 [2024-07-24 18:21:53.434281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.439 [2024-07-24 18:21:53.434290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:104632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.439 [2024-07-24 18:21:53.434297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.439 [2024-07-24 18:21:53.434305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:104640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.439 [2024-07-24 18:21:53.434311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.439 [2024-07-24 18:21:53.434319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:104648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.439 [2024-07-24 18:21:53.434326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.439 [2024-07-24 18:21:53.434333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:104656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.439 [2024-07-24 18:21:53.434339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.439 [2024-07-24 18:21:53.434347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:104664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.439 [2024-07-24 18:21:53.434353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.439 [2024-07-24 18:21:53.434361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 
nsid:1 lba:104672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.439 [2024-07-24 18:21:53.434367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.439 [2024-07-24 18:21:53.434375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:104680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.440 [2024-07-24 18:21:53.434381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.440 [2024-07-24 18:21:53.434388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:104688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.440 [2024-07-24 18:21:53.434394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.440 [2024-07-24 18:21:53.434402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:104696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.440 [2024-07-24 18:21:53.434408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.440 [2024-07-24 18:21:53.434416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:104704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.440 [2024-07-24 18:21:53.434422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.440 [2024-07-24 18:21:53.434430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:104712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.440 [2024-07-24 18:21:53.434436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.440 [2024-07-24 18:21:53.434444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:104720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.440 [2024-07-24 18:21:53.434450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.440 [2024-07-24 18:21:53.434457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:104728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.440 [2024-07-24 18:21:53.434465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.440 [2024-07-24 18:21:53.434472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:104736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.440 [2024-07-24 18:21:53.434479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.440 [2024-07-24 18:21:53.434486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:104744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.440 [2024-07-24 18:21:53.434596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.440 [2024-07-24 18:21:53.434605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:104752 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.440 [2024-07-24 18:21:53.434611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.440 [2024-07-24 18:21:53.434618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:104760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.440 [2024-07-24 18:21:53.434624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.440 [2024-07-24 18:21:53.434632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:104768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.440 [2024-07-24 18:21:53.434639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.440 [2024-07-24 18:21:53.434647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:104776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.440 [2024-07-24 18:21:53.434653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.440 [2024-07-24 18:21:53.434661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:104784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.440 [2024-07-24 18:21:53.434666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.440 [2024-07-24 18:21:53.434674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:104792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.440 [2024-07-24 18:21:53.434680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.440 [2024-07-24 18:21:53.434688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:104800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.440 [2024-07-24 18:21:53.434693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.440 [2024-07-24 18:21:53.434701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:104808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.440 [2024-07-24 18:21:53.434707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.440 [2024-07-24 18:21:53.434715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:104816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.440 [2024-07-24 18:21:53.434721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.440 [2024-07-24 18:21:53.434728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:104824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.440 [2024-07-24 18:21:53.434734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.440 [2024-07-24 18:21:53.434742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:104832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:27:00.440 [2024-07-24 18:21:53.434750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.440 [2024-07-24 18:21:53.434758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:104840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.440 [2024-07-24 18:21:53.434764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.440 [2024-07-24 18:21:53.434772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:104848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.440 [2024-07-24 18:21:53.434777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.440 [2024-07-24 18:21:53.434785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:104856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.440 [2024-07-24 18:21:53.434792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.440 [2024-07-24 18:21:53.434800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:104864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.440 [2024-07-24 18:21:53.434806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.440 [2024-07-24 18:21:53.434814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:104872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.440 [2024-07-24 18:21:53.434820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.440 [2024-07-24 18:21:53.434827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:104880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.440 [2024-07-24 18:21:53.434833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.440 [2024-07-24 18:21:53.434841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:104888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.440 [2024-07-24 18:21:53.434847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.440 [2024-07-24 18:21:53.434854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:104896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.440 [2024-07-24 18:21:53.434861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.440 [2024-07-24 18:21:53.434869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:104904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.440 [2024-07-24 18:21:53.434874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.440 [2024-07-24 18:21:53.434882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:104912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.440 [2024-07-24 
18:21:53.434888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.440 [2024-07-24 18:21:53.434895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:104920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.440 [2024-07-24 18:21:53.434901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.440 [2024-07-24 18:21:53.434909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:104928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.440 [2024-07-24 18:21:53.434915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.440 [2024-07-24 18:21:53.434924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:104936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.440 [2024-07-24 18:21:53.434930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.440 [2024-07-24 18:21:53.434938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:105256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.440 [2024-07-24 18:21:53.434944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.440 [2024-07-24 18:21:53.434952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:105264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.440 [2024-07-24 18:21:53.434958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.440 [2024-07-24 18:21:53.434966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:104944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.440 [2024-07-24 18:21:53.434972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.440 [2024-07-24 18:21:53.434980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:104952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.440 [2024-07-24 18:21:53.434986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.440 [2024-07-24 18:21:53.434993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:104960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.440 [2024-07-24 18:21:53.434999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.440 [2024-07-24 18:21:53.435007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:104968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.440 [2024-07-24 18:21:53.435013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.440 [2024-07-24 18:21:53.435021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:104976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.440 [2024-07-24 18:21:53.435027] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.440 [2024-07-24 18:21:53.435035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:104984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.440 [2024-07-24 18:21:53.435041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.440 [2024-07-24 18:21:53.435049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:104992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.440 [2024-07-24 18:21:53.435055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.440 [2024-07-24 18:21:53.435062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:105000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.440 [2024-07-24 18:21:53.435068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.440 [2024-07-24 18:21:53.435077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:105008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.440 [2024-07-24 18:21:53.435084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.440 [2024-07-24 18:21:53.435092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:105016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.440 [2024-07-24 18:21:53.435100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.440 [2024-07-24 18:21:53.435108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:105024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.440 [2024-07-24 18:21:53.435115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.440 [2024-07-24 18:21:53.435122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:105032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.440 [2024-07-24 18:21:53.435129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.440 [2024-07-24 18:21:53.435137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:105040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.440 [2024-07-24 18:21:53.435143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.440 [2024-07-24 18:21:53.435151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:105048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.440 [2024-07-24 18:21:53.435158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.440 [2024-07-24 18:21:53.435166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:105056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.440 [2024-07-24 18:21:53.435173] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.440 [2024-07-24 18:21:53.435181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:105064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.440 [2024-07-24 18:21:53.435187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.440 [2024-07-24 18:21:53.435196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:105072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.440 [2024-07-24 18:21:53.435202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.440 [2024-07-24 18:21:53.435210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:105080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.440 [2024-07-24 18:21:53.435216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.440 [2024-07-24 18:21:53.435224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:105088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.440 [2024-07-24 18:21:53.435230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.440 [2024-07-24 18:21:53.435238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:105096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.440 [2024-07-24 18:21:53.435244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.440 [2024-07-24 18:21:53.435252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:105104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.440 [2024-07-24 18:21:53.435258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.440 [2024-07-24 18:21:53.435265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:105112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.440 [2024-07-24 18:21:53.435272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.440 [2024-07-24 18:21:53.435284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:105120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.440 [2024-07-24 18:21:53.435291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.440 [2024-07-24 18:21:53.435299] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2717ee0 is same with the state(5) to be set 00:27:00.440 [2024-07-24 18:21:53.435306] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:00.440 [2024-07-24 18:21:53.435312] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:00.440 [2024-07-24 18:21:53.435318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:105128 len:8 PRP1 0x0 PRP2 0x0 00:27:00.440 [2024-07-24 18:21:53.435324] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.440 [2024-07-24 18:21:53.435366] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2717ee0 was disconnected and freed. reset controller. 00:27:00.440 [2024-07-24 18:21:53.438150] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.440 [2024-07-24 18:21:53.438199] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:00.440 [2024-07-24 18:21:53.438694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.440 [2024-07-24 18:21:53.438711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:00.440 [2024-07-24 18:21:53.438718] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:00.440 [2024-07-24 18:21:53.438891] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:00.440 [2024-07-24 18:21:53.439063] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.440 [2024-07-24 18:21:53.439071] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.440 [2024-07-24 18:21:53.439078] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.440 [2024-07-24 18:21:53.441834] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:00.440 [2024-07-24 18:21:53.451265] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.440 [2024-07-24 18:21:53.451725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.440 [2024-07-24 18:21:53.451771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:00.440 [2024-07-24 18:21:53.451794] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:00.440 [2024-07-24 18:21:53.452362] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:00.440 [2024-07-24 18:21:53.452543] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.440 [2024-07-24 18:21:53.452551] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.440 [2024-07-24 18:21:53.452558] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.440 [2024-07-24 18:21:53.455208] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:00.440 [2024-07-24 18:21:53.464131] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.440 [2024-07-24 18:21:53.464559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.440 [2024-07-24 18:21:53.464576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:00.440 [2024-07-24 18:21:53.464587] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:00.440 [2024-07-24 18:21:53.464758] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:00.440 [2024-07-24 18:21:53.464917] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.440 [2024-07-24 18:21:53.464924] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.440 [2024-07-24 18:21:53.464930] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.440 [2024-07-24 18:21:53.467622] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.440 [2024-07-24 18:21:53.477049] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.440 [2024-07-24 18:21:53.477481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.440 [2024-07-24 18:21:53.477502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:00.440 [2024-07-24 18:21:53.477509] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:00.440 [2024-07-24 18:21:53.477696] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:00.440 [2024-07-24 18:21:53.477876] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.440 [2024-07-24 18:21:53.477884] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.440 [2024-07-24 18:21:53.477890] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.440 [2024-07-24 18:21:53.480499] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.440 [2024-07-24 18:21:53.489861] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.440 [2024-07-24 18:21:53.490201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.440 [2024-07-24 18:21:53.490217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:00.440 [2024-07-24 18:21:53.490224] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:00.440 [2024-07-24 18:21:53.490390] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:00.440 [2024-07-24 18:21:53.490561] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.440 [2024-07-24 18:21:53.490570] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.440 [2024-07-24 18:21:53.490576] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.440 [2024-07-24 18:21:53.493119] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.440 [2024-07-24 18:21:53.502749] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.440 [2024-07-24 18:21:53.503043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.440 [2024-07-24 18:21:53.503059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:00.441 [2024-07-24 18:21:53.503066] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:00.441 [2024-07-24 18:21:53.503233] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:00.441 [2024-07-24 18:21:53.503399] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.441 [2024-07-24 18:21:53.503411] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.441 [2024-07-24 18:21:53.503416] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.441 [2024-07-24 18:21:53.506026] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.441 [2024-07-24 18:21:53.515532] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.441 [2024-07-24 18:21:53.515904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.441 [2024-07-24 18:21:53.515919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:00.441 [2024-07-24 18:21:53.515926] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:00.441 [2024-07-24 18:21:53.516098] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:00.441 [2024-07-24 18:21:53.516270] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.441 [2024-07-24 18:21:53.516278] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.441 [2024-07-24 18:21:53.516284] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.441 [2024-07-24 18:21:53.519025] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.699 [2024-07-24 18:21:53.528546] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.699 [2024-07-24 18:21:53.528913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.699 [2024-07-24 18:21:53.528928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:00.699 [2024-07-24 18:21:53.528935] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:00.699 [2024-07-24 18:21:53.529102] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:00.699 [2024-07-24 18:21:53.529271] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.699 [2024-07-24 18:21:53.529279] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.699 [2024-07-24 18:21:53.529285] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.699 [2024-07-24 18:21:53.531978] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.699 [2024-07-24 18:21:53.541345] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.699 [2024-07-24 18:21:53.541803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.699 [2024-07-24 18:21:53.541846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:00.699 [2024-07-24 18:21:53.541868] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:00.699 [2024-07-24 18:21:53.542446] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:00.699 [2024-07-24 18:21:53.542889] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.699 [2024-07-24 18:21:53.542897] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.699 [2024-07-24 18:21:53.542903] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.699 [2024-07-24 18:21:53.545514] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.699 [2024-07-24 18:21:53.554110] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.699 [2024-07-24 18:21:53.554524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.699 [2024-07-24 18:21:53.554540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:00.699 [2024-07-24 18:21:53.554547] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:00.699 [2024-07-24 18:21:53.554714] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:00.699 [2024-07-24 18:21:53.554880] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.699 [2024-07-24 18:21:53.554888] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.699 [2024-07-24 18:21:53.554894] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.699 [2024-07-24 18:21:53.557504] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.699 [2024-07-24 18:21:53.567050] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.699 [2024-07-24 18:21:53.567510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.699 [2024-07-24 18:21:53.567526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:00.699 [2024-07-24 18:21:53.567533] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:00.699 [2024-07-24 18:21:53.567699] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:00.699 [2024-07-24 18:21:53.567866] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.699 [2024-07-24 18:21:53.567874] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.699 [2024-07-24 18:21:53.567879] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.699 [2024-07-24 18:21:53.570495] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.699 [2024-07-24 18:21:53.579970] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.699 [2024-07-24 18:21:53.580435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.699 [2024-07-24 18:21:53.580477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:00.699 [2024-07-24 18:21:53.580512] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:00.700 [2024-07-24 18:21:53.580976] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:00.700 [2024-07-24 18:21:53.581144] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.700 [2024-07-24 18:21:53.581152] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.700 [2024-07-24 18:21:53.581157] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.700 [2024-07-24 18:21:53.583855] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.700 [2024-07-24 18:21:53.592813] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.700 [2024-07-24 18:21:53.593106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.700 [2024-07-24 18:21:53.593141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:00.700 [2024-07-24 18:21:53.593171] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:00.700 [2024-07-24 18:21:53.593727] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:00.700 [2024-07-24 18:21:53.593896] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.700 [2024-07-24 18:21:53.593904] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.700 [2024-07-24 18:21:53.593910] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.700 [2024-07-24 18:21:53.596545] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.700 [2024-07-24 18:21:53.605671] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.700 [2024-07-24 18:21:53.606056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.700 [2024-07-24 18:21:53.606072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:00.700 [2024-07-24 18:21:53.606079] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:00.700 [2024-07-24 18:21:53.606245] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:00.700 [2024-07-24 18:21:53.606413] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.700 [2024-07-24 18:21:53.606421] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.700 [2024-07-24 18:21:53.606426] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.700 [2024-07-24 18:21:53.609097] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.700 [2024-07-24 18:21:53.618606] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.700 [2024-07-24 18:21:53.618983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.700 [2024-07-24 18:21:53.618998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:00.700 [2024-07-24 18:21:53.619005] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:00.700 [2024-07-24 18:21:53.619171] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:00.700 [2024-07-24 18:21:53.619343] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.700 [2024-07-24 18:21:53.619351] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.700 [2024-07-24 18:21:53.619357] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.700 [2024-07-24 18:21:53.622035] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.700 [2024-07-24 18:21:53.631479] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.700 [2024-07-24 18:21:53.631918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.700 [2024-07-24 18:21:53.631934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:00.700 [2024-07-24 18:21:53.631942] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:00.700 [2024-07-24 18:21:53.632109] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:00.700 [2024-07-24 18:21:53.632277] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.700 [2024-07-24 18:21:53.632286] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.700 [2024-07-24 18:21:53.632295] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.700 [2024-07-24 18:21:53.634942] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.700 [2024-07-24 18:21:53.644443] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.700 [2024-07-24 18:21:53.644827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.700 [2024-07-24 18:21:53.644870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:00.700 [2024-07-24 18:21:53.644893] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:00.700 [2024-07-24 18:21:53.645472] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:00.700 [2024-07-24 18:21:53.646061] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.700 [2024-07-24 18:21:53.646069] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.700 [2024-07-24 18:21:53.646076] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.700 [2024-07-24 18:21:53.648685] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.700 [2024-07-24 18:21:53.657258] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.700 [2024-07-24 18:21:53.657616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.700 [2024-07-24 18:21:53.657632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:00.700 [2024-07-24 18:21:53.657640] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:00.700 [2024-07-24 18:21:53.657807] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:00.700 [2024-07-24 18:21:53.657973] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.700 [2024-07-24 18:21:53.657981] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.700 [2024-07-24 18:21:53.657988] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.700 [2024-07-24 18:21:53.660618] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.700 [2024-07-24 18:21:53.670217] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.700 [2024-07-24 18:21:53.670595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.700 [2024-07-24 18:21:53.670612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:00.700 [2024-07-24 18:21:53.670619] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:00.700 [2024-07-24 18:21:53.670796] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:00.700 [2024-07-24 18:21:53.670964] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.700 [2024-07-24 18:21:53.670971] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.700 [2024-07-24 18:21:53.670977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.700 [2024-07-24 18:21:53.673644] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.700 [2024-07-24 18:21:53.683176] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.700 [2024-07-24 18:21:53.683565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.700 [2024-07-24 18:21:53.683581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:00.700 [2024-07-24 18:21:53.683588] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:00.700 [2024-07-24 18:21:53.683754] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:00.700 [2024-07-24 18:21:53.683921] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.700 [2024-07-24 18:21:53.683929] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.700 [2024-07-24 18:21:53.683935] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.700 [2024-07-24 18:21:53.686683] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.700 [2024-07-24 18:21:53.696250] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.700 [2024-07-24 18:21:53.696592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.700 [2024-07-24 18:21:53.696608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:00.700 [2024-07-24 18:21:53.696615] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:00.700 [2024-07-24 18:21:53.696787] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:00.700 [2024-07-24 18:21:53.696964] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.700 [2024-07-24 18:21:53.696972] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.700 [2024-07-24 18:21:53.696978] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.700 [2024-07-24 18:21:53.699735] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.700 [2024-07-24 18:21:53.709304] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.700 [2024-07-24 18:21:53.709646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.700 [2024-07-24 18:21:53.709663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:00.700 [2024-07-24 18:21:53.709670] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:00.700 [2024-07-24 18:21:53.709841] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:00.700 [2024-07-24 18:21:53.710012] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.700 [2024-07-24 18:21:53.710020] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.700 [2024-07-24 18:21:53.710026] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.700 [2024-07-24 18:21:53.712757] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.700 [2024-07-24 18:21:53.722252] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.700 [2024-07-24 18:21:53.722654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.700 [2024-07-24 18:21:53.722669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:00.700 [2024-07-24 18:21:53.722676] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:00.700 [2024-07-24 18:21:53.722846] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:00.700 [2024-07-24 18:21:53.723011] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.700 [2024-07-24 18:21:53.723019] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.700 [2024-07-24 18:21:53.723025] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.700 [2024-07-24 18:21:53.725756] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.700 [2024-07-24 18:21:53.735108] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.700 [2024-07-24 18:21:53.735535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.700 [2024-07-24 18:21:53.735551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:00.700 [2024-07-24 18:21:53.735557] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:00.700 [2024-07-24 18:21:53.735729] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:00.700 [2024-07-24 18:21:53.735888] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.700 [2024-07-24 18:21:53.735896] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.700 [2024-07-24 18:21:53.735901] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.700 [2024-07-24 18:21:53.738518] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.700 [2024-07-24 18:21:53.747933] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.700 [2024-07-24 18:21:53.748269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.700 [2024-07-24 18:21:53.748311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:00.700 [2024-07-24 18:21:53.748333] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:00.700 [2024-07-24 18:21:53.748927] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:00.700 [2024-07-24 18:21:53.749457] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.700 [2024-07-24 18:21:53.749469] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.700 [2024-07-24 18:21:53.749478] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.700 [2024-07-24 18:21:53.753921] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.700 [2024-07-24 18:21:53.761646] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.700 [2024-07-24 18:21:53.762085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.700 [2024-07-24 18:21:53.762127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:00.700 [2024-07-24 18:21:53.762148] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:00.700 [2024-07-24 18:21:53.762587] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:00.700 [2024-07-24 18:21:53.762770] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.700 [2024-07-24 18:21:53.762778] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.700 [2024-07-24 18:21:53.762788] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.700 [2024-07-24 18:21:53.765701] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.700 [2024-07-24 18:21:53.774384] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.700 [2024-07-24 18:21:53.774746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.700 [2024-07-24 18:21:53.774762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:00.700 [2024-07-24 18:21:53.774769] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:00.700 [2024-07-24 18:21:53.774936] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:00.700 [2024-07-24 18:21:53.775102] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.700 [2024-07-24 18:21:53.775110] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.700 [2024-07-24 18:21:53.775115] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.700 [2024-07-24 18:21:53.777842] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.960 [2024-07-24 18:21:53.787391] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.960 [2024-07-24 18:21:53.787828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.960 [2024-07-24 18:21:53.787843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:00.960 [2024-07-24 18:21:53.787850] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:00.960 [2024-07-24 18:21:53.788017] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:00.960 [2024-07-24 18:21:53.788183] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.960 [2024-07-24 18:21:53.788191] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.960 [2024-07-24 18:21:53.788197] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.960 [2024-07-24 18:21:53.790923] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.960 [2024-07-24 18:21:53.800211] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.960 [2024-07-24 18:21:53.800637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.960 [2024-07-24 18:21:53.800653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:00.960 [2024-07-24 18:21:53.800660] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:00.960 [2024-07-24 18:21:53.800827] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:00.960 [2024-07-24 18:21:53.800993] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.960 [2024-07-24 18:21:53.801001] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.960 [2024-07-24 18:21:53.801007] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.960 [2024-07-24 18:21:53.803663] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.960 [2024-07-24 18:21:53.813042] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.960 [2024-07-24 18:21:53.813454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.960 [2024-07-24 18:21:53.813513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:00.960 [2024-07-24 18:21:53.813536] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:00.960 [2024-07-24 18:21:53.814115] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:00.960 [2024-07-24 18:21:53.814613] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.960 [2024-07-24 18:21:53.814621] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.960 [2024-07-24 18:21:53.814627] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.960 [2024-07-24 18:21:53.817215] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.960 [2024-07-24 18:21:53.825882] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.960 [2024-07-24 18:21:53.826286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.960 [2024-07-24 18:21:53.826301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:00.960 [2024-07-24 18:21:53.826307] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:00.960 [2024-07-24 18:21:53.826464] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:00.960 [2024-07-24 18:21:53.826650] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.960 [2024-07-24 18:21:53.826659] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.960 [2024-07-24 18:21:53.826664] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.960 [2024-07-24 18:21:53.829267] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.960 [2024-07-24 18:21:53.838594] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.960 [2024-07-24 18:21:53.839035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.960 [2024-07-24 18:21:53.839076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:00.960 [2024-07-24 18:21:53.839098] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:00.960 [2024-07-24 18:21:53.839642] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:00.960 [2024-07-24 18:21:53.839810] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.960 [2024-07-24 18:21:53.839818] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.960 [2024-07-24 18:21:53.839824] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.960 [2024-07-24 18:21:53.842429] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.960 [2024-07-24 18:21:53.851361] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.960 [2024-07-24 18:21:53.851765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.960 [2024-07-24 18:21:53.851781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:00.960 [2024-07-24 18:21:53.851787] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:00.960 [2024-07-24 18:21:53.851954] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:00.960 [2024-07-24 18:21:53.852125] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.960 [2024-07-24 18:21:53.852133] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.960 [2024-07-24 18:21:53.852139] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.960 [2024-07-24 18:21:53.854752] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.960 [2024-07-24 18:21:53.864133] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.960 [2024-07-24 18:21:53.864572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.960 [2024-07-24 18:21:53.864615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:00.960 [2024-07-24 18:21:53.864637] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:00.960 [2024-07-24 18:21:53.865216] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:00.960 [2024-07-24 18:21:53.865663] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.960 [2024-07-24 18:21:53.865671] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.960 [2024-07-24 18:21:53.865676] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.960 [2024-07-24 18:21:53.868301] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.960 [2024-07-24 18:21:53.876890] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.960 [2024-07-24 18:21:53.877256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.960 [2024-07-24 18:21:53.877272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:00.960 [2024-07-24 18:21:53.877278] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:00.960 [2024-07-24 18:21:53.877445] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:00.960 [2024-07-24 18:21:53.877617] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.960 [2024-07-24 18:21:53.877625] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.960 [2024-07-24 18:21:53.877631] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.960 [2024-07-24 18:21:53.880272] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.960 [2024-07-24 18:21:53.889848] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.960 [2024-07-24 18:21:53.890290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.960 [2024-07-24 18:21:53.890305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:00.960 [2024-07-24 18:21:53.890312] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:00.960 [2024-07-24 18:21:53.890478] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:00.960 [2024-07-24 18:21:53.890650] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.960 [2024-07-24 18:21:53.890658] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.960 [2024-07-24 18:21:53.890664] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.960 [2024-07-24 18:21:53.893365] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.960 [2024-07-24 18:21:53.902965] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.960 [2024-07-24 18:21:53.903384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.960 [2024-07-24 18:21:53.903400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:00.960 [2024-07-24 18:21:53.903407] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:00.960 [2024-07-24 18:21:53.903597] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:00.960 [2024-07-24 18:21:53.903777] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.960 [2024-07-24 18:21:53.903784] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.960 [2024-07-24 18:21:53.903790] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.960 [2024-07-24 18:21:53.906391] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.960 [2024-07-24 18:21:53.915772] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.960 [2024-07-24 18:21:53.916213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.961 [2024-07-24 18:21:53.916257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:00.961 [2024-07-24 18:21:53.916279] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:00.961 [2024-07-24 18:21:53.916776] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:00.961 [2024-07-24 18:21:53.916944] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.961 [2024-07-24 18:21:53.916952] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.961 [2024-07-24 18:21:53.916957] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.961 [2024-07-24 18:21:53.919563] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.961 [2024-07-24 18:21:53.928568] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.961 [2024-07-24 18:21:53.928967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.961 [2024-07-24 18:21:53.928982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:00.961 [2024-07-24 18:21:53.928989] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:00.961 [2024-07-24 18:21:53.929148] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:00.961 [2024-07-24 18:21:53.929305] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.961 [2024-07-24 18:21:53.929313] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.961 [2024-07-24 18:21:53.929318] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.961 [2024-07-24 18:21:53.931952] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.961 [2024-07-24 18:21:53.941356] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.961 [2024-07-24 18:21:53.941809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.961 [2024-07-24 18:21:53.941825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:00.961 [2024-07-24 18:21:53.941835] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:00.961 [2024-07-24 18:21:53.942002] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:00.961 [2024-07-24 18:21:53.942169] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.961 [2024-07-24 18:21:53.942177] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.961 [2024-07-24 18:21:53.942182] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.961 [2024-07-24 18:21:53.944940] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.961 [2024-07-24 18:21:53.954270] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.961 [2024-07-24 18:21:53.954698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.961 [2024-07-24 18:21:53.954714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:00.961 [2024-07-24 18:21:53.954732] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:00.961 [2024-07-24 18:21:53.954898] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:00.961 [2024-07-24 18:21:53.955064] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.961 [2024-07-24 18:21:53.955072] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.961 [2024-07-24 18:21:53.955078] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.961 [2024-07-24 18:21:53.957782] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.961 [2024-07-24 18:21:53.967326] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.961 [2024-07-24 18:21:53.967702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.961 [2024-07-24 18:21:53.967718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:00.961 [2024-07-24 18:21:53.967725] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:00.961 [2024-07-24 18:21:53.967896] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:00.961 [2024-07-24 18:21:53.968067] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.961 [2024-07-24 18:21:53.968076] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.961 [2024-07-24 18:21:53.968082] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.961 [2024-07-24 18:21:53.970794] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.961 [2024-07-24 18:21:53.980226] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.961 [2024-07-24 18:21:53.980635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.961 [2024-07-24 18:21:53.980651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:00.961 [2024-07-24 18:21:53.980658] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:00.961 [2024-07-24 18:21:53.980825] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:00.961 [2024-07-24 18:21:53.980991] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.961 [2024-07-24 18:21:53.981004] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.961 [2024-07-24 18:21:53.981010] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.961 [2024-07-24 18:21:53.983643] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.961 [2024-07-24 18:21:53.993165] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.961 [2024-07-24 18:21:53.993631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.961 [2024-07-24 18:21:53.993647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:00.961 [2024-07-24 18:21:53.993654] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:00.961 [2024-07-24 18:21:53.993820] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:00.961 [2024-07-24 18:21:53.993987] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.961 [2024-07-24 18:21:53.993995] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.961 [2024-07-24 18:21:53.994000] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.961 [2024-07-24 18:21:53.996688] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.961 [2024-07-24 18:21:54.005993] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.961 [2024-07-24 18:21:54.006407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.961 [2024-07-24 18:21:54.006449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:00.961 [2024-07-24 18:21:54.006471] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:00.961 [2024-07-24 18:21:54.006951] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:00.961 [2024-07-24 18:21:54.007119] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.961 [2024-07-24 18:21:54.007127] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.961 [2024-07-24 18:21:54.007132] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.961 [2024-07-24 18:21:54.009806] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.961 [2024-07-24 18:21:54.018884] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:00.961 [2024-07-24 18:21:54.019236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:00.961 [2024-07-24 18:21:54.019251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:00.961 [2024-07-24 18:21:54.019257] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:00.961 [2024-07-24 18:21:54.019415] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:00.961 [2024-07-24 18:21:54.019599] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:00.961 [2024-07-24 18:21:54.019607] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:00.961 [2024-07-24 18:21:54.019613] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:00.961 [2024-07-24 18:21:54.022216] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:00.961 [2024-07-24 18:21:54.031711] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.961 [2024-07-24 18:21:54.032139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.961 [2024-07-24 18:21:54.032172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:00.961 [2024-07-24 18:21:54.032196] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:00.961 [2024-07-24 18:21:54.032791] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:00.961 [2024-07-24 18:21:54.032964] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.961 [2024-07-24 18:21:54.032971] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.961 [2024-07-24 18:21:54.032977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.961 [2024-07-24 18:21:54.035651] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.220 [2024-07-24 18:21:54.044709] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.220 [2024-07-24 18:21:54.045142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.220 [2024-07-24 18:21:54.045157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:01.220 [2024-07-24 18:21:54.045163] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:01.220 [2024-07-24 18:21:54.045345] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:01.220 [2024-07-24 18:21:54.045516] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.220 [2024-07-24 18:21:54.045541] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.220 [2024-07-24 18:21:54.045547] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.220 [2024-07-24 18:21:54.048299] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.221 [2024-07-24 18:21:54.057443] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.221 [2024-07-24 18:21:54.057882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.221 [2024-07-24 18:21:54.057925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:01.221 [2024-07-24 18:21:54.057947] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:01.221 [2024-07-24 18:21:54.058536] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:01.221 [2024-07-24 18:21:54.059039] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.221 [2024-07-24 18:21:54.059047] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.221 [2024-07-24 18:21:54.059053] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.221 [2024-07-24 18:21:54.061625] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.221 [2024-07-24 18:21:54.070178] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.221 [2024-07-24 18:21:54.070609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.221 [2024-07-24 18:21:54.070625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:01.221 [2024-07-24 18:21:54.070631] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:01.221 [2024-07-24 18:21:54.070792] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:01.221 [2024-07-24 18:21:54.070949] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.221 [2024-07-24 18:21:54.070957] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.221 [2024-07-24 18:21:54.070962] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.221 [2024-07-24 18:21:54.073581] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.221 [2024-07-24 18:21:54.082952] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.221 [2024-07-24 18:21:54.083383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.221 [2024-07-24 18:21:54.083398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:01.221 [2024-07-24 18:21:54.083404] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:01.221 [2024-07-24 18:21:54.083587] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:01.221 [2024-07-24 18:21:54.083754] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.221 [2024-07-24 18:21:54.083762] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.221 [2024-07-24 18:21:54.083768] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.221 [2024-07-24 18:21:54.086471] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.221 [2024-07-24 18:21:54.095786] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.221 [2024-07-24 18:21:54.096225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.221 [2024-07-24 18:21:54.096266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:01.221 [2024-07-24 18:21:54.096288] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:01.221 [2024-07-24 18:21:54.096824] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:01.221 [2024-07-24 18:21:54.096992] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.221 [2024-07-24 18:21:54.096999] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.221 [2024-07-24 18:21:54.097005] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.221 [2024-07-24 18:21:54.099618] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.221 [2024-07-24 18:21:54.108567] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.221 [2024-07-24 18:21:54.109017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.221 [2024-07-24 18:21:54.109033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:01.221 [2024-07-24 18:21:54.109040] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:01.221 [2024-07-24 18:21:54.109206] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:01.221 [2024-07-24 18:21:54.109372] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.221 [2024-07-24 18:21:54.109380] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.221 [2024-07-24 18:21:54.109389] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.221 [2024-07-24 18:21:54.111999] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.221 [2024-07-24 18:21:54.121321] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.221 [2024-07-24 18:21:54.121756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.221 [2024-07-24 18:21:54.121800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:01.221 [2024-07-24 18:21:54.121821] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:01.221 [2024-07-24 18:21:54.122353] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:01.221 [2024-07-24 18:21:54.122526] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.221 [2024-07-24 18:21:54.122534] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.221 [2024-07-24 18:21:54.122540] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.221 [2024-07-24 18:21:54.125136] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.221 [2024-07-24 18:21:54.134074] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.221 [2024-07-24 18:21:54.134430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.221 [2024-07-24 18:21:54.134445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:01.221 [2024-07-24 18:21:54.134452] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:01.221 [2024-07-24 18:21:54.134637] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:01.221 [2024-07-24 18:21:54.134804] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.221 [2024-07-24 18:21:54.134811] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.221 [2024-07-24 18:21:54.134817] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.221 [2024-07-24 18:21:54.137421] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.221 [2024-07-24 18:21:54.146805] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.221 [2024-07-24 18:21:54.147232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.221 [2024-07-24 18:21:54.147264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:01.221 [2024-07-24 18:21:54.147288] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:01.221 [2024-07-24 18:21:54.147881] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:01.221 [2024-07-24 18:21:54.148141] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.221 [2024-07-24 18:21:54.148149] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.221 [2024-07-24 18:21:54.148156] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.221 [2024-07-24 18:21:54.150760] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.221 [2024-07-24 18:21:54.159520] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.221 [2024-07-24 18:21:54.159950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.221 [2024-07-24 18:21:54.159964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:01.221 [2024-07-24 18:21:54.159971] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:01.221 [2024-07-24 18:21:54.160128] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:01.221 [2024-07-24 18:21:54.160285] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.221 [2024-07-24 18:21:54.160293] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.221 [2024-07-24 18:21:54.160298] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.221 [2024-07-24 18:21:54.162914] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.221 [2024-07-24 18:21:54.172336] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.221 [2024-07-24 18:21:54.172780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.221 [2024-07-24 18:21:54.172796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:01.221 [2024-07-24 18:21:54.172803] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:01.221 [2024-07-24 18:21:54.172969] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:01.221 [2024-07-24 18:21:54.173135] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.221 [2024-07-24 18:21:54.173143] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.221 [2024-07-24 18:21:54.173149] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.222 [2024-07-24 18:21:54.175769] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.222 [2024-07-24 18:21:54.185188] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.222 [2024-07-24 18:21:54.185589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.222 [2024-07-24 18:21:54.185605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:01.222 [2024-07-24 18:21:54.185612] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:01.222 [2024-07-24 18:21:54.185769] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:01.222 [2024-07-24 18:21:54.185926] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.222 [2024-07-24 18:21:54.185933] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.222 [2024-07-24 18:21:54.185940] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.222 [2024-07-24 18:21:54.188528] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.222 [2024-07-24 18:21:54.197906] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.222 [2024-07-24 18:21:54.198259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.222 [2024-07-24 18:21:54.198275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:01.222 [2024-07-24 18:21:54.198281] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:01.222 [2024-07-24 18:21:54.198451] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:01.222 [2024-07-24 18:21:54.198647] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.222 [2024-07-24 18:21:54.198655] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.222 [2024-07-24 18:21:54.198661] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.222 [2024-07-24 18:21:54.201404] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.222 [2024-07-24 18:21:54.210918] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.222 [2024-07-24 18:21:54.211301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.222 [2024-07-24 18:21:54.211316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:01.222 [2024-07-24 18:21:54.211323] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:01.222 [2024-07-24 18:21:54.211500] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:01.222 [2024-07-24 18:21:54.211671] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.222 [2024-07-24 18:21:54.211679] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.222 [2024-07-24 18:21:54.211685] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.222 [2024-07-24 18:21:54.214360] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.222 [2024-07-24 18:21:54.223651] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.222 [2024-07-24 18:21:54.224052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.222 [2024-07-24 18:21:54.224067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:01.222 [2024-07-24 18:21:54.224073] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:01.222 [2024-07-24 18:21:54.224240] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:01.222 [2024-07-24 18:21:54.224406] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.222 [2024-07-24 18:21:54.224414] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.222 [2024-07-24 18:21:54.224419] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.222 [2024-07-24 18:21:54.227070] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.222 [2024-07-24 18:21:54.236428] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.222 [2024-07-24 18:21:54.236863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.222 [2024-07-24 18:21:54.236878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:01.222 [2024-07-24 18:21:54.236884] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:01.222 [2024-07-24 18:21:54.237042] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:01.222 [2024-07-24 18:21:54.237199] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.222 [2024-07-24 18:21:54.237207] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.222 [2024-07-24 18:21:54.237215] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.222 [2024-07-24 18:21:54.239833] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.222 [2024-07-24 18:21:54.249246] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.222 [2024-07-24 18:21:54.249573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.222 [2024-07-24 18:21:54.249588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:01.222 [2024-07-24 18:21:54.249594] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:01.222 [2024-07-24 18:21:54.249753] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:01.222 [2024-07-24 18:21:54.249910] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.222 [2024-07-24 18:21:54.249917] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.222 [2024-07-24 18:21:54.249923] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.222 [2024-07-24 18:21:54.252540] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.222 [2024-07-24 18:21:54.261957] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.222 [2024-07-24 18:21:54.262300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.222 [2024-07-24 18:21:54.262315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:01.222 [2024-07-24 18:21:54.262322] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:01.222 [2024-07-24 18:21:54.262488] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:01.222 [2024-07-24 18:21:54.262661] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.222 [2024-07-24 18:21:54.262669] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.222 [2024-07-24 18:21:54.262674] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.222 [2024-07-24 18:21:54.265279] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.222 [2024-07-24 18:21:54.274704] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.222 [2024-07-24 18:21:54.275126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.222 [2024-07-24 18:21:54.275142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:01.222 [2024-07-24 18:21:54.275149] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:01.222 [2024-07-24 18:21:54.275315] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:01.222 [2024-07-24 18:21:54.275481] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.222 [2024-07-24 18:21:54.275489] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.222 [2024-07-24 18:21:54.275502] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.222 [2024-07-24 18:21:54.278414] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.222 [2024-07-24 18:21:54.287497] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.222 [2024-07-24 18:21:54.287969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.222 [2024-07-24 18:21:54.288020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:01.222 [2024-07-24 18:21:54.288042] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:01.222 [2024-07-24 18:21:54.288570] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:01.222 [2024-07-24 18:21:54.288737] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.222 [2024-07-24 18:21:54.288745] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.222 [2024-07-24 18:21:54.288751] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.222 [2024-07-24 18:21:54.291417] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.222 [2024-07-24 18:21:54.300441] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.222 [2024-07-24 18:21:54.300864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.222 [2024-07-24 18:21:54.300880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:01.222 [2024-07-24 18:21:54.300887] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:01.222 [2024-07-24 18:21:54.301058] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:01.222 [2024-07-24 18:21:54.301229] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.222 [2024-07-24 18:21:54.301237] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.222 [2024-07-24 18:21:54.301243] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.480 [2024-07-24 18:21:54.303989] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.480 [2024-07-24 18:21:54.313213] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.480 [2024-07-24 18:21:54.313649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.480 [2024-07-24 18:21:54.313664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:01.480 [2024-07-24 18:21:54.313671] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:01.480 [2024-07-24 18:21:54.313829] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:01.480 [2024-07-24 18:21:54.313987] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.480 [2024-07-24 18:21:54.313994] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.480 [2024-07-24 18:21:54.313999] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.480 [2024-07-24 18:21:54.316618] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.480 [2024-07-24 18:21:54.326131] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.480 [2024-07-24 18:21:54.326569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.480 [2024-07-24 18:21:54.326611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:01.480 [2024-07-24 18:21:54.326633] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:01.480 [2024-07-24 18:21:54.327211] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:01.480 [2024-07-24 18:21:54.327524] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.480 [2024-07-24 18:21:54.327533] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.480 [2024-07-24 18:21:54.327539] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.480 [2024-07-24 18:21:54.330140] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.480 [2024-07-24 18:21:54.338985] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.480 [2024-07-24 18:21:54.339412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.480 [2024-07-24 18:21:54.339428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:01.480 [2024-07-24 18:21:54.339434] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:01.480 [2024-07-24 18:21:54.339617] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:01.480 [2024-07-24 18:21:54.339784] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.480 [2024-07-24 18:21:54.339792] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.480 [2024-07-24 18:21:54.339798] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.480 [2024-07-24 18:21:54.342400] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.480 [2024-07-24 18:21:54.351779] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.480 [2024-07-24 18:21:54.352235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.480 [2024-07-24 18:21:54.352250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:01.480 [2024-07-24 18:21:54.352257] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:01.480 [2024-07-24 18:21:54.352423] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:01.480 [2024-07-24 18:21:54.352595] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.480 [2024-07-24 18:21:54.352603] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.480 [2024-07-24 18:21:54.352609] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.480 [2024-07-24 18:21:54.355208] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.480 [2024-07-24 18:21:54.364595] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.480 [2024-07-24 18:21:54.365022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.480 [2024-07-24 18:21:54.365037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:01.480 [2024-07-24 18:21:54.365044] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:01.480 [2024-07-24 18:21:54.365210] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:01.480 [2024-07-24 18:21:54.365377] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.480 [2024-07-24 18:21:54.365384] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.480 [2024-07-24 18:21:54.365390] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.480 [2024-07-24 18:21:54.368082] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.480 [2024-07-24 18:21:54.377360] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.480 [2024-07-24 18:21:54.377787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.480 [2024-07-24 18:21:54.377803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:01.480 [2024-07-24 18:21:54.377809] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:01.480 [2024-07-24 18:21:54.377976] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:01.480 [2024-07-24 18:21:54.378142] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.480 [2024-07-24 18:21:54.378150] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.480 [2024-07-24 18:21:54.378156] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.480 [2024-07-24 18:21:54.380773] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.480 [2024-07-24 18:21:54.390079] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.480 [2024-07-24 18:21:54.390519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.480 [2024-07-24 18:21:54.390561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:01.480 [2024-07-24 18:21:54.390583] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:01.480 [2024-07-24 18:21:54.391114] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:01.480 [2024-07-24 18:21:54.391281] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.480 [2024-07-24 18:21:54.391289] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.480 [2024-07-24 18:21:54.391294] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.480 [2024-07-24 18:21:54.393908] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.480 [2024-07-24 18:21:54.402888] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.480 [2024-07-24 18:21:54.403291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.480 [2024-07-24 18:21:54.403307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:01.480 [2024-07-24 18:21:54.403313] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:01.480 [2024-07-24 18:21:54.403480] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:01.480 [2024-07-24 18:21:54.403671] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.480 [2024-07-24 18:21:54.403680] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.480 [2024-07-24 18:21:54.403686] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.480 [2024-07-24 18:21:54.406315] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.480 [2024-07-24 18:21:54.415690] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.480 [2024-07-24 18:21:54.416118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.480 [2024-07-24 18:21:54.416134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:01.480 [2024-07-24 18:21:54.416144] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:01.480 [2024-07-24 18:21:54.416310] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:01.480 [2024-07-24 18:21:54.416477] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.480 [2024-07-24 18:21:54.416485] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.480 [2024-07-24 18:21:54.416497] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.480 [2024-07-24 18:21:54.419101] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.480 [2024-07-24 18:21:54.428499] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.480 [2024-07-24 18:21:54.428919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.480 [2024-07-24 18:21:54.428935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:01.480 [2024-07-24 18:21:54.428942] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:01.480 [2024-07-24 18:21:54.429108] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:01.480 [2024-07-24 18:21:54.429275] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.480 [2024-07-24 18:21:54.429282] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.480 [2024-07-24 18:21:54.429288] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.480 [2024-07-24 18:21:54.431959] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.480 [2024-07-24 18:21:54.441212] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.480 [2024-07-24 18:21:54.441574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.480 [2024-07-24 18:21:54.441589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:01.480 [2024-07-24 18:21:54.441595] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:01.480 [2024-07-24 18:21:54.441762] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:01.480 [2024-07-24 18:21:54.441928] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.480 [2024-07-24 18:21:54.441935] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.480 [2024-07-24 18:21:54.441940] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.480 [2024-07-24 18:21:54.444645] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.480 [2024-07-24 18:21:54.453952] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.480 [2024-07-24 18:21:54.454377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.480 [2024-07-24 18:21:54.454393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:01.480 [2024-07-24 18:21:54.454400] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:01.480 [2024-07-24 18:21:54.454577] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:01.480 [2024-07-24 18:21:54.454749] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.480 [2024-07-24 18:21:54.454760] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.480 [2024-07-24 18:21:54.454765] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.480 [2024-07-24 18:21:54.457606] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.480 [2024-07-24 18:21:54.466970] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.480 [2024-07-24 18:21:54.467385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.481 [2024-07-24 18:21:54.467401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:01.481 [2024-07-24 18:21:54.467408] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:01.481 [2024-07-24 18:21:54.467585] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:01.481 [2024-07-24 18:21:54.467758] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.481 [2024-07-24 18:21:54.467766] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.481 [2024-07-24 18:21:54.467772] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.481 [2024-07-24 18:21:54.470446] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.481 [2024-07-24 18:21:54.479954] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.481 [2024-07-24 18:21:54.480363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.481 [2024-07-24 18:21:54.480379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:01.481 [2024-07-24 18:21:54.480385] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:01.481 [2024-07-24 18:21:54.480574] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:01.481 [2024-07-24 18:21:54.480746] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.481 [2024-07-24 18:21:54.480754] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.481 [2024-07-24 18:21:54.480760] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.481 [2024-07-24 18:21:54.483461] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.481 [2024-07-24 18:21:54.492707] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.481 [2024-07-24 18:21:54.493144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.481 [2024-07-24 18:21:54.493181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:01.481 [2024-07-24 18:21:54.493204] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:01.481 [2024-07-24 18:21:54.493796] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:01.481 [2024-07-24 18:21:54.493986] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.481 [2024-07-24 18:21:54.493994] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.481 [2024-07-24 18:21:54.494000] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.481 [2024-07-24 18:21:54.496606] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.481 [2024-07-24 18:21:54.505454] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.481 [2024-07-24 18:21:54.505890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.481 [2024-07-24 18:21:54.505933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:01.481 [2024-07-24 18:21:54.505955] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:01.481 [2024-07-24 18:21:54.506397] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:01.481 [2024-07-24 18:21:54.506579] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.481 [2024-07-24 18:21:54.506588] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.481 [2024-07-24 18:21:54.506594] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.481 [2024-07-24 18:21:54.509306] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.481 [2024-07-24 18:21:54.518337] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.481 [2024-07-24 18:21:54.518793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.481 [2024-07-24 18:21:54.518808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:01.481 [2024-07-24 18:21:54.518815] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:01.481 [2024-07-24 18:21:54.518981] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:01.481 [2024-07-24 18:21:54.519147] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.481 [2024-07-24 18:21:54.519155] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.481 [2024-07-24 18:21:54.519161] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.481 [2024-07-24 18:21:54.521783] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.481 [2024-07-24 18:21:54.531191] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.481 [2024-07-24 18:21:54.531624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.481 [2024-07-24 18:21:54.531639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:01.481 [2024-07-24 18:21:54.531646] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:01.481 [2024-07-24 18:21:54.531803] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:01.481 [2024-07-24 18:21:54.531961] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.481 [2024-07-24 18:21:54.531969] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.481 [2024-07-24 18:21:54.531975] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.481 [2024-07-24 18:21:54.534641] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.481 [2024-07-24 18:21:54.543913] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.481 [2024-07-24 18:21:54.544302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.481 [2024-07-24 18:21:54.544344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:01.481 [2024-07-24 18:21:54.544365] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:01.481 [2024-07-24 18:21:54.544975] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:01.481 [2024-07-24 18:21:54.545148] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.481 [2024-07-24 18:21:54.545156] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.481 [2024-07-24 18:21:54.545162] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.481 [2024-07-24 18:21:54.547793] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.481 [2024-07-24 18:21:54.556707] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.481 [2024-07-24 18:21:54.557130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.481 [2024-07-24 18:21:54.557145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:01.481 [2024-07-24 18:21:54.557152] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:01.481 [2024-07-24 18:21:54.557310] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:01.481 [2024-07-24 18:21:54.557467] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.481 [2024-07-24 18:21:54.557475] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.481 [2024-07-24 18:21:54.557480] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.481 [2024-07-24 18:21:54.560228] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.739 [2024-07-24 18:21:54.569648] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.739 [2024-07-24 18:21:54.570074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.739 [2024-07-24 18:21:54.570089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:01.739 [2024-07-24 18:21:54.570096] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:01.739 [2024-07-24 18:21:54.570262] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:01.739 [2024-07-24 18:21:54.570428] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.739 [2024-07-24 18:21:54.570436] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.739 [2024-07-24 18:21:54.570442] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.739 [2024-07-24 18:21:54.573130] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.739 [2024-07-24 18:21:54.582439] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.739 [2024-07-24 18:21:54.582870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.739 [2024-07-24 18:21:54.582885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:01.739 [2024-07-24 18:21:54.582892] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:01.739 [2024-07-24 18:21:54.583049] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:01.739 [2024-07-24 18:21:54.583206] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.739 [2024-07-24 18:21:54.583213] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.739 [2024-07-24 18:21:54.583222] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.739 [2024-07-24 18:21:54.585827] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.739 [2024-07-24 18:21:54.595207] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:01.739 [2024-07-24 18:21:54.595645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:01.740 [2024-07-24 18:21:54.595688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:01.740 [2024-07-24 18:21:54.595710] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:01.740 [2024-07-24 18:21:54.596294] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:01.740 [2024-07-24 18:21:54.596452] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:01.740 [2024-07-24 18:21:54.596459] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:01.740 [2024-07-24 18:21:54.596465] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:01.740 [2024-07-24 18:21:54.599209] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:01.740 [2024-07-24 18:21:54.608038] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:01.740 [2024-07-24 18:21:54.608471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:01.740 [2024-07-24 18:21:54.608487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:01.740 [2024-07-24 18:21:54.608499] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:01.740 [2024-07-24 18:21:54.609045] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:01.740 [2024-07-24 18:21:54.609240] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:01.740 [2024-07-24 18:21:54.609247] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:01.740 [2024-07-24 18:21:54.609253] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:01.740 [2024-07-24 18:21:54.611858] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:01.740 [2024-07-24 18:21:54.620856] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:01.740 [2024-07-24 18:21:54.621289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:01.740 [2024-07-24 18:21:54.621305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:01.740 [2024-07-24 18:21:54.621312] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:01.740 [2024-07-24 18:21:54.621479] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:01.740 [2024-07-24 18:21:54.621651] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:01.740 [2024-07-24 18:21:54.621660] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:01.740 [2024-07-24 18:21:54.621667] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:01.740 [2024-07-24 18:21:54.624329] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:01.740 [2024-07-24 18:21:54.633773] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:01.740 [2024-07-24 18:21:54.634141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:01.740 [2024-07-24 18:21:54.634156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:01.740 [2024-07-24 18:21:54.634162] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:01.740 [2024-07-24 18:21:54.634328] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:01.740 [2024-07-24 18:21:54.634499] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:01.740 [2024-07-24 18:21:54.634507] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:01.740 [2024-07-24 18:21:54.634513] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:01.740 [2024-07-24 18:21:54.637163] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:01.740 [2024-07-24 18:21:54.646679] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:01.740 [2024-07-24 18:21:54.647122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:01.740 [2024-07-24 18:21:54.647138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:01.740 [2024-07-24 18:21:54.647144] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:01.740 [2024-07-24 18:21:54.647311] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:01.740 [2024-07-24 18:21:54.647478] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:01.740 [2024-07-24 18:21:54.647485] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:01.740 [2024-07-24 18:21:54.647496] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:01.740 [2024-07-24 18:21:54.650147] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:01.740 [2024-07-24 18:21:54.659572] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:01.740 [2024-07-24 18:21:54.659950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:01.740 [2024-07-24 18:21:54.659965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:01.740 [2024-07-24 18:21:54.659972] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:01.740 [2024-07-24 18:21:54.660139] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:01.740 [2024-07-24 18:21:54.660306] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:01.740 [2024-07-24 18:21:54.660313] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:01.740 [2024-07-24 18:21:54.660319] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:01.740 [2024-07-24 18:21:54.662979] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:01.740 [2024-07-24 18:21:54.672468] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:01.740 [2024-07-24 18:21:54.672914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:01.740 [2024-07-24 18:21:54.672929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:01.740 [2024-07-24 18:21:54.672936] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:01.740 [2024-07-24 18:21:54.673103] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:01.740 [2024-07-24 18:21:54.673272] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:01.740 [2024-07-24 18:21:54.673280] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:01.740 [2024-07-24 18:21:54.673286] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:01.740 [2024-07-24 18:21:54.675902] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:01.740 [2024-07-24 18:21:54.685268] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:01.740 [2024-07-24 18:21:54.685605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:01.740 [2024-07-24 18:21:54.685648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:01.740 [2024-07-24 18:21:54.685671] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:01.740 [2024-07-24 18:21:54.686249] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:01.740 [2024-07-24 18:21:54.686687] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:01.740 [2024-07-24 18:21:54.686696] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:01.740 [2024-07-24 18:21:54.686702] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:01.740 [2024-07-24 18:21:54.689306] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:01.740 [2024-07-24 18:21:54.698032] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:01.740 [2024-07-24 18:21:54.698380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:01.740 [2024-07-24 18:21:54.698422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:01.740 [2024-07-24 18:21:54.698444] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:01.740 [2024-07-24 18:21:54.699036] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:01.740 [2024-07-24 18:21:54.699342] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:01.740 [2024-07-24 18:21:54.699355] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:01.740 [2024-07-24 18:21:54.699365] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:01.740 [2024-07-24 18:21:54.703823] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:01.740 [2024-07-24 18:21:54.711774] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:01.740 [2024-07-24 18:21:54.712086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:01.740 [2024-07-24 18:21:54.712103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:01.740 [2024-07-24 18:21:54.712110] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:01.740 [2024-07-24 18:21:54.712293] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:01.740 [2024-07-24 18:21:54.712475] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:01.740 [2024-07-24 18:21:54.712483] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:01.740 [2024-07-24 18:21:54.712496] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:01.740 [2024-07-24 18:21:54.715421] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:01.740 [2024-07-24 18:21:54.724762] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:01.740 [2024-07-24 18:21:54.725181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:01.740 [2024-07-24 18:21:54.725197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:01.741 [2024-07-24 18:21:54.725204] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:01.741 [2024-07-24 18:21:54.725377] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:01.741 [2024-07-24 18:21:54.725556] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:01.741 [2024-07-24 18:21:54.725565] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:01.741 [2024-07-24 18:21:54.725571] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:01.741 [2024-07-24 18:21:54.728292] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:01.741 [2024-07-24 18:21:54.737594] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:01.741 [2024-07-24 18:21:54.738508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:01.741 [2024-07-24 18:21:54.738531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:01.741 [2024-07-24 18:21:54.738540] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:01.741 [2024-07-24 18:21:54.738714] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:01.741 [2024-07-24 18:21:54.738881] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:01.741 [2024-07-24 18:21:54.738889] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:01.741 [2024-07-24 18:21:54.738896] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:01.741 [2024-07-24 18:21:54.741506] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:01.741 [2024-07-24 18:21:54.750547] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:01.741 [2024-07-24 18:21:54.750993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:01.741 [2024-07-24 18:21:54.751037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:01.741 [2024-07-24 18:21:54.751060] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:01.741 [2024-07-24 18:21:54.751655] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:01.741 [2024-07-24 18:21:54.751897] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:01.741 [2024-07-24 18:21:54.751906] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:01.741 [2024-07-24 18:21:54.751911] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:01.741 [2024-07-24 18:21:54.754599] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:01.741 [2024-07-24 18:21:54.763564] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:01.741 [2024-07-24 18:21:54.763937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:01.741 [2024-07-24 18:21:54.763956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:01.741 [2024-07-24 18:21:54.763963] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:01.741 [2024-07-24 18:21:54.764131] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:01.741 [2024-07-24 18:21:54.764302] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:01.741 [2024-07-24 18:21:54.764310] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:01.741 [2024-07-24 18:21:54.764316] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:01.741 [2024-07-24 18:21:54.766968] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:01.741 [2024-07-24 18:21:54.776406] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:01.741 [2024-07-24 18:21:54.776879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:01.741 [2024-07-24 18:21:54.776895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:01.741 [2024-07-24 18:21:54.776902] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:01.741 [2024-07-24 18:21:54.777068] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:01.741 [2024-07-24 18:21:54.777235] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:01.741 [2024-07-24 18:21:54.777243] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:01.741 [2024-07-24 18:21:54.777249] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:01.741 [2024-07-24 18:21:54.779921] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:01.741 [2024-07-24 18:21:54.789498] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:01.741 [2024-07-24 18:21:54.789865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:01.741 [2024-07-24 18:21:54.789881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:01.741 [2024-07-24 18:21:54.789888] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:01.741 [2024-07-24 18:21:54.790060] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:01.741 [2024-07-24 18:21:54.790235] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:01.741 [2024-07-24 18:21:54.790243] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:01.741 [2024-07-24 18:21:54.790249] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:01.741 [2024-07-24 18:21:54.793030] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:01.741 [2024-07-24 18:21:54.802939] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:01.741 [2024-07-24 18:21:54.803394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:01.741 [2024-07-24 18:21:54.803411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:01.741 [2024-07-24 18:21:54.803419] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:01.741 [2024-07-24 18:21:54.803618] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:01.741 [2024-07-24 18:21:54.803817] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:01.741 [2024-07-24 18:21:54.803826] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:01.741 [2024-07-24 18:21:54.803833] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:01.741 [2024-07-24 18:21:54.806987] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:01.741 [2024-07-24 18:21:54.816679] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:01.741 [2024-07-24 18:21:54.817065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:01.741 [2024-07-24 18:21:54.817083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:01.741 [2024-07-24 18:21:54.817091] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:01.741 [2024-07-24 18:21:54.817298] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:01.741 [2024-07-24 18:21:54.817513] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:01.741 [2024-07-24 18:21:54.817522] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:01.741 [2024-07-24 18:21:54.817530] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:01.741 [2024-07-24 18:21:54.820865] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:02.000 [2024-07-24 18:21:54.830263] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:02.000 [2024-07-24 18:21:54.830739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.000 [2024-07-24 18:21:54.830758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:02.000 [2024-07-24 18:21:54.830766] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:02.000 [2024-07-24 18:21:54.830972] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:02.000 [2024-07-24 18:21:54.831166] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:02.000 [2024-07-24 18:21:54.831175] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:02.000 [2024-07-24 18:21:54.831182] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:02.000 [2024-07-24 18:21:54.834368] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:02.000 [2024-07-24 18:21:54.843745] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:02.000 [2024-07-24 18:21:54.844235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.000 [2024-07-24 18:21:54.844253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:02.000 [2024-07-24 18:21:54.844261] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:02.001 [2024-07-24 18:21:54.844469] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:02.001 [2024-07-24 18:21:54.844685] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:02.001 [2024-07-24 18:21:54.844694] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:02.001 [2024-07-24 18:21:54.844702] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:02.001 [2024-07-24 18:21:54.847951] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:02.001 [2024-07-24 18:21:54.857256] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:02.001 [2024-07-24 18:21:54.857658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.001 [2024-07-24 18:21:54.857676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:02.001 [2024-07-24 18:21:54.857684] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:02.001 [2024-07-24 18:21:54.857894] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:02.001 [2024-07-24 18:21:54.858089] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:02.001 [2024-07-24 18:21:54.858097] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:02.001 [2024-07-24 18:21:54.858104] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:02.001 [2024-07-24 18:21:54.861218] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:02.001 [2024-07-24 18:21:54.870561] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:02.001 [2024-07-24 18:21:54.870867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.001 [2024-07-24 18:21:54.870883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:02.001 [2024-07-24 18:21:54.870891] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:02.001 [2024-07-24 18:21:54.871072] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:02.001 [2024-07-24 18:21:54.871254] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:02.001 [2024-07-24 18:21:54.871263] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:02.001 [2024-07-24 18:21:54.871269] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:02.001 [2024-07-24 18:21:54.874210] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:02.001 [2024-07-24 18:21:54.884023] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:02.001 [2024-07-24 18:21:54.884415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.001 [2024-07-24 18:21:54.884432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:02.001 [2024-07-24 18:21:54.884439] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:02.001 [2024-07-24 18:21:54.884639] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:02.001 [2024-07-24 18:21:54.884833] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:02.001 [2024-07-24 18:21:54.884842] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:02.001 [2024-07-24 18:21:54.884849] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:02.001 [2024-07-24 18:21:54.887965] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:02.001 [2024-07-24 18:21:54.897534] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:02.001 [2024-07-24 18:21:54.898000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.001 [2024-07-24 18:21:54.898017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:02.001 [2024-07-24 18:21:54.898029] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:02.001 [2024-07-24 18:21:54.898238] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:02.001 [2024-07-24 18:21:54.898446] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:02.001 [2024-07-24 18:21:54.898455] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:02.001 [2024-07-24 18:21:54.898463] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:02.001 [2024-07-24 18:21:54.901807] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:02.001 [2024-07-24 18:21:54.911031] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:02.001 [2024-07-24 18:21:54.911510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.001 [2024-07-24 18:21:54.911529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:02.001 [2024-07-24 18:21:54.911537] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:02.001 [2024-07-24 18:21:54.911732] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:02.001 [2024-07-24 18:21:54.911927] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:02.001 [2024-07-24 18:21:54.911936] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:02.001 [2024-07-24 18:21:54.911943] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:02.001 [2024-07-24 18:21:54.915054] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:02.001 [2024-07-24 18:21:54.924427] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:02.001 [2024-07-24 18:21:54.924903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.001 [2024-07-24 18:21:54.924921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:02.001 [2024-07-24 18:21:54.924929] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:02.001 [2024-07-24 18:21:54.925140] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:02.001 [2024-07-24 18:21:54.925348] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:02.001 [2024-07-24 18:21:54.925358] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:02.001 [2024-07-24 18:21:54.925365] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:02.001 [2024-07-24 18:21:54.928536] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:02.001 [2024-07-24 18:21:54.937825] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:02.001 [2024-07-24 18:21:54.938292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.001 [2024-07-24 18:21:54.938309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:02.001 [2024-07-24 18:21:54.938317] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:02.001 [2024-07-24 18:21:54.938516] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:02.001 [2024-07-24 18:21:54.938711] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:02.001 [2024-07-24 18:21:54.938723] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:02.001 [2024-07-24 18:21:54.938730] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:02.001 [2024-07-24 18:21:54.941843] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:02.001 [2024-07-24 18:21:54.951197] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:02.001 [2024-07-24 18:21:54.951636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.001 [2024-07-24 18:21:54.951654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:02.001 [2024-07-24 18:21:54.951662] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:02.001 [2024-07-24 18:21:54.951856] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:02.001 [2024-07-24 18:21:54.952051] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:02.001 [2024-07-24 18:21:54.952060] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:02.001 [2024-07-24 18:21:54.952067] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:02.001 [2024-07-24 18:21:54.955187] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:02.001 [2024-07-24 18:21:54.964391] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:02.001 [2024-07-24 18:21:54.964768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.001 [2024-07-24 18:21:54.964784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:02.001 [2024-07-24 18:21:54.964792] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:02.001 [2024-07-24 18:21:54.964973] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:02.001 [2024-07-24 18:21:54.965156] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:02.001 [2024-07-24 18:21:54.965165] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:02.001 [2024-07-24 18:21:54.965172] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:02.001 [2024-07-24 18:21:54.968144] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:02.001 [2024-07-24 18:21:54.977503] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:02.001 [2024-07-24 18:21:54.977800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.001 [2024-07-24 18:21:54.977816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:02.001 [2024-07-24 18:21:54.977847] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:02.001 [2024-07-24 18:21:54.978391] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:02.001 [2024-07-24 18:21:54.978569] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:02.001 [2024-07-24 18:21:54.978578] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:02.001 [2024-07-24 18:21:54.978584] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:02.001 [2024-07-24 18:21:54.981331] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:02.001 [2024-07-24 18:21:54.990417] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:02.001 [2024-07-24 18:21:54.990846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.001 [2024-07-24 18:21:54.990888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:02.001 [2024-07-24 18:21:54.990910] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:02.001 [2024-07-24 18:21:54.991352] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:02.001 [2024-07-24 18:21:54.991541] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:02.001 [2024-07-24 18:21:54.991550] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:02.001 [2024-07-24 18:21:54.991556] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:02.001 [2024-07-24 18:21:54.994198] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:02.001 [2024-07-24 18:21:55.003254] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:02.001 [2024-07-24 18:21:55.003560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.001 [2024-07-24 18:21:55.003576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:02.001 [2024-07-24 18:21:55.003583] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:02.001 [2024-07-24 18:21:55.003751] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:02.001 [2024-07-24 18:21:55.003917] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:02.001 [2024-07-24 18:21:55.003925] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:02.001 [2024-07-24 18:21:55.003930] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:02.001 [2024-07-24 18:21:55.006588] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:02.001 [2024-07-24 18:21:55.016122] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:02.001 [2024-07-24 18:21:55.016475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.001 [2024-07-24 18:21:55.016495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:02.001 [2024-07-24 18:21:55.016502] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:02.001 [2024-07-24 18:21:55.016669] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:02.001 [2024-07-24 18:21:55.016835] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:02.001 [2024-07-24 18:21:55.016843] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:02.001 [2024-07-24 18:21:55.016849] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:02.001 [2024-07-24 18:21:55.019455] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:02.001 [2024-07-24 18:21:55.028990] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:02.001 [2024-07-24 18:21:55.029289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.001 [2024-07-24 18:21:55.029304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:02.001 [2024-07-24 18:21:55.029311] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:02.001 [2024-07-24 18:21:55.029481] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:02.001 [2024-07-24 18:21:55.029653] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:02.001 [2024-07-24 18:21:55.029662] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:02.001 [2024-07-24 18:21:55.029668] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:02.001 [2024-07-24 18:21:55.032310] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:02.001 [2024-07-24 18:21:55.041809] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:02.001 [2024-07-24 18:21:55.042186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.001 [2024-07-24 18:21:55.042202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:02.001 [2024-07-24 18:21:55.042209] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:02.001 [2024-07-24 18:21:55.042375] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:02.001 [2024-07-24 18:21:55.042546] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:02.001 [2024-07-24 18:21:55.042554] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:02.001 [2024-07-24 18:21:55.042560] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:02.001 [2024-07-24 18:21:55.045227] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:02.001 [2024-07-24 18:21:55.054582] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:02.001 [2024-07-24 18:21:55.054947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.001 [2024-07-24 18:21:55.054963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:02.001 [2024-07-24 18:21:55.054969] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:02.001 [2024-07-24 18:21:55.055136] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:02.001 [2024-07-24 18:21:55.055302] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:02.001 [2024-07-24 18:21:55.055309] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:02.001 [2024-07-24 18:21:55.055315] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:02.001 [2024-07-24 18:21:55.057924] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:02.001 [2024-07-24 18:21:55.067444] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:02.001 [2024-07-24 18:21:55.067905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.001 [2024-07-24 18:21:55.067920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:02.001 [2024-07-24 18:21:55.067927] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:02.001 [2024-07-24 18:21:55.068093] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:02.001 [2024-07-24 18:21:55.068259] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:02.001 [2024-07-24 18:21:55.068267] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:02.002 [2024-07-24 18:21:55.068276] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:02.002 [2024-07-24 18:21:55.070930] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:02.002 [2024-07-24 18:21:55.080460] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:02.002 [2024-07-24 18:21:55.080876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.002 [2024-07-24 18:21:55.080892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:02.002 [2024-07-24 18:21:55.080899] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:02.002 [2024-07-24 18:21:55.081070] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:02.002 [2024-07-24 18:21:55.081241] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:02.002 [2024-07-24 18:21:55.081249] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:02.002 [2024-07-24 18:21:55.081255] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:02.261 [2024-07-24 18:21:55.084006] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:02.261 [2024-07-24 18:21:55.093315] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:02.261 [2024-07-24 18:21:55.093756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.261 [2024-07-24 18:21:55.093799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:02.261 [2024-07-24 18:21:55.093820] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:02.261 [2024-07-24 18:21:55.094349] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:02.261 [2024-07-24 18:21:55.094521] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:02.261 [2024-07-24 18:21:55.094529] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:02.261 [2024-07-24 18:21:55.094536] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:02.261 [2024-07-24 18:21:55.097127] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:02.261 [2024-07-24 18:21:55.106099] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:02.261 [2024-07-24 18:21:55.106504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.261 [2024-07-24 18:21:55.106520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:02.261 [2024-07-24 18:21:55.106526] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:02.261 [2024-07-24 18:21:55.106684] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:02.261 [2024-07-24 18:21:55.106841] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:02.261 [2024-07-24 18:21:55.106849] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:02.261 [2024-07-24 18:21:55.106854] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:02.261 [2024-07-24 18:21:55.109614] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:02.261 [2024-07-24 18:21:55.118904] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:02.261 [2024-07-24 18:21:55.119349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.261 [2024-07-24 18:21:55.119389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:02.261 [2024-07-24 18:21:55.119413] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:02.261 [2024-07-24 18:21:55.120007] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:02.261 [2024-07-24 18:21:55.120599] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:02.261 [2024-07-24 18:21:55.120625] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:02.261 [2024-07-24 18:21:55.120652] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:02.261 [2024-07-24 18:21:55.123253] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:02.261 [2024-07-24 18:21:55.131643] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:02.261 [2024-07-24 18:21:55.132046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.261 [2024-07-24 18:21:55.132061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:02.261 [2024-07-24 18:21:55.132067] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:02.261 [2024-07-24 18:21:55.132225] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:02.261 [2024-07-24 18:21:55.132382] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:02.261 [2024-07-24 18:21:55.132390] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:02.261 [2024-07-24 18:21:55.132395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:02.261 [2024-07-24 18:21:55.135091] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:02.261 [2024-07-24 18:21:55.144455] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:02.261 [2024-07-24 18:21:55.144875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.261 [2024-07-24 18:21:55.144891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:02.261 [2024-07-24 18:21:55.144898] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:02.261 [2024-07-24 18:21:55.145056] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:02.261 [2024-07-24 18:21:55.145214] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:02.261 [2024-07-24 18:21:55.145221] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:02.261 [2024-07-24 18:21:55.145227] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:02.261 [2024-07-24 18:21:55.147844] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:02.261 [2024-07-24 18:21:55.157268] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:02.261 [2024-07-24 18:21:55.157594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.261 [2024-07-24 18:21:55.157609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:02.261 [2024-07-24 18:21:55.157616] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:02.261 [2024-07-24 18:21:55.157774] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:02.261 [2024-07-24 18:21:55.157934] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:02.261 [2024-07-24 18:21:55.157942] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:02.261 [2024-07-24 18:21:55.157947] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:02.261 [2024-07-24 18:21:55.160564] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:02.261 [2024-07-24 18:21:55.170071] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:02.261 [2024-07-24 18:21:55.170512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.261 [2024-07-24 18:21:55.170556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:02.261 [2024-07-24 18:21:55.170578] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:02.261 [2024-07-24 18:21:55.171156] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:02.261 [2024-07-24 18:21:55.171746] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:02.261 [2024-07-24 18:21:55.171771] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:02.261 [2024-07-24 18:21:55.171792] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:02.261 [2024-07-24 18:21:55.174481] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:02.261 [2024-07-24 18:21:55.182941] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:02.261 [2024-07-24 18:21:55.183349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.261 [2024-07-24 18:21:55.183363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:02.261 [2024-07-24 18:21:55.183370] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:02.262 [2024-07-24 18:21:55.183550] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:02.262 [2024-07-24 18:21:55.183716] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:02.262 [2024-07-24 18:21:55.183724] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:02.262 [2024-07-24 18:21:55.183730] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:02.262 [2024-07-24 18:21:55.186333] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:02.262 [2024-07-24 18:21:55.195721] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:02.262 [2024-07-24 18:21:55.196133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.262 [2024-07-24 18:21:55.196176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:02.262 [2024-07-24 18:21:55.196198] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:02.262 [2024-07-24 18:21:55.196664] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:02.262 [2024-07-24 18:21:55.196831] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:02.262 [2024-07-24 18:21:55.196839] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:02.262 [2024-07-24 18:21:55.196845] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:02.262 [2024-07-24 18:21:55.199449] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:02.262 [2024-07-24 18:21:55.208478] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:02.262 [2024-07-24 18:21:55.208882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.262 [2024-07-24 18:21:55.208920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:02.262 [2024-07-24 18:21:55.208943] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:02.262 [2024-07-24 18:21:55.209536] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:02.262 [2024-07-24 18:21:55.209739] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:02.262 [2024-07-24 18:21:55.209747] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:02.262 [2024-07-24 18:21:55.209752] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:02.262 [2024-07-24 18:21:55.212352] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:02.262 [2024-07-24 18:21:55.221291] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:02.262 [2024-07-24 18:21:55.221696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.262 [2024-07-24 18:21:55.221713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:02.262 [2024-07-24 18:21:55.221720] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:02.262 [2024-07-24 18:21:55.221892] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:02.262 [2024-07-24 18:21:55.222063] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:02.262 [2024-07-24 18:21:55.222071] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:02.262 [2024-07-24 18:21:55.222077] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:02.262 [2024-07-24 18:21:55.224829] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:02.262 [2024-07-24 18:21:55.234328] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:02.262 [2024-07-24 18:21:55.234790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.262 [2024-07-24 18:21:55.234806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:02.262 [2024-07-24 18:21:55.234813] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:02.262 [2024-07-24 18:21:55.234984] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:02.262 [2024-07-24 18:21:55.235156] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:02.262 [2024-07-24 18:21:55.235164] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:02.262 [2024-07-24 18:21:55.235170] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:02.262 [2024-07-24 18:21:55.237969] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:02.262 [2024-07-24 18:21:55.247328] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:02.262 [2024-07-24 18:21:55.247774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.262 [2024-07-24 18:21:55.247791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:02.262 [2024-07-24 18:21:55.247800] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:02.262 [2024-07-24 18:21:55.247968] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:02.262 [2024-07-24 18:21:55.248134] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:02.262 [2024-07-24 18:21:55.248142] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:02.262 [2024-07-24 18:21:55.248148] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:02.262 [2024-07-24 18:21:55.250872] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:02.262 [2024-07-24 18:21:55.260118] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:02.262 [2024-07-24 18:21:55.260517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.262 [2024-07-24 18:21:55.260532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:02.262 [2024-07-24 18:21:55.260539] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:02.262 [2024-07-24 18:21:55.260696] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:02.262 [2024-07-24 18:21:55.260854] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:02.262 [2024-07-24 18:21:55.260861] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:02.262 [2024-07-24 18:21:55.260866] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:02.262 [2024-07-24 18:21:55.263482] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:02.262 [2024-07-24 18:21:55.272848] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:02.262 [2024-07-24 18:21:55.273292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.262 [2024-07-24 18:21:55.273334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:02.262 [2024-07-24 18:21:55.273355] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:02.262 [2024-07-24 18:21:55.273924] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:02.262 [2024-07-24 18:21:55.274092] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:02.262 [2024-07-24 18:21:55.274100] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:02.262 [2024-07-24 18:21:55.274105] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:02.262 [2024-07-24 18:21:55.276707] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:02.262 [2024-07-24 18:21:55.285625] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:02.262 [2024-07-24 18:21:55.286066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.262 [2024-07-24 18:21:55.286108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:02.262 [2024-07-24 18:21:55.286129] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:02.262 [2024-07-24 18:21:55.286719] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:02.262 [2024-07-24 18:21:55.286963] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:02.262 [2024-07-24 18:21:55.286974] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:02.262 [2024-07-24 18:21:55.286980] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:02.262 [2024-07-24 18:21:55.289584] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:02.262 [2024-07-24 18:21:55.298461] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:02.262 [2024-07-24 18:21:55.298859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.262 [2024-07-24 18:21:55.298874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:02.262 [2024-07-24 18:21:55.298880] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:02.262 [2024-07-24 18:21:55.299038] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:02.262 [2024-07-24 18:21:55.299196] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:02.262 [2024-07-24 18:21:55.299203] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:02.262 [2024-07-24 18:21:55.299208] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:02.262 [2024-07-24 18:21:55.301812] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:02.262 [2024-07-24 18:21:55.311279] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:02.262 [2024-07-24 18:21:55.311706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.262 [2024-07-24 18:21:55.311722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:02.262 [2024-07-24 18:21:55.311729] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:02.262 [2024-07-24 18:21:55.311895] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:02.262 [2024-07-24 18:21:55.312062] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:02.262 [2024-07-24 18:21:55.312069] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:02.262 [2024-07-24 18:21:55.312075] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:02.262 [2024-07-24 18:21:55.314701] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:02.262 [2024-07-24 18:21:55.324129] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:02.262 [2024-07-24 18:21:55.324556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.262 [2024-07-24 18:21:55.324573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:02.262 [2024-07-24 18:21:55.324579] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:02.262 [2024-07-24 18:21:55.324750] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:02.262 [2024-07-24 18:21:55.324908] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:02.262 [2024-07-24 18:21:55.324916] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:02.262 [2024-07-24 18:21:55.324921] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:02.262 [2024-07-24 18:21:55.327598] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:02.262 [2024-07-24 18:21:55.336965] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:02.262 [2024-07-24 18:21:55.337387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.262 [2024-07-24 18:21:55.337403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:02.262 [2024-07-24 18:21:55.337410] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:02.262 [2024-07-24 18:21:55.337583] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:02.262 [2024-07-24 18:21:55.337749] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:02.262 [2024-07-24 18:21:55.337757] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:02.262 [2024-07-24 18:21:55.337763] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:02.262 [2024-07-24 18:21:55.340513] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:02.522 [2024-07-24 18:21:55.349963] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:02.522 [2024-07-24 18:21:55.350395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.522 [2024-07-24 18:21:55.350411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:02.522 [2024-07-24 18:21:55.350417] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:02.522 [2024-07-24 18:21:55.350590] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:02.522 [2024-07-24 18:21:55.350757] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:02.522 [2024-07-24 18:21:55.350765] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:02.522 [2024-07-24 18:21:55.350770] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:02.522 [2024-07-24 18:21:55.353403] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:02.522 [2024-07-24 18:21:55.362786] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:02.522 [2024-07-24 18:21:55.363211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.522 [2024-07-24 18:21:55.363226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:02.523 [2024-07-24 18:21:55.363233] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:02.523 [2024-07-24 18:21:55.363399] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:02.523 [2024-07-24 18:21:55.363571] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:02.523 [2024-07-24 18:21:55.363579] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:02.523 [2024-07-24 18:21:55.363585] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:02.523 [2024-07-24 18:21:55.366186] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:02.523 [2024-07-24 18:21:55.375602] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:02.523 [2024-07-24 18:21:55.376023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.523 [2024-07-24 18:21:55.376038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:02.523 [2024-07-24 18:21:55.376050] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:02.523 [2024-07-24 18:21:55.376216] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:02.523 [2024-07-24 18:21:55.376387] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:02.523 [2024-07-24 18:21:55.376394] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:02.523 [2024-07-24 18:21:55.376400] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:02.523 [2024-07-24 18:21:55.379009] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:02.523 [2024-07-24 18:21:55.388427] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:02.523 [2024-07-24 18:21:55.388876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.523 [2024-07-24 18:21:55.388892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:02.523 [2024-07-24 18:21:55.388899] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:02.523 [2024-07-24 18:21:55.389065] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:02.523 [2024-07-24 18:21:55.389232] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:02.523 [2024-07-24 18:21:55.389239] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:02.523 [2024-07-24 18:21:55.389245] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:02.523 [2024-07-24 18:21:55.391859] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:02.523 [2024-07-24 18:21:55.401216] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:02.523 [2024-07-24 18:21:55.401620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.523 [2024-07-24 18:21:55.401635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:02.523 [2024-07-24 18:21:55.401642] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:02.523 [2024-07-24 18:21:55.401800] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:02.523 [2024-07-24 18:21:55.401957] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:02.523 [2024-07-24 18:21:55.401965] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:02.523 [2024-07-24 18:21:55.401970] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:02.523 [2024-07-24 18:21:55.404590] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:02.523 [2024-07-24 18:21:55.414013] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:02.523 [2024-07-24 18:21:55.414454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.523 [2024-07-24 18:21:55.414469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:02.523 [2024-07-24 18:21:55.414476] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:02.523 [2024-07-24 18:21:55.414649] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:02.523 [2024-07-24 18:21:55.414816] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:02.523 [2024-07-24 18:21:55.414826] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:02.523 [2024-07-24 18:21:55.414832] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:02.523 [2024-07-24 18:21:55.417434] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:02.523 [2024-07-24 18:21:55.426914] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:02.523 [2024-07-24 18:21:55.427307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.523 [2024-07-24 18:21:55.427323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:02.523 [2024-07-24 18:21:55.427330] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:02.523 [2024-07-24 18:21:55.427503] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:02.523 [2024-07-24 18:21:55.427670] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:02.523 [2024-07-24 18:21:55.427678] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:02.523 [2024-07-24 18:21:55.427684] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:02.523 [2024-07-24 18:21:55.430346] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:02.523 [2024-07-24 18:21:55.439669] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:02.523 [2024-07-24 18:21:55.440112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.523 [2024-07-24 18:21:55.440154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:02.523 [2024-07-24 18:21:55.440175] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:02.523 [2024-07-24 18:21:55.440687] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:02.523 [2024-07-24 18:21:55.440854] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:02.523 [2024-07-24 18:21:55.440862] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:02.523 [2024-07-24 18:21:55.440868] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:02.523 [2024-07-24 18:21:55.443467] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:02.523 [2024-07-24 18:21:55.452578] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:02.523 [2024-07-24 18:21:55.452945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.523 [2024-07-24 18:21:55.452990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:02.523 [2024-07-24 18:21:55.453012] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:02.523 [2024-07-24 18:21:55.453526] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:02.523 [2024-07-24 18:21:55.453714] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:02.523 [2024-07-24 18:21:55.453722] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:02.523 [2024-07-24 18:21:55.453728] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:02.523 [2024-07-24 18:21:55.456340] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:02.523 [2024-07-24 18:21:55.465426] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:02.523 [2024-07-24 18:21:55.465856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.523 [2024-07-24 18:21:55.465871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:02.523 [2024-07-24 18:21:55.465878] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:02.523 [2024-07-24 18:21:55.466044] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:02.523 [2024-07-24 18:21:55.466210] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:02.523 [2024-07-24 18:21:55.466218] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:02.523 [2024-07-24 18:21:55.466224] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:02.523 [2024-07-24 18:21:55.468907] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:02.523 [2024-07-24 18:21:55.478355] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:02.523 [2024-07-24 18:21:55.478784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.523 [2024-07-24 18:21:55.478800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:02.523 [2024-07-24 18:21:55.478807] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:02.523 [2024-07-24 18:21:55.478973] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:02.523 [2024-07-24 18:21:55.479140] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:02.523 [2024-07-24 18:21:55.479148] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:02.523 [2024-07-24 18:21:55.479153] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:02.523 [2024-07-24 18:21:55.481909] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:02.523 [2024-07-24 18:21:55.491250] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:02.523 [2024-07-24 18:21:55.491661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.523 [2024-07-24 18:21:55.491678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:02.523 [2024-07-24 18:21:55.491685] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:02.523 [2024-07-24 18:21:55.491852] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:02.524 [2024-07-24 18:21:55.492017] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:02.524 [2024-07-24 18:21:55.492025] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:02.524 [2024-07-24 18:21:55.492031] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:02.524 [2024-07-24 18:21:55.494731] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:02.524 [2024-07-24 18:21:55.504056] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:02.524 [2024-07-24 18:21:55.504480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.524 [2024-07-24 18:21:55.504501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:02.524 [2024-07-24 18:21:55.504508] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:02.524 [2024-07-24 18:21:55.504677] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:02.524 [2024-07-24 18:21:55.504862] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:02.524 [2024-07-24 18:21:55.504870] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:02.524 [2024-07-24 18:21:55.504876] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:02.524 [2024-07-24 18:21:55.507585] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:02.524 [2024-07-24 18:21:55.516862] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:02.524 [2024-07-24 18:21:55.517265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.524 [2024-07-24 18:21:55.517281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:02.524 [2024-07-24 18:21:55.517288] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:02.524 [2024-07-24 18:21:55.517454] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:02.524 [2024-07-24 18:21:55.517627] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:02.524 [2024-07-24 18:21:55.517635] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:02.524 [2024-07-24 18:21:55.517641] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:02.524 [2024-07-24 18:21:55.520243] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:02.524 [2024-07-24 18:21:55.529578] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:02.524 [2024-07-24 18:21:55.529922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.524 [2024-07-24 18:21:55.529938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:02.524 [2024-07-24 18:21:55.529944] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:02.524 [2024-07-24 18:21:55.530111] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:02.524 [2024-07-24 18:21:55.530277] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:02.524 [2024-07-24 18:21:55.530284] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:02.524 [2024-07-24 18:21:55.530291] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:02.524 [2024-07-24 18:21:55.532902] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:02.524 [2024-07-24 18:21:55.542409] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:02.524 [2024-07-24 18:21:55.542831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.524 [2024-07-24 18:21:55.542847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:02.524 [2024-07-24 18:21:55.542854] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:02.524 [2024-07-24 18:21:55.543020] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:02.524 [2024-07-24 18:21:55.543186] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:02.524 [2024-07-24 18:21:55.543194] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:02.524 [2024-07-24 18:21:55.543203] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:02.524 [2024-07-24 18:21:55.545821] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:02.524 [2024-07-24 18:21:55.555243] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:02.524 [2024-07-24 18:21:55.555669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.524 [2024-07-24 18:21:55.555685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:02.524 [2024-07-24 18:21:55.555691] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:02.524 [2024-07-24 18:21:55.555858] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:02.524 [2024-07-24 18:21:55.556025] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:02.524 [2024-07-24 18:21:55.556033] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:02.524 [2024-07-24 18:21:55.556038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:02.524 [2024-07-24 18:21:55.558713] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:02.524 [2024-07-24 18:21:55.567995] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:02.524 [2024-07-24 18:21:55.568422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.524 [2024-07-24 18:21:55.568438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:02.524 [2024-07-24 18:21:55.568445] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:02.524 [2024-07-24 18:21:55.568637] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:02.524 [2024-07-24 18:21:55.568809] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:02.524 [2024-07-24 18:21:55.568816] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:02.524 [2024-07-24 18:21:55.568823] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:02.524 [2024-07-24 18:21:55.571493] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:02.524 [2024-07-24 18:21:55.580722] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:02.524 [2024-07-24 18:21:55.581148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.524 [2024-07-24 18:21:55.581164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:02.524 [2024-07-24 18:21:55.581171] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:02.524 [2024-07-24 18:21:55.581336] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:02.524 [2024-07-24 18:21:55.581510] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:02.524 [2024-07-24 18:21:55.581518] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:02.524 [2024-07-24 18:21:55.581524] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:02.524 [2024-07-24 18:21:55.584125] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:02.524 [2024-07-24 18:21:55.593484] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:02.524 [2024-07-24 18:21:55.593935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.524 [2024-07-24 18:21:55.593984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:02.524 [2024-07-24 18:21:55.594006] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:02.524 [2024-07-24 18:21:55.594601] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:02.524 [2024-07-24 18:21:55.594818] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:02.524 [2024-07-24 18:21:55.594825] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:02.524 [2024-07-24 18:21:55.594831] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:02.524 [2024-07-24 18:21:55.597433] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:02.786 [2024-07-24 18:21:55.606515] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:02.786 [2024-07-24 18:21:55.606936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.786 [2024-07-24 18:21:55.606978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:02.786 [2024-07-24 18:21:55.606999] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:02.786 [2024-07-24 18:21:55.607593] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:02.786 [2024-07-24 18:21:55.608032] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:02.786 [2024-07-24 18:21:55.608040] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:02.786 [2024-07-24 18:21:55.608046] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:02.786 [2024-07-24 18:21:55.610797] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:02.786 [2024-07-24 18:21:55.619364] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:02.786 [2024-07-24 18:21:55.619766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.787 [2024-07-24 18:21:55.619782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:02.787 [2024-07-24 18:21:55.619788] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:02.787 [2024-07-24 18:21:55.619955] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:02.787 [2024-07-24 18:21:55.620122] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:02.787 [2024-07-24 18:21:55.620130] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:02.787 [2024-07-24 18:21:55.620136] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:02.787 [2024-07-24 18:21:55.622771] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:02.787 [2024-07-24 18:21:55.632100] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:02.787 [2024-07-24 18:21:55.632519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.787 [2024-07-24 18:21:55.632535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:02.787 [2024-07-24 18:21:55.632542] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:02.787 [2024-07-24 18:21:55.632709] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:02.787 [2024-07-24 18:21:55.632878] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:02.787 [2024-07-24 18:21:55.632886] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:02.787 [2024-07-24 18:21:55.632892] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:02.787 [2024-07-24 18:21:55.635575] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:02.787 [2024-07-24 18:21:55.644888] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:02.787 [2024-07-24 18:21:55.645293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.787 [2024-07-24 18:21:55.645308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:02.787 [2024-07-24 18:21:55.645315] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:02.787 [2024-07-24 18:21:55.645481] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:02.787 [2024-07-24 18:21:55.645680] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:02.787 [2024-07-24 18:21:55.645688] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:02.787 [2024-07-24 18:21:55.645694] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:02.787 [2024-07-24 18:21:55.648295] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:02.787 [2024-07-24 18:21:55.657682] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:02.787 [2024-07-24 18:21:55.658108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.787 [2024-07-24 18:21:55.658124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:02.787 [2024-07-24 18:21:55.658130] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:02.787 [2024-07-24 18:21:55.658297] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:02.787 [2024-07-24 18:21:55.658463] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:02.787 [2024-07-24 18:21:55.658470] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:02.787 [2024-07-24 18:21:55.658476] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:02.787 [2024-07-24 18:21:55.661131] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:02.787 [2024-07-24 18:21:55.670487] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:02.787 [2024-07-24 18:21:55.670924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.787 [2024-07-24 18:21:55.670940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:02.787 [2024-07-24 18:21:55.670947] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:02.787 [2024-07-24 18:21:55.671114] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:02.787 [2024-07-24 18:21:55.671281] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:02.787 [2024-07-24 18:21:55.671289] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:02.787 [2024-07-24 18:21:55.671295] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:02.787 [2024-07-24 18:21:55.673913] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:02.787 [2024-07-24 18:21:55.683308] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:02.787 [2024-07-24 18:21:55.683733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.787 [2024-07-24 18:21:55.683749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:02.787 [2024-07-24 18:21:55.683755] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:02.787 [2024-07-24 18:21:55.683922] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:02.787 [2024-07-24 18:21:55.684088] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:02.787 [2024-07-24 18:21:55.684096] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:02.787 [2024-07-24 18:21:55.684102] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:02.787 [2024-07-24 18:21:55.686726] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:02.787 [2024-07-24 18:21:55.696145] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:02.787 [2024-07-24 18:21:55.696567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.787 [2024-07-24 18:21:55.696582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:02.788 [2024-07-24 18:21:55.696589] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:02.788 [2024-07-24 18:21:55.696756] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:02.788 [2024-07-24 18:21:55.696922] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:02.788 [2024-07-24 18:21:55.696930] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:02.788 [2024-07-24 18:21:55.696935] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:02.788 [2024-07-24 18:21:55.699563] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:02.788 [2024-07-24 18:21:55.709040] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:02.788 [2024-07-24 18:21:55.709450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.788 [2024-07-24 18:21:55.709466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:02.788 [2024-07-24 18:21:55.709472] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:02.788 [2024-07-24 18:21:55.709645] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:02.788 [2024-07-24 18:21:55.709812] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:02.788 [2024-07-24 18:21:55.709819] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:02.788 [2024-07-24 18:21:55.709825] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:02.788 [2024-07-24 18:21:55.712427] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:02.788 [2024-07-24 18:21:55.721804] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:02.788 [2024-07-24 18:21:55.722155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.788 [2024-07-24 18:21:55.722171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:02.788 [2024-07-24 18:21:55.722181] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:02.788 [2024-07-24 18:21:55.722348] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:02.788 [2024-07-24 18:21:55.722522] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:02.788 [2024-07-24 18:21:55.722530] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:02.788 [2024-07-24 18:21:55.722536] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:02.788 [2024-07-24 18:21:55.725137] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:02.788 [2024-07-24 18:21:55.734532] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.788 [2024-07-24 18:21:55.734949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.788 [2024-07-24 18:21:55.734965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:02.788 [2024-07-24 18:21:55.734972] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:02.788 [2024-07-24 18:21:55.735151] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:02.788 [2024-07-24 18:21:55.735318] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.788 [2024-07-24 18:21:55.735326] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.788 [2024-07-24 18:21:55.735332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.788 [2024-07-24 18:21:55.738090] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:02.788 [2024-07-24 18:21:55.747433] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.788 [2024-07-24 18:21:55.747846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.788 [2024-07-24 18:21:55.747862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:02.788 [2024-07-24 18:21:55.747869] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:02.788 [2024-07-24 18:21:55.748040] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:02.788 [2024-07-24 18:21:55.748212] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.788 [2024-07-24 18:21:55.748220] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.788 [2024-07-24 18:21:55.748226] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.788 [2024-07-24 18:21:55.750919] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:02.788 [2024-07-24 18:21:55.760380] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.788 [2024-07-24 18:21:55.760809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.789 [2024-07-24 18:21:55.760825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:02.789 [2024-07-24 18:21:55.760832] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:02.789 [2024-07-24 18:21:55.760998] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:02.789 [2024-07-24 18:21:55.761164] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.789 [2024-07-24 18:21:55.761175] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.789 [2024-07-24 18:21:55.761181] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.789 [2024-07-24 18:21:55.763902] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:02.789 [2024-07-24 18:21:55.773215] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.789 [2024-07-24 18:21:55.773666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.789 [2024-07-24 18:21:55.773710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:02.789 [2024-07-24 18:21:55.773732] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:02.789 [2024-07-24 18:21:55.774229] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:02.789 [2024-07-24 18:21:55.774387] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.789 [2024-07-24 18:21:55.774394] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.789 [2024-07-24 18:21:55.774400] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.789 [2024-07-24 18:21:55.777099] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:02.789 [2024-07-24 18:21:55.786082] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.789 [2024-07-24 18:21:55.786499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.789 [2024-07-24 18:21:55.786515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:02.789 [2024-07-24 18:21:55.786522] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:02.789 [2024-07-24 18:21:55.786688] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:02.789 [2024-07-24 18:21:55.786855] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.789 [2024-07-24 18:21:55.786863] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.789 [2024-07-24 18:21:55.786869] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.789 [2024-07-24 18:21:55.789477] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:02.789 [2024-07-24 18:21:55.799005] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.789 [2024-07-24 18:21:55.799458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.789 [2024-07-24 18:21:55.799512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:02.789 [2024-07-24 18:21:55.799536] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:02.789 [2024-07-24 18:21:55.800040] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:02.789 [2024-07-24 18:21:55.800207] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.789 [2024-07-24 18:21:55.800215] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.789 [2024-07-24 18:21:55.800221] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.789 [2024-07-24 18:21:55.804325] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:02.789 [2024-07-24 18:21:55.812885] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.789 [2024-07-24 18:21:55.813336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.789 [2024-07-24 18:21:55.813352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:02.789 [2024-07-24 18:21:55.813360] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:02.789 [2024-07-24 18:21:55.813548] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:02.789 [2024-07-24 18:21:55.813731] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.789 [2024-07-24 18:21:55.813740] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.789 [2024-07-24 18:21:55.813747] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.789 [2024-07-24 18:21:55.816664] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:02.789 [2024-07-24 18:21:55.825721] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.789 [2024-07-24 18:21:55.826148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.789 [2024-07-24 18:21:55.826164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:02.789 [2024-07-24 18:21:55.826170] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:02.789 [2024-07-24 18:21:55.826336] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:02.789 [2024-07-24 18:21:55.826522] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.789 [2024-07-24 18:21:55.826531] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.789 [2024-07-24 18:21:55.826537] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.789 [2024-07-24 18:21:55.829152] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:02.790 [2024-07-24 18:21:55.838657] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.790 [2024-07-24 18:21:55.839078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.790 [2024-07-24 18:21:55.839094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:02.790 [2024-07-24 18:21:55.839100] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:02.790 [2024-07-24 18:21:55.839267] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:02.790 [2024-07-24 18:21:55.839433] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.790 [2024-07-24 18:21:55.839441] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.790 [2024-07-24 18:21:55.839447] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.790 [2024-07-24 18:21:55.842178] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:02.790 [2024-07-24 18:21:55.851504] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.790 [2024-07-24 18:21:55.851980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.790 [2024-07-24 18:21:55.852023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:02.790 [2024-07-24 18:21:55.852045] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:02.790 [2024-07-24 18:21:55.852549] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:02.790 [2024-07-24 18:21:55.852717] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.790 [2024-07-24 18:21:55.852725] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.790 [2024-07-24 18:21:55.852730] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.790 [2024-07-24 18:21:55.855332] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:02.790 [2024-07-24 18:21:55.864510] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.790 [2024-07-24 18:21:55.864946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.790 [2024-07-24 18:21:55.864962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:02.790 [2024-07-24 18:21:55.864969] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:02.790 [2024-07-24 18:21:55.865140] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:02.790 [2024-07-24 18:21:55.865311] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.790 [2024-07-24 18:21:55.865319] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.790 [2024-07-24 18:21:55.865326] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.052 [2024-07-24 18:21:55.868107] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:03.052 [2024-07-24 18:21:55.877457] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.052 [2024-07-24 18:21:55.877824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.052 [2024-07-24 18:21:55.877840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:03.052 [2024-07-24 18:21:55.877846] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:03.052 [2024-07-24 18:21:55.878018] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:03.052 [2024-07-24 18:21:55.878189] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.052 [2024-07-24 18:21:55.878197] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.052 [2024-07-24 18:21:55.878203] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.052 [2024-07-24 18:21:55.880918] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:03.052 [2024-07-24 18:21:55.890321] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.052 [2024-07-24 18:21:55.890640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.052 [2024-07-24 18:21:55.890657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:03.052 [2024-07-24 18:21:55.890663] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:03.052 [2024-07-24 18:21:55.890829] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:03.052 [2024-07-24 18:21:55.890996] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.052 [2024-07-24 18:21:55.891005] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.052 [2024-07-24 18:21:55.891014] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.052 [2024-07-24 18:21:55.893689] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:03.052 [2024-07-24 18:21:55.903357] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.052 [2024-07-24 18:21:55.903804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.052 [2024-07-24 18:21:55.903848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:03.052 [2024-07-24 18:21:55.903871] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:03.053 [2024-07-24 18:21:55.904376] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:03.053 [2024-07-24 18:21:55.904554] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.053 [2024-07-24 18:21:55.904563] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.053 [2024-07-24 18:21:55.904569] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.053 [2024-07-24 18:21:55.907266] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:03.053 [2024-07-24 18:21:55.916173] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.053 [2024-07-24 18:21:55.916540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.053 [2024-07-24 18:21:55.916557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:03.053 [2024-07-24 18:21:55.916564] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:03.053 [2024-07-24 18:21:55.916732] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:03.053 [2024-07-24 18:21:55.916903] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.053 [2024-07-24 18:21:55.916911] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.053 [2024-07-24 18:21:55.916917] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.053 [2024-07-24 18:21:55.919528] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:03.053 [2024-07-24 18:21:55.928945] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.053 [2024-07-24 18:21:55.929371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.053 [2024-07-24 18:21:55.929387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:03.053 [2024-07-24 18:21:55.929393] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:03.053 [2024-07-24 18:21:55.929566] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:03.053 [2024-07-24 18:21:55.929733] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.053 [2024-07-24 18:21:55.929741] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.053 [2024-07-24 18:21:55.929747] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.053 [2024-07-24 18:21:55.932349] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:03.053 [2024-07-24 18:21:55.941763] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.053 [2024-07-24 18:21:55.942191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.053 [2024-07-24 18:21:55.942206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:03.053 [2024-07-24 18:21:55.942213] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:03.053 [2024-07-24 18:21:55.942380] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:03.053 [2024-07-24 18:21:55.942553] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.053 [2024-07-24 18:21:55.942561] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.053 [2024-07-24 18:21:55.942567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.053 [2024-07-24 18:21:55.945168] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:03.053 [2024-07-24 18:21:55.954560] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.053 [2024-07-24 18:21:55.954986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.053 [2024-07-24 18:21:55.955001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:03.053 [2024-07-24 18:21:55.955008] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:03.053 [2024-07-24 18:21:55.955175] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:03.053 [2024-07-24 18:21:55.955341] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.053 [2024-07-24 18:21:55.955349] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.053 [2024-07-24 18:21:55.955355] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.053 [2024-07-24 18:21:55.957968] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:03.053 [2024-07-24 18:21:55.967382] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.053 [2024-07-24 18:21:55.967812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.053 [2024-07-24 18:21:55.967828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:03.053 [2024-07-24 18:21:55.967835] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:03.053 [2024-07-24 18:21:55.968002] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:03.053 [2024-07-24 18:21:55.968168] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.053 [2024-07-24 18:21:55.968176] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.053 [2024-07-24 18:21:55.968182] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.053 [2024-07-24 18:21:55.970860] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:03.053 [2024-07-24 18:21:55.980223] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.053 [2024-07-24 18:21:55.980694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.053 [2024-07-24 18:21:55.980737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:03.053 [2024-07-24 18:21:55.980759] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:03.053 [2024-07-24 18:21:55.981345] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:03.053 [2024-07-24 18:21:55.981802] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.053 [2024-07-24 18:21:55.981810] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.053 [2024-07-24 18:21:55.981816] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.053 [2024-07-24 18:21:55.984420] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:03.053 [2024-07-24 18:21:55.993054] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.053 [2024-07-24 18:21:55.993501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.053 [2024-07-24 18:21:55.993517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:03.053 [2024-07-24 18:21:55.993523] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:03.053 [2024-07-24 18:21:55.993710] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:03.053 [2024-07-24 18:21:55.993882] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.053 [2024-07-24 18:21:55.993889] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.053 [2024-07-24 18:21:55.993895] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.053 [2024-07-24 18:21:55.996646] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:03.053 [2024-07-24 18:21:56.006017] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.053 [2024-07-24 18:21:56.006444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.053 [2024-07-24 18:21:56.006486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:03.053 [2024-07-24 18:21:56.006523] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:03.053 [2024-07-24 18:21:56.006961] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:03.053 [2024-07-24 18:21:56.007128] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.053 [2024-07-24 18:21:56.007135] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.053 [2024-07-24 18:21:56.007141] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.053 [2024-07-24 18:21:56.009808] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:03.053 [2024-07-24 18:21:56.018971] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.053 [2024-07-24 18:21:56.019393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.053 [2024-07-24 18:21:56.019409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:03.053 [2024-07-24 18:21:56.019416] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:03.053 [2024-07-24 18:21:56.019605] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:03.053 [2024-07-24 18:21:56.019784] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.054 [2024-07-24 18:21:56.019792] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.054 [2024-07-24 18:21:56.019803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.054 [2024-07-24 18:21:56.022450] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:03.054 [2024-07-24 18:21:56.031709] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.054 [2024-07-24 18:21:56.032127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.054 [2024-07-24 18:21:56.032143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:03.054 [2024-07-24 18:21:56.032150] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:03.054 [2024-07-24 18:21:56.032338] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:03.054 [2024-07-24 18:21:56.032517] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.054 [2024-07-24 18:21:56.032525] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.054 [2024-07-24 18:21:56.032532] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.054 [2024-07-24 18:21:56.035219] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:03.054 [2024-07-24 18:21:56.044583] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.054 [2024-07-24 18:21:56.045030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.054 [2024-07-24 18:21:56.045072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:03.054 [2024-07-24 18:21:56.045094] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:03.054 [2024-07-24 18:21:56.045684] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:03.054 [2024-07-24 18:21:56.046149] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.054 [2024-07-24 18:21:56.046156] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.054 [2024-07-24 18:21:56.046162] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.054 [2024-07-24 18:21:56.048863] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:03.054 [2024-07-24 18:21:56.057506] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.054 [2024-07-24 18:21:56.057885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.054 [2024-07-24 18:21:56.057901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:03.054 [2024-07-24 18:21:56.057907] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:03.054 [2024-07-24 18:21:56.058074] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:03.054 [2024-07-24 18:21:56.058242] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.054 [2024-07-24 18:21:56.058250] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.054 [2024-07-24 18:21:56.058256] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.054 [2024-07-24 18:21:56.060920] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:03.054 [2024-07-24 18:21:56.070346] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.054 [2024-07-24 18:21:56.070755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.054 [2024-07-24 18:21:56.070807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:03.054 [2024-07-24 18:21:56.070829] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:03.054 [2024-07-24 18:21:56.071406] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:03.054 [2024-07-24 18:21:56.071999] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.054 [2024-07-24 18:21:56.072008] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.054 [2024-07-24 18:21:56.072014] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.054 [2024-07-24 18:21:56.074705] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:03.054 [2024-07-24 18:21:56.083259] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.054 [2024-07-24 18:21:56.083631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.054 [2024-07-24 18:21:56.083648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:03.054 [2024-07-24 18:21:56.083655] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:03.054 [2024-07-24 18:21:56.083822] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:03.054 [2024-07-24 18:21:56.083989] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.054 [2024-07-24 18:21:56.083996] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.054 [2024-07-24 18:21:56.084002] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.054 [2024-07-24 18:21:56.086687] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:03.054 [2024-07-24 18:21:56.096297] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.054 [2024-07-24 18:21:56.096699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.054 [2024-07-24 18:21:56.096715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:03.054 [2024-07-24 18:21:56.096722] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:03.054 [2024-07-24 18:21:56.096918] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:03.054 [2024-07-24 18:21:56.097101] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.054 [2024-07-24 18:21:56.097110] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.054 [2024-07-24 18:21:56.097116] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.054 [2024-07-24 18:21:56.099960] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:03.054 [2024-07-24 18:21:56.109238] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.054 [2024-07-24 18:21:56.109692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.054 [2024-07-24 18:21:56.109735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:03.054 [2024-07-24 18:21:56.109756] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:03.054 [2024-07-24 18:21:56.110295] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:03.054 [2024-07-24 18:21:56.110457] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.054 [2024-07-24 18:21:56.110464] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.054 [2024-07-24 18:21:56.110470] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.054 [2024-07-24 18:21:56.113095] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:03.054 [2024-07-24 18:21:56.122093] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.054 [2024-07-24 18:21:56.122445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.054 [2024-07-24 18:21:56.122461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:03.054 [2024-07-24 18:21:56.122468] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:03.054 [2024-07-24 18:21:56.122640] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:03.054 [2024-07-24 18:21:56.122807] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.054 [2024-07-24 18:21:56.122815] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.054 [2024-07-24 18:21:56.122821] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.054 [2024-07-24 18:21:56.125484] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:03.313 [2024-07-24 18:21:56.135096] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.313 [2024-07-24 18:21:56.135474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.313 [2024-07-24 18:21:56.135528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:03.313 [2024-07-24 18:21:56.135551] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:03.313 [2024-07-24 18:21:56.136067] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:03.313 [2024-07-24 18:21:56.136235] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.313 [2024-07-24 18:21:56.136242] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.313 [2024-07-24 18:21:56.136249] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.313 [2024-07-24 18:21:56.138932] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:03.313 [2024-07-24 18:21:56.148046] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.313 [2024-07-24 18:21:56.148351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.313 [2024-07-24 18:21:56.148367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:03.313 [2024-07-24 18:21:56.148373] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:03.313 [2024-07-24 18:21:56.148543] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:03.313 [2024-07-24 18:21:56.148710] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.313 [2024-07-24 18:21:56.148719] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.313 [2024-07-24 18:21:56.148725] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.313 [2024-07-24 18:21:56.151373] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:03.314 [2024-07-24 18:21:56.160892] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.314 [2024-07-24 18:21:56.161170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.314 [2024-07-24 18:21:56.161186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:03.314 [2024-07-24 18:21:56.161193] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:03.314 [2024-07-24 18:21:56.161359] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:03.314 [2024-07-24 18:21:56.161535] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.314 [2024-07-24 18:21:56.161544] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.314 [2024-07-24 18:21:56.161550] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.314 [2024-07-24 18:21:56.164155] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:03.314 [2024-07-24 18:21:56.173759] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.314 [2024-07-24 18:21:56.174189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.314 [2024-07-24 18:21:56.174205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:03.314 [2024-07-24 18:21:56.174212] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:03.314 [2024-07-24 18:21:56.174379] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:03.314 [2024-07-24 18:21:56.174550] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.314 [2024-07-24 18:21:56.174558] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.314 [2024-07-24 18:21:56.174564] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.314 [2024-07-24 18:21:56.177213] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:03.314 [2024-07-24 18:21:56.186724] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.314 [2024-07-24 18:21:56.187128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.314 [2024-07-24 18:21:56.187144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:03.314 [2024-07-24 18:21:56.187151] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:03.314 [2024-07-24 18:21:56.187317] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:03.314 [2024-07-24 18:21:56.187483] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.314 [2024-07-24 18:21:56.187496] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.314 [2024-07-24 18:21:56.187502] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.314 [2024-07-24 18:21:56.190166] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:03.314 [2024-07-24 18:21:56.199588] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.314 [2024-07-24 18:21:56.200020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.314 [2024-07-24 18:21:56.200035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:03.314 [2024-07-24 18:21:56.200045] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:03.314 [2024-07-24 18:21:56.200211] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:03.314 [2024-07-24 18:21:56.200378] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.314 [2024-07-24 18:21:56.200386] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.314 [2024-07-24 18:21:56.200392] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.314 [2024-07-24 18:21:56.203048] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:03.314 [2024-07-24 18:21:56.212479] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.314 [2024-07-24 18:21:56.212911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.314 [2024-07-24 18:21:56.212954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:03.314 [2024-07-24 18:21:56.212976] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:03.314 [2024-07-24 18:21:56.213438] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:03.314 [2024-07-24 18:21:56.213610] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.314 [2024-07-24 18:21:56.213618] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.314 [2024-07-24 18:21:56.213624] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.314 [2024-07-24 18:21:56.216230] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:03.314 [2024-07-24 18:21:56.225271] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.314 [2024-07-24 18:21:56.225654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.314 [2024-07-24 18:21:56.225670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:03.314 [2024-07-24 18:21:56.225677] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:03.314 [2024-07-24 18:21:56.225843] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:03.314 [2024-07-24 18:21:56.226009] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.314 [2024-07-24 18:21:56.226017] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.314 [2024-07-24 18:21:56.226023] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.314 [2024-07-24 18:21:56.228703] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:03.314 [2024-07-24 18:21:56.238131] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.314 [2024-07-24 18:21:56.238419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.314 [2024-07-24 18:21:56.238434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:03.314 [2024-07-24 18:21:56.238440] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:03.314 [2024-07-24 18:21:56.238611] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:03.314 [2024-07-24 18:21:56.238778] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.314 [2024-07-24 18:21:56.238790] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.314 [2024-07-24 18:21:56.238796] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.314 [2024-07-24 18:21:56.241460] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:03.314 [2024-07-24 18:21:56.251040] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:03.314 [2024-07-24 18:21:56.251445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.314 [2024-07-24 18:21:56.251460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:03.314 [2024-07-24 18:21:56.251467] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:03.314 [2024-07-24 18:21:56.251640] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:03.314 [2024-07-24 18:21:56.251807] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:03.314 [2024-07-24 18:21:56.251814] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:03.314 [2024-07-24 18:21:56.251820] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:03.314 [2024-07-24 18:21:56.254566] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:03.314 [2024-07-24 18:21:56.264232] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:03.314 [2024-07-24 18:21:56.264542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.314 [2024-07-24 18:21:56.264558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:03.314 [2024-07-24 18:21:56.264565] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:03.314 [2024-07-24 18:21:56.264732] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:03.314 [2024-07-24 18:21:56.264898] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:03.314 [2024-07-24 18:21:56.264906] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:03.314 [2024-07-24 18:21:56.264912] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:03.314 [2024-07-24 18:21:56.267632] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:03.314 [2024-07-24 18:21:56.277224] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:03.314 [2024-07-24 18:21:56.277612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.314 [2024-07-24 18:21:56.277639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:03.314 [2024-07-24 18:21:56.277646] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:03.314 [2024-07-24 18:21:56.277812] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:03.314 [2024-07-24 18:21:56.277978] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:03.314 [2024-07-24 18:21:56.277986] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:03.314 [2024-07-24 18:21:56.277992] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:03.314 [2024-07-24 18:21:56.280660] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:03.314 [2024-07-24 18:21:56.290100] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:03.314 [2024-07-24 18:21:56.290526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.314 [2024-07-24 18:21:56.290541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:03.314 [2024-07-24 18:21:56.290548] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:03.314 [2024-07-24 18:21:56.290705] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:03.314 [2024-07-24 18:21:56.290863] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:03.314 [2024-07-24 18:21:56.290870] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:03.314 [2024-07-24 18:21:56.290875] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:03.314 [2024-07-24 18:21:56.293535] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:03.314 [2024-07-24 18:21:56.302982] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:03.314 [2024-07-24 18:21:56.303356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.314 [2024-07-24 18:21:56.303398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:03.314 [2024-07-24 18:21:56.303420] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:03.314 [2024-07-24 18:21:56.304013] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:03.314 [2024-07-24 18:21:56.304538] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:03.314 [2024-07-24 18:21:56.304546] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:03.314 [2024-07-24 18:21:56.304552] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:03.314 [2024-07-24 18:21:56.307241] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:03.314 [2024-07-24 18:21:56.315906] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:03.314 [2024-07-24 18:21:56.316331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.314 [2024-07-24 18:21:56.316347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:03.314 [2024-07-24 18:21:56.316353] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:03.314 [2024-07-24 18:21:56.316524] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:03.314 [2024-07-24 18:21:56.316691] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:03.314 [2024-07-24 18:21:56.316700] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:03.314 [2024-07-24 18:21:56.316705] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:03.314 [2024-07-24 18:21:56.319368] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:03.314 [2024-07-24 18:21:56.328856] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:03.314 [2024-07-24 18:21:56.329241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.314 [2024-07-24 18:21:56.329257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:03.314 [2024-07-24 18:21:56.329263] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:03.314 [2024-07-24 18:21:56.329433] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:03.314 [2024-07-24 18:21:56.329604] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:03.314 [2024-07-24 18:21:56.329613] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:03.314 [2024-07-24 18:21:56.329619] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:03.314 [2024-07-24 18:21:56.332224] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:03.314 [2024-07-24 18:21:56.341681] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:03.314 [2024-07-24 18:21:56.342028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.314 [2024-07-24 18:21:56.342043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:03.314 [2024-07-24 18:21:56.342049] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:03.314 [2024-07-24 18:21:56.342215] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:03.314 [2024-07-24 18:21:56.342382] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:03.314 [2024-07-24 18:21:56.342390] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:03.314 [2024-07-24 18:21:56.342396] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:03.314 [2024-07-24 18:21:56.345008] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:03.314 [2024-07-24 18:21:56.354633] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:03.314 [2024-07-24 18:21:56.355010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.314 [2024-07-24 18:21:56.355026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:03.314 [2024-07-24 18:21:56.355032] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:03.314 [2024-07-24 18:21:56.355199] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:03.314 [2024-07-24 18:21:56.355366] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:03.314 [2024-07-24 18:21:56.355373] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:03.314 [2024-07-24 18:21:56.355379] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:03.314 [2024-07-24 18:21:56.358045] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:03.314 [2024-07-24 18:21:56.367499] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:03.314 [2024-07-24 18:21:56.367916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.314 [2024-07-24 18:21:56.367931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:03.314 [2024-07-24 18:21:56.367938] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:03.314 [2024-07-24 18:21:56.368104] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:03.314 [2024-07-24 18:21:56.368270] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:03.314 [2024-07-24 18:21:56.368278] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:03.314 [2024-07-24 18:21:56.368287] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:03.314 [2024-07-24 18:21:56.370984] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:03.314 [2024-07-24 18:21:56.380344] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:03.314 [2024-07-24 18:21:56.380794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.314 [2024-07-24 18:21:56.380810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:03.314 [2024-07-24 18:21:56.380817] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:03.314 [2024-07-24 18:21:56.380983] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:03.314 [2024-07-24 18:21:56.381149] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:03.314 [2024-07-24 18:21:56.381157] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:03.314 [2024-07-24 18:21:56.381163] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:03.314 [2024-07-24 18:21:56.383854] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:03.314 [2024-07-24 18:21:56.393390] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:03.314 [2024-07-24 18:21:56.393734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.314 [2024-07-24 18:21:56.393750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:03.314 [2024-07-24 18:21:56.393757] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:03.314 [2024-07-24 18:21:56.393928] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:03.314 [2024-07-24 18:21:56.394101] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:03.314 [2024-07-24 18:21:56.394109] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:03.314 [2024-07-24 18:21:56.394115] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:03.577 [2024-07-24 18:21:56.396865] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:03.577 [2024-07-24 18:21:56.406282] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:03.577 [2024-07-24 18:21:56.406685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.577 [2024-07-24 18:21:56.406702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:03.577 [2024-07-24 18:21:56.406709] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:03.577 [2024-07-24 18:21:56.406881] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:03.577 [2024-07-24 18:21:56.407052] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:03.577 [2024-07-24 18:21:56.407060] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:03.577 [2024-07-24 18:21:56.407067] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:03.577 [2024-07-24 18:21:56.409698] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:03.577 [2024-07-24 18:21:56.419067] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:03.577 [2024-07-24 18:21:56.419463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.577 [2024-07-24 18:21:56.419478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:03.577 [2024-07-24 18:21:56.419485] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:03.577 [2024-07-24 18:21:56.419657] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:03.577 [2024-07-24 18:21:56.419844] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:03.577 [2024-07-24 18:21:56.419852] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:03.577 [2024-07-24 18:21:56.419858] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:03.577 [2024-07-24 18:21:56.422567] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
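Each failed reset in the run above is the same five-step cycle: bdev_nvme disconnects the controller, the TCP transport tries to reopen its socket to 10.0.0.2:4420, connect() is refused with errno 111 (ECONNREFUSED) because nothing is listening while the target is down, the qpair flush then fails on the dead descriptor, and spdk_nvme_ctrlr_reconnect_poll_async gives up until the next retry. Below is a minimal sketch of just the socket step, in plain C rather than SPDK's nvme_tcp_qpair_connect_sock() path; the address and port are taken from the log, and on a machine without that address the call may time out instead, though any local port with no listener shows the same errno.

    /* econnrefused_demo.c - minimal sketch of the connect() failure the log
     * reports as "errno = 111"; plain BSD sockets, not SPDK's TCP transport.
     * Build: cc econnrefused_demo.c -o econnrefused_demo
     */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        struct sockaddr_in addr = { 0 };
        int fd = socket(AF_INET, SOCK_STREAM, 0);

        if (fd < 0) {
            perror("socket");
            return 1;
        }

        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);                    /* NVMe/TCP port from the log */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr); /* target address from the log */

        /* With no nvmf_tgt listening, the peer answers the SYN with RST and
         * connect() fails with ECONNREFUSED (111), matching posix_sock_create. */
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }

        close(fd);
        return 0;
    }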
00:27:03.577 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3556500 Killed "${NVMF_APP[@]}" "$@"
00:27:03.577 18:21:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:27:03.577 18:21:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:27:03.577 18:21:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:27:03.577 18:21:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable
00:27:03.577 18:21:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:03.577 [2024-07-24 18:21:56.432064] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:03.577 [2024-07-24 18:21:56.432365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.577 [2024-07-24 18:21:56.432381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:03.577 [2024-07-24 18:21:56.432388] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:03.577 [2024-07-24 18:21:56.432564] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:03.577 [2024-07-24 18:21:56.432736] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:03.577 [2024-07-24 18:21:56.432744] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:03.577 [2024-07-24 18:21:56.432750] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:03.577 [2024-07-24 18:21:56.435495] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:03.577 18:21:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=3557971
00:27:03.577 18:21:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 3557971
00:27:03.577 18:21:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:27:03.577 18:21:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 3557971 ']'
00:27:03.577 18:21:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:03.577 18:21:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100
00:27:03.577 18:21:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:03.577 18:21:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable
00:27:03.577 18:21:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:03.577 [2024-07-24 18:21:56.445068] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:03.577 [2024-07-24 18:21:56.445372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.577 [2024-07-24 18:21:56.445387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:03.577 [2024-07-24 18:21:56.445393] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:03.577 [2024-07-24 18:21:56.445569] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:03.577 [2024-07-24 18:21:56.445740] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:03.577 [2024-07-24 18:21:56.445748] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:03.577 [2024-07-24 18:21:56.445754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:03.577 [2024-07-24 18:21:56.448503] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:03.577 [2024-07-24 18:21:56.458071] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:03.577 [2024-07-24 18:21:56.458420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.577 [2024-07-24 18:21:56.458443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:03.577 [2024-07-24 18:21:56.458451] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:03.577 [2024-07-24 18:21:56.458628] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:03.577 [2024-07-24 18:21:56.458801] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:03.577 [2024-07-24 18:21:56.458809] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:03.577 [2024-07-24 18:21:56.458816] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:03.577 [2024-07-24 18:21:56.461565] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:03.577 [2024-07-24 18:21:56.471141] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:03.577 [2024-07-24 18:21:56.471554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.577 [2024-07-24 18:21:56.471571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:03.577 [2024-07-24 18:21:56.471578] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:03.577 [2024-07-24 18:21:56.471750] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:03.577 [2024-07-24 18:21:56.471922] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:03.577 [2024-07-24 18:21:56.471930] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:03.577 [2024-07-24 18:21:56.471936] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:03.577 [2024-07-24 18:21:56.474702] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:03.577 [2024-07-24 18:21:56.481603] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization...
00:27:03.577 [2024-07-24 18:21:56.481641] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:27:03.577 [2024-07-24 18:21:56.484200] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:03.577 [2024-07-24 18:21:56.484642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.577 [2024-07-24 18:21:56.484658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:03.577 [2024-07-24 18:21:56.484665] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:03.577 [2024-07-24 18:21:56.484838] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:03.577 [2024-07-24 18:21:56.485010] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:03.577 [2024-07-24 18:21:56.485018] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:03.577 [2024-07-24 18:21:56.485024] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:03.577 [2024-07-24 18:21:56.487773] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
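The "Starting SPDK" and "[ DPDK EAL parameters: ... ]" entries are the restarted nvmf_tgt coming up underneath the still-failing reconnect loop. As a reading aid, and going only by standard DPDK EAL semantics rather than anything specific to this run: -c 0xE is the hexadecimal core mask, --base-virtaddr=0x200000000000 pins the base virtual address so secondary processes can map the same memory regions, --file-prefix=spdk0 keeps this instance's hugepage and runtime files separate from other DPDK processes on the node, --proc-type=auto lets EAL choose between primary and secondary roles, and --no-telemetry disables DPDK's telemetry socket. The later "EAL: No free 2048 kB hugepages reported on node 1" notice is informational; it just means the free 2 MB hugepages all happen to live on NUMA node 0.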
00:27:03.577 [2024-07-24 18:21:56.497338] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:03.577 [2024-07-24 18:21:56.497805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.577 [2024-07-24 18:21:56.497822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:03.577 [2024-07-24 18:21:56.497829] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:03.578 [2024-07-24 18:21:56.498001] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:03.578 [2024-07-24 18:21:56.498177] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:03.578 [2024-07-24 18:21:56.498185] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:03.578 [2024-07-24 18:21:56.498192] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:03.578 [2024-07-24 18:21:56.500909] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:03.578 EAL: No free 2048 kB hugepages reported on node 1
00:27:03.578 [2024-07-24 18:21:56.510363] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:03.578 [2024-07-24 18:21:56.510735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.578 [2024-07-24 18:21:56.510752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:03.578 [2024-07-24 18:21:56.510759] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:03.578 [2024-07-24 18:21:56.510930] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:03.578 [2024-07-24 18:21:56.511103] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:03.578 [2024-07-24 18:21:56.511111] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:03.578 [2024-07-24 18:21:56.511117] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:03.578 [2024-07-24 18:21:56.513868] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:03.578 [2024-07-24 18:21:56.523442] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:03.578 [2024-07-24 18:21:56.523893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.578 [2024-07-24 18:21:56.523909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:03.578 [2024-07-24 18:21:56.523916] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:03.578 [2024-07-24 18:21:56.524093] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:03.578 [2024-07-24 18:21:56.524265] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:03.578 [2024-07-24 18:21:56.524273] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:03.578 [2024-07-24 18:21:56.524279] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:03.578 [2024-07-24 18:21:56.527027] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:03.578 [2024-07-24 18:21:56.536430] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:03.578 [2024-07-24 18:21:56.536849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.578 [2024-07-24 18:21:56.536865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:03.578 [2024-07-24 18:21:56.536872] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:03.578 [2024-07-24 18:21:56.537044] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:03.578 [2024-07-24 18:21:56.537215] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:03.578 [2024-07-24 18:21:56.537223] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:03.578 [2024-07-24 18:21:56.537229] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:03.578 [2024-07-24 18:21:56.539945] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:03.578 [2024-07-24 18:21:56.542267] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:27:03.578 [2024-07-24 18:21:56.549487] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:03.578 [2024-07-24 18:21:56.549943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.578 [2024-07-24 18:21:56.549960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:03.578 [2024-07-24 18:21:56.549968] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:03.578 [2024-07-24 18:21:56.550140] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:03.578 [2024-07-24 18:21:56.550311] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:03.578 [2024-07-24 18:21:56.550320] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:03.578 [2024-07-24 18:21:56.550326] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:03.578 [2024-07-24 18:21:56.553077] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:03.578 [2024-07-24 18:21:56.562536] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:03.578 [2024-07-24 18:21:56.562924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.578 [2024-07-24 18:21:56.562939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:03.578 [2024-07-24 18:21:56.562947] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:03.578 [2024-07-24 18:21:56.563119] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:03.578 [2024-07-24 18:21:56.563291] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:03.578 [2024-07-24 18:21:56.563299] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:03.578 [2024-07-24 18:21:56.563310] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:03.578 [2024-07-24 18:21:56.566064] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:03.578 [2024-07-24 18:21:56.575498] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:03.578 [2024-07-24 18:21:56.575947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.578 [2024-07-24 18:21:56.575963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:03.578 [2024-07-24 18:21:56.575970] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:03.578 [2024-07-24 18:21:56.576141] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:03.578 [2024-07-24 18:21:56.576313] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:03.578 [2024-07-24 18:21:56.576320] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:03.578 [2024-07-24 18:21:56.576327] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:03.578 [2024-07-24 18:21:56.579047] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:03.578 [2024-07-24 18:21:56.588549] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:03.578 [2024-07-24 18:21:56.589022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.578 [2024-07-24 18:21:56.589043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:03.578 [2024-07-24 18:21:56.589051] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:03.578 [2024-07-24 18:21:56.589225] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:03.578 [2024-07-24 18:21:56.589399] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:03.578 [2024-07-24 18:21:56.589407] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:03.578 [2024-07-24 18:21:56.589415] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:03.578 [2024-07-24 18:21:56.592135] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:03.578 [2024-07-24 18:21:56.601588] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:03.578 [2024-07-24 18:21:56.602025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.578 [2024-07-24 18:21:56.602041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:03.578 [2024-07-24 18:21:56.602049] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:03.578 [2024-07-24 18:21:56.602216] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:03.578 [2024-07-24 18:21:56.602384] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:03.578 [2024-07-24 18:21:56.602393] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:03.578 [2024-07-24 18:21:56.602399] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:03.578 [2024-07-24 18:21:56.605099] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:03.578 [2024-07-24 18:21:56.614692] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:03.578 [2024-07-24 18:21:56.615067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.578 [2024-07-24 18:21:56.615083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:03.578 [2024-07-24 18:21:56.615090] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:03.578 [2024-07-24 18:21:56.615261] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:03.578 [2024-07-24 18:21:56.615434] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:03.578 [2024-07-24 18:21:56.615443] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:03.578 [2024-07-24 18:21:56.615449] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:03.578 [2024-07-24 18:21:56.618197] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:03.578 [2024-07-24 18:21:56.622834] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:27:03.578 [2024-07-24 18:21:56.622858] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:27:03.578 [2024-07-24 18:21:56.622865] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:27:03.578 [2024-07-24 18:21:56.622871] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:27:03.578 [2024-07-24 18:21:56.622876] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
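The five app_setup_trace notices are the restarted target advertising its trace buffer: the -e 0xFFFF flag on the nvmf_tgt command line enabled every tracepoint group, a live snapshot can be pulled with the spdk_trace -s nvmf -i 0 command the log itself suggests, and the same data survives the run in /dev/shm/nvmf_trace.0 for offline analysis, provided it is copied off the node before workspace cleanup. Both commands are quoted directly from the notices above.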
00:27:03.579 [2024-07-24 18:21:56.622912] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:27:03.579 [2024-07-24 18:21:56.622999] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:27:03.579 [2024-07-24 18:21:56.623000] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:27:03.579 [2024-07-24 18:21:56.627770] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:03.579 [2024-07-24 18:21:56.628148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.579 [2024-07-24 18:21:56.628166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:03.579 [2024-07-24 18:21:56.628174] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:03.579 [2024-07-24 18:21:56.628347] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:03.579 [2024-07-24 18:21:56.628525] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:03.579 [2024-07-24 18:21:56.628534] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:03.579 [2024-07-24 18:21:56.628541] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:03.579 [2024-07-24 18:21:56.631288] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:03.579 [2024-07-24 18:21:56.640860] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:03.579 [2024-07-24 18:21:56.641300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.579 [2024-07-24 18:21:56.641320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:03.579 [2024-07-24 18:21:56.641328] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:03.579 [2024-07-24 18:21:56.641507] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:03.579 [2024-07-24 18:21:56.641681] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:03.579 [2024-07-24 18:21:56.641691] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:03.579 [2024-07-24 18:21:56.641707] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:03.579 [2024-07-24 18:21:56.644453] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
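The three reactor threads line up with the -m 0xE mask handed to nvmf_tgt when it was restarted: 0xE is binary 1110, selecting cores 1, 2 and 3 while leaving core 0 to the rest of the harness, which is why spdk_app_start reported three cores available. A small sketch of the mask arithmetic follows; it is a plain-C illustration only, not SPDK's own parser, which handles the mask inside its env layer.

    /* coremask_demo.c - decode a hex core mask like nvmf_tgt's "-m 0xE". */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        unsigned long mask = strtoul("0xE", NULL, 16); /* mask from the log */

        printf("mask 0x%lX selects cores:", mask);
        for (unsigned core = 0; mask != 0; core++, mask >>= 1) {
            if (mask & 1) {
                printf(" %u", core);
            }
        }
        printf("\n"); /* prints: mask 0xE selects cores: 1 2 3 */
        return 0;
    }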
00:27:03.579 [2024-07-24 18:21:56.653867] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:03.579 [2024-07-24 18:21:56.654328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.579 [2024-07-24 18:21:56.654347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:03.579 [2024-07-24 18:21:56.654355] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:03.579 [2024-07-24 18:21:56.654533] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:03.579 [2024-07-24 18:21:56.654708] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:03.579 [2024-07-24 18:21:56.654717] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:03.579 [2024-07-24 18:21:56.654723] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:03.893 [2024-07-24 18:21:56.657464] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:03.893 [2024-07-24 18:21:56.666876] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:03.893 [2024-07-24 18:21:56.667304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.893 [2024-07-24 18:21:56.667323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:03.893 [2024-07-24 18:21:56.667331] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:03.893 [2024-07-24 18:21:56.667510] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:03.893 [2024-07-24 18:21:56.667684] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:03.893 [2024-07-24 18:21:56.667693] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:03.893 [2024-07-24 18:21:56.667700] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:03.893 [2024-07-24 18:21:56.670448] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:03.893 [2024-07-24 18:21:56.679869] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:03.893 [2024-07-24 18:21:56.680318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.893 [2024-07-24 18:21:56.680337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:03.893 [2024-07-24 18:21:56.680345] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:03.893 [2024-07-24 18:21:56.680523] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:03.893 [2024-07-24 18:21:56.680697] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:03.893 [2024-07-24 18:21:56.680706] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:03.893 [2024-07-24 18:21:56.680714] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:03.893 [2024-07-24 18:21:56.683458] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:03.893 [2024-07-24 18:21:56.692867] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:03.893 [2024-07-24 18:21:56.693239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.893 [2024-07-24 18:21:56.693255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:03.893 [2024-07-24 18:21:56.693262] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:03.893 [2024-07-24 18:21:56.693435] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:03.893 [2024-07-24 18:21:56.693611] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:03.893 [2024-07-24 18:21:56.693621] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:03.893 [2024-07-24 18:21:56.693627] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:03.894 [2024-07-24 18:21:56.696370] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:03.894 [2024-07-24 18:21:56.705938] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:03.894 [2024-07-24 18:21:56.706301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.894 [2024-07-24 18:21:56.706317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:03.894 [2024-07-24 18:21:56.706325] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:03.894 [2024-07-24 18:21:56.706498] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:03.894 [2024-07-24 18:21:56.706687] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:03.894 [2024-07-24 18:21:56.706696] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:03.894 [2024-07-24 18:21:56.706703] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:03.894 [2024-07-24 18:21:56.709455] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:03.894 [2024-07-24 18:21:56.719003] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:03.894 [2024-07-24 18:21:56.719443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.894 [2024-07-24 18:21:56.719460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:03.894 [2024-07-24 18:21:56.719467] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:03.894 [2024-07-24 18:21:56.719644] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:03.894 [2024-07-24 18:21:56.719817] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:03.894 [2024-07-24 18:21:56.719827] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:03.894 [2024-07-24 18:21:56.719833] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:03.894 [2024-07-24 18:21:56.722578] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:03.894 [2024-07-24 18:21:56.732061] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:03.894 [2024-07-24 18:21:56.732429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.894 [2024-07-24 18:21:56.732446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:03.894 [2024-07-24 18:21:56.732453] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:03.894 [2024-07-24 18:21:56.732634] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:03.894 [2024-07-24 18:21:56.732807] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:03.894 [2024-07-24 18:21:56.732816] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:03.894 [2024-07-24 18:21:56.732822] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:03.894 [2024-07-24 18:21:56.735568] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:03.894 [2024-07-24 18:21:56.745127] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:03.894 [2024-07-24 18:21:56.745499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.894 [2024-07-24 18:21:56.745516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:03.894 [2024-07-24 18:21:56.745523] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:03.894 [2024-07-24 18:21:56.745696] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:03.894 [2024-07-24 18:21:56.745868] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:03.894 [2024-07-24 18:21:56.745877] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:03.894 [2024-07-24 18:21:56.745883] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:03.894 [2024-07-24 18:21:56.748634] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:03.894 [2024-07-24 18:21:56.758199] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:03.894 [2024-07-24 18:21:56.758566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.894 [2024-07-24 18:21:56.758582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:03.894 [2024-07-24 18:21:56.758590] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:03.894 [2024-07-24 18:21:56.758762] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:03.894 [2024-07-24 18:21:56.758934] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:03.894 [2024-07-24 18:21:56.758944] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:03.894 [2024-07-24 18:21:56.758950] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:03.894 [2024-07-24 18:21:56.761698] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:03.894 [2024-07-24 18:21:56.771269] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:03.894 [2024-07-24 18:21:56.771665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.894 [2024-07-24 18:21:56.771682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:03.894 [2024-07-24 18:21:56.771690] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:03.894 [2024-07-24 18:21:56.771862] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:03.894 [2024-07-24 18:21:56.772034] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:03.894 [2024-07-24 18:21:56.772043] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:03.894 [2024-07-24 18:21:56.772053] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:03.894 [2024-07-24 18:21:56.774803] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:03.894 [2024-07-24 18:21:56.784236] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:03.894 [2024-07-24 18:21:56.784600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.894 [2024-07-24 18:21:56.784617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:03.894 [2024-07-24 18:21:56.784625] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:03.894 [2024-07-24 18:21:56.784796] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:03.894 [2024-07-24 18:21:56.784969] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:03.894 [2024-07-24 18:21:56.784978] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:03.894 [2024-07-24 18:21:56.784984] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:03.894 [2024-07-24 18:21:56.787731] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:03.894 [2024-07-24 18:21:56.797292] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:03.894 [2024-07-24 18:21:56.797733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.894 [2024-07-24 18:21:56.797750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:03.894 [2024-07-24 18:21:56.797757] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:03.894 [2024-07-24 18:21:56.797931] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:03.894 [2024-07-24 18:21:56.798102] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:03.894 [2024-07-24 18:21:56.798112] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:03.894 [2024-07-24 18:21:56.798118] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:03.894 [2024-07-24 18:21:56.800866] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:03.894 [2024-07-24 18:21:56.810266] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:03.894 [2024-07-24 18:21:56.810679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.894 [2024-07-24 18:21:56.810695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:03.894 [2024-07-24 18:21:56.810703] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:03.894 [2024-07-24 18:21:56.810874] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:03.894 [2024-07-24 18:21:56.811046] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:03.894 [2024-07-24 18:21:56.811055] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:03.894 [2024-07-24 18:21:56.811061] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:03.894 [2024-07-24 18:21:56.813806] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:03.894 [2024-07-24 18:21:56.823214] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:03.894 [2024-07-24 18:21:56.823596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.894 [2024-07-24 18:21:56.823616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:03.894 [2024-07-24 18:21:56.823623] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:03.894 [2024-07-24 18:21:56.823802] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:03.894 [2024-07-24 18:21:56.823969] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:03.895 [2024-07-24 18:21:56.823978] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:03.895 [2024-07-24 18:21:56.823984] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:03.895 [2024-07-24 18:21:56.826732] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:03.895 [2024-07-24 18:21:56.836295] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:03.895 [2024-07-24 18:21:56.836713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.895 [2024-07-24 18:21:56.836730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:03.895 [2024-07-24 18:21:56.836738] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:03.895 [2024-07-24 18:21:56.836909] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:03.895 [2024-07-24 18:21:56.837082] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:03.895 [2024-07-24 18:21:56.837091] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:03.895 [2024-07-24 18:21:56.837097] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:03.895 [2024-07-24 18:21:56.839845] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:03.895 [2024-07-24 18:21:56.849241] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:03.895 [2024-07-24 18:21:56.849675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.895 [2024-07-24 18:21:56.849691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:03.895 [2024-07-24 18:21:56.849698] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:03.895 [2024-07-24 18:21:56.849881] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:03.895 [2024-07-24 18:21:56.850048] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:03.895 [2024-07-24 18:21:56.850057] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:03.895 [2024-07-24 18:21:56.850064] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:03.895 [2024-07-24 18:21:56.852815] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:03.895 [2024-07-24 18:21:56.862214] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:03.895 [2024-07-24 18:21:56.862624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.895 [2024-07-24 18:21:56.862641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:03.895 [2024-07-24 18:21:56.862649] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:03.895 [2024-07-24 18:21:56.862821] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:03.895 [2024-07-24 18:21:56.862998] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:03.895 [2024-07-24 18:21:56.863008] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:03.895 [2024-07-24 18:21:56.863015] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:03.895 [2024-07-24 18:21:56.865766] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:03.895 [2024-07-24 18:21:56.875173] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:03.895 [2024-07-24 18:21:56.875542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.895 [2024-07-24 18:21:56.875559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420
00:27:03.895 [2024-07-24 18:21:56.875566] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set
00:27:03.895 [2024-07-24 18:21:56.875739] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor
00:27:03.895 [2024-07-24 18:21:56.875911] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:03.895 [2024-07-24 18:21:56.875920] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:03.895 [2024-07-24 18:21:56.875926] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:03.895 [2024-07-24 18:21:56.878680] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:03.895 [2024-07-24 18:21:56.888247] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.895 [2024-07-24 18:21:56.888660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.895 [2024-07-24 18:21:56.888677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:03.895 [2024-07-24 18:21:56.888684] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:03.895 [2024-07-24 18:21:56.888858] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:03.895 [2024-07-24 18:21:56.889030] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.895 [2024-07-24 18:21:56.889039] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.895 [2024-07-24 18:21:56.889046] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.895 [2024-07-24 18:21:56.891792] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:03.895 [2024-07-24 18:21:56.901394] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.895 [2024-07-24 18:21:56.901732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.895 [2024-07-24 18:21:56.901751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:03.895 [2024-07-24 18:21:56.901758] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:03.895 [2024-07-24 18:21:56.901932] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:03.895 [2024-07-24 18:21:56.902105] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.895 [2024-07-24 18:21:56.902114] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.895 [2024-07-24 18:21:56.902121] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.895 [2024-07-24 18:21:56.904880] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:03.895 [2024-07-24 18:21:56.914464] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.895 [2024-07-24 18:21:56.914906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.895 [2024-07-24 18:21:56.914924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:03.895 [2024-07-24 18:21:56.914932] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:03.895 [2024-07-24 18:21:56.915104] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:03.895 [2024-07-24 18:21:56.915276] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.895 [2024-07-24 18:21:56.915285] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.895 [2024-07-24 18:21:56.915291] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.895 [2024-07-24 18:21:56.918056] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:03.895 [2024-07-24 18:21:56.927464] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.895 [2024-07-24 18:21:56.927782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.895 [2024-07-24 18:21:56.927800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:03.895 [2024-07-24 18:21:56.927808] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:03.895 [2024-07-24 18:21:56.927981] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:03.895 [2024-07-24 18:21:56.928154] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.895 [2024-07-24 18:21:56.928163] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.895 [2024-07-24 18:21:56.928169] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.895 [2024-07-24 18:21:56.930916] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:03.895 [2024-07-24 18:21:56.940482] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.895 [2024-07-24 18:21:56.940902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.895 [2024-07-24 18:21:56.940919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:03.895 [2024-07-24 18:21:56.940926] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:03.895 [2024-07-24 18:21:56.941098] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:03.895 [2024-07-24 18:21:56.941270] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.895 [2024-07-24 18:21:56.941279] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.895 [2024-07-24 18:21:56.941286] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.895 [2024-07-24 18:21:56.944030] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:03.895 [2024-07-24 18:21:56.953432] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.895 [2024-07-24 18:21:56.953853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.895 [2024-07-24 18:21:56.953870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:03.895 [2024-07-24 18:21:56.953880] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:03.895 [2024-07-24 18:21:56.954053] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:03.896 [2024-07-24 18:21:56.954225] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.896 [2024-07-24 18:21:56.954234] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.896 [2024-07-24 18:21:56.954241] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.896 [2024-07-24 18:21:56.956989] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:03.896 [2024-07-24 18:21:56.966389] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.896 [2024-07-24 18:21:56.966834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.896 [2024-07-24 18:21:56.966850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:03.896 [2024-07-24 18:21:56.966858] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:03.896 [2024-07-24 18:21:56.967030] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:03.896 [2024-07-24 18:21:56.967202] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.896 [2024-07-24 18:21:56.967211] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.896 [2024-07-24 18:21:56.967218] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.896 [2024-07-24 18:21:56.969964] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:04.153 [2024-07-24 18:21:56.979368] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.153 [2024-07-24 18:21:56.979813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.153 [2024-07-24 18:21:56.979830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:04.153 [2024-07-24 18:21:56.979837] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:04.153 [2024-07-24 18:21:56.980009] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:04.153 [2024-07-24 18:21:56.980181] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.153 [2024-07-24 18:21:56.980190] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.153 [2024-07-24 18:21:56.980196] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.153 [2024-07-24 18:21:56.982949] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:04.153 [2024-07-24 18:21:56.992349] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.153 [2024-07-24 18:21:56.992790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.153 [2024-07-24 18:21:56.992807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:04.153 [2024-07-24 18:21:56.992814] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:04.153 [2024-07-24 18:21:56.992987] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:04.153 [2024-07-24 18:21:56.993160] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.153 [2024-07-24 18:21:56.993172] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.153 [2024-07-24 18:21:56.993179] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.153 [2024-07-24 18:21:56.995925] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:04.153 [2024-07-24 18:21:57.005331] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.153 [2024-07-24 18:21:57.005761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.153 [2024-07-24 18:21:57.005778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:04.153 [2024-07-24 18:21:57.005786] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:04.153 [2024-07-24 18:21:57.005958] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:04.153 [2024-07-24 18:21:57.006131] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.153 [2024-07-24 18:21:57.006140] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.153 [2024-07-24 18:21:57.006146] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.153 [2024-07-24 18:21:57.008902] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:04.153 [2024-07-24 18:21:57.018306] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.153 [2024-07-24 18:21:57.018755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.153 [2024-07-24 18:21:57.018771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:04.153 [2024-07-24 18:21:57.018778] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:04.153 [2024-07-24 18:21:57.018950] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:04.153 [2024-07-24 18:21:57.019122] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.153 [2024-07-24 18:21:57.019132] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.153 [2024-07-24 18:21:57.019138] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.153 [2024-07-24 18:21:57.021886] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:04.153 [2024-07-24 18:21:57.031285] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.153 [2024-07-24 18:21:57.031682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.153 [2024-07-24 18:21:57.031700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:04.153 [2024-07-24 18:21:57.031707] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:04.153 [2024-07-24 18:21:57.031880] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:04.153 [2024-07-24 18:21:57.032052] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.153 [2024-07-24 18:21:57.032061] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.153 [2024-07-24 18:21:57.032067] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.153 [2024-07-24 18:21:57.034816] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:04.153 [2024-07-24 18:21:57.044388] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.153 [2024-07-24 18:21:57.044829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.153 [2024-07-24 18:21:57.044845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:04.153 [2024-07-24 18:21:57.044853] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:04.153 [2024-07-24 18:21:57.045026] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:04.153 [2024-07-24 18:21:57.045198] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.153 [2024-07-24 18:21:57.045207] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.153 [2024-07-24 18:21:57.045214] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.153 [2024-07-24 18:21:57.047958] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:04.153 [2024-07-24 18:21:57.057354] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.153 [2024-07-24 18:21:57.057790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.153 [2024-07-24 18:21:57.057807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:04.153 [2024-07-24 18:21:57.057814] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:04.153 [2024-07-24 18:21:57.057986] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:04.153 [2024-07-24 18:21:57.058158] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.153 [2024-07-24 18:21:57.058167] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.153 [2024-07-24 18:21:57.058173] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.153 [2024-07-24 18:21:57.060918] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:04.153 [2024-07-24 18:21:57.070315] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.153 [2024-07-24 18:21:57.070673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.153 [2024-07-24 18:21:57.070690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:04.153 [2024-07-24 18:21:57.070697] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:04.153 [2024-07-24 18:21:57.070869] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:04.154 [2024-07-24 18:21:57.071042] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.154 [2024-07-24 18:21:57.071051] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.154 [2024-07-24 18:21:57.071058] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.154 [2024-07-24 18:21:57.073816] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:04.154 [2024-07-24 18:21:57.083374] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.154 [2024-07-24 18:21:57.083814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.154 [2024-07-24 18:21:57.083831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:04.154 [2024-07-24 18:21:57.083838] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:04.154 [2024-07-24 18:21:57.084013] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:04.154 [2024-07-24 18:21:57.084185] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.154 [2024-07-24 18:21:57.084194] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.154 [2024-07-24 18:21:57.084201] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.154 [2024-07-24 18:21:57.086954] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:04.154 [2024-07-24 18:21:57.096351] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.154 [2024-07-24 18:21:57.096773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.154 [2024-07-24 18:21:57.096790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:04.154 [2024-07-24 18:21:57.096797] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:04.154 [2024-07-24 18:21:57.096970] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:04.154 [2024-07-24 18:21:57.097142] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.154 [2024-07-24 18:21:57.097151] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.154 [2024-07-24 18:21:57.097158] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.154 [2024-07-24 18:21:57.099905] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:04.154 [2024-07-24 18:21:57.109318] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.154 [2024-07-24 18:21:57.109653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.154 [2024-07-24 18:21:57.109671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:04.154 [2024-07-24 18:21:57.109678] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:04.154 [2024-07-24 18:21:57.109849] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:04.154 [2024-07-24 18:21:57.110022] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.154 [2024-07-24 18:21:57.110031] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.154 [2024-07-24 18:21:57.110038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.154 [2024-07-24 18:21:57.112789] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:04.154 [2024-07-24 18:21:57.122353] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.154 [2024-07-24 18:21:57.122793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.154 [2024-07-24 18:21:57.122809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:04.154 [2024-07-24 18:21:57.122816] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:04.154 [2024-07-24 18:21:57.122988] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:04.154 [2024-07-24 18:21:57.123159] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.154 [2024-07-24 18:21:57.123167] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.154 [2024-07-24 18:21:57.123177] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.154 [2024-07-24 18:21:57.125923] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:04.154 [2024-07-24 18:21:57.135334] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.154 [2024-07-24 18:21:57.135778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.154 [2024-07-24 18:21:57.135795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:04.154 [2024-07-24 18:21:57.135802] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:04.154 [2024-07-24 18:21:57.135974] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:04.154 [2024-07-24 18:21:57.136145] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.154 [2024-07-24 18:21:57.136154] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.154 [2024-07-24 18:21:57.136160] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.154 [2024-07-24 18:21:57.138905] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:04.154 [2024-07-24 18:21:57.148312] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.154 [2024-07-24 18:21:57.148760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.154 [2024-07-24 18:21:57.148777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:04.154 [2024-07-24 18:21:57.148784] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:04.154 [2024-07-24 18:21:57.148956] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:04.154 [2024-07-24 18:21:57.149128] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.154 [2024-07-24 18:21:57.149136] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.154 [2024-07-24 18:21:57.149142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.154 [2024-07-24 18:21:57.151889] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:04.154 [2024-07-24 18:21:57.161291] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.154 [2024-07-24 18:21:57.161741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.154 [2024-07-24 18:21:57.161757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:04.154 [2024-07-24 18:21:57.161764] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:04.154 [2024-07-24 18:21:57.161935] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:04.154 [2024-07-24 18:21:57.162107] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.154 [2024-07-24 18:21:57.162115] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.154 [2024-07-24 18:21:57.162121] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.154 [2024-07-24 18:21:57.164871] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:04.154 [2024-07-24 18:21:57.174270] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.154 [2024-07-24 18:21:57.174708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.154 [2024-07-24 18:21:57.174724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:04.154 [2024-07-24 18:21:57.174732] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:04.154 [2024-07-24 18:21:57.174904] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:04.154 [2024-07-24 18:21:57.175077] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.154 [2024-07-24 18:21:57.175085] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.154 [2024-07-24 18:21:57.175092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.154 [2024-07-24 18:21:57.177834] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:04.154 [2024-07-24 18:21:57.187233] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.154 [2024-07-24 18:21:57.187675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.154 [2024-07-24 18:21:57.187691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:04.154 [2024-07-24 18:21:57.187698] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:04.154 [2024-07-24 18:21:57.187869] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:04.154 [2024-07-24 18:21:57.188040] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.154 [2024-07-24 18:21:57.188048] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.154 [2024-07-24 18:21:57.188054] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.154 [2024-07-24 18:21:57.190802] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:04.154 [2024-07-24 18:21:57.200201] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.154 [2024-07-24 18:21:57.200537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.154 [2024-07-24 18:21:57.200553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:04.154 [2024-07-24 18:21:57.200560] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:04.154 [2024-07-24 18:21:57.200732] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:04.154 [2024-07-24 18:21:57.200905] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.154 [2024-07-24 18:21:57.200913] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.154 [2024-07-24 18:21:57.200919] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.154 [2024-07-24 18:21:57.203668] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:04.154 [2024-07-24 18:21:57.213234] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.154 [2024-07-24 18:21:57.213675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.154 [2024-07-24 18:21:57.213692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:04.154 [2024-07-24 18:21:57.213700] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:04.154 [2024-07-24 18:21:57.213873] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:04.154 [2024-07-24 18:21:57.214049] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.154 [2024-07-24 18:21:57.214058] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.154 [2024-07-24 18:21:57.214065] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.154 [2024-07-24 18:21:57.216813] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:04.154 [2024-07-24 18:21:57.226210] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.154 [2024-07-24 18:21:57.226628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.154 [2024-07-24 18:21:57.226645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:04.154 [2024-07-24 18:21:57.226652] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:04.154 [2024-07-24 18:21:57.226823] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:04.154 [2024-07-24 18:21:57.226994] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.154 [2024-07-24 18:21:57.227002] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.154 [2024-07-24 18:21:57.227008] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.154 [2024-07-24 18:21:57.229754] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:04.413 [2024-07-24 18:21:57.239312] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.413 [2024-07-24 18:21:57.239636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.413 [2024-07-24 18:21:57.239652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:04.413 [2024-07-24 18:21:57.239659] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:04.413 [2024-07-24 18:21:57.239831] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:04.413 [2024-07-24 18:21:57.240003] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.413 [2024-07-24 18:21:57.240011] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.414 [2024-07-24 18:21:57.240017] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.414 [2024-07-24 18:21:57.242765] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:04.414 [2024-07-24 18:21:57.252342] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.414 [2024-07-24 18:21:57.252793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.414 [2024-07-24 18:21:57.252810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:04.414 [2024-07-24 18:21:57.252816] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:04.414 [2024-07-24 18:21:57.252988] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:04.414 [2024-07-24 18:21:57.253160] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.414 [2024-07-24 18:21:57.253170] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.414 [2024-07-24 18:21:57.253176] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.414 [2024-07-24 18:21:57.255927] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:04.414 [2024-07-24 18:21:57.265335] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.414 [2024-07-24 18:21:57.265755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.414 [2024-07-24 18:21:57.265771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:04.414 [2024-07-24 18:21:57.265778] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:04.414 [2024-07-24 18:21:57.265950] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:04.414 [2024-07-24 18:21:57.266121] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.414 [2024-07-24 18:21:57.266129] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.414 [2024-07-24 18:21:57.266135] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.414 [2024-07-24 18:21:57.268881] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
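(Context: errno = 111 is ECONNREFUSED. The bdevperf host is retrying against a target whose TCP listener has not been created yet, so every reset attempt dies in connect() and bdev_nvme gives up with "Resetting controller failed." A minimal way to provoke the same loop outside this harness is sketched below; the paths, bdev name, and reconnect options are assumptions for illustration, not values taken from this run.)

    # Start bdevperf waiting for RPC, then attach a controller whose listener
    # does not exist yet. With an unbounded reconnect policy, the host replays
    # the nine-record failure cycle above until something starts listening on
    # 10.0.0.2:4420.
    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 &
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b Nvme1 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec -1 --reconnect-delay-sec 1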
00:27:04.414 [2024-07-24 18:21:57.278280] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
[... same errno = 111 failure cycle ...]
00:27:04.414 [2024-07-24 18:21:57.281832] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:04.414 18:21:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:27:04.414 18:21:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0
00:27:04.414 18:21:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:27:04.414 18:21:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable
00:27:04.414 18:21:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:04.414 [2024-07-24 18:21:57.291240] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
[... same errno = 111 failure cycle ...]
00:27:04.414 [2024-07-24 18:21:57.294780] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:04.414 [2024-07-24 18:21:57.304193] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
[... same errno = 111 failure cycle ...]
00:27:04.414 [2024-07-24 18:21:57.307765] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:04.414 [2024-07-24 18:21:57.317179] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
[... same errno = 111 failure cycle ...]
00:27:04.414 [2024-07-24 18:21:57.320732] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:04.414 18:21:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:27:04.414 18:21:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:27:04.414 18:21:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:04.414 18:21:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:04.414 [2024-07-24 18:21:57.330135] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
[... same errno = 111 failure cycle ...]
00:27:04.414 [2024-07-24 18:21:57.333644] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:04.414 [2024-07-24 18:21:57.333731] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:27:04.414 [2024-07-24 18:21:57.343201] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
[... same errno = 111 failure cycle ...]
00:27:04.414 [2024-07-24 18:21:57.346744] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
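(Context: the rpc_cmd calls at host/bdevperf.sh@17 through @21, scattered through the reconnect noise here and below, are the entire target bring-up. Consolidated as a sketch against scripts/rpc.py directly; the flag readings are a gloss, not output from this run: -o disables the TCP C2H success optimization and -u 8192 sets the io_unit_size.)

    # Bring up an NVMe/TCP target backed by a 64 MiB malloc bdev with 512-byte blocks.
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    # The host's reconnect loop can only converge once this listener exists:
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420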
00:27:04.414 [2024-07-24 18:21:57.356301] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
[... same errno = 111 failure cycle ...]
00:27:04.414 [2024-07-24 18:21:57.359830] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:04.415 18:21:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:04.415 18:21:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:27:04.415 18:21:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:27:04.415 18:21:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:04.415 [2024-07-24 18:21:57.369389] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
[... same errno = 111 failure cycle ...]
00:27:04.415 [2024-07-24 18:21:57.372897] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:04.415 Malloc0 00:27:04.415 [2024-07-24 18:21:57.382464] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.415 18:21:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.415 [2024-07-24 18:21:57.382894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.415 [2024-07-24 18:21:57.382912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:04.415 [2024-07-24 18:21:57.382920] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:04.415 [2024-07-24 18:21:57.383092] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:04.415 18:21:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:04.415 [2024-07-24 18:21:57.383263] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.415 [2024-07-24 18:21:57.383272] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.415 [2024-07-24 18:21:57.383278] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.415 18:21:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.415 18:21:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:04.415 [2024-07-24 18:21:57.386022] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:04.415 18:21:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.415 18:21:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:04.415 18:21:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.415 18:21:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:04.415 [2024-07-24 18:21:57.395420] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.415 [2024-07-24 18:21:57.395821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.415 [2024-07-24 18:21:57.395838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24e5980 with addr=10.0.0.2, port=4420 00:27:04.415 [2024-07-24 18:21:57.395845] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24e5980 is same with the state(5) to be set 00:27:04.415 [2024-07-24 18:21:57.396017] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24e5980 (9): Bad file descriptor 00:27:04.415 [2024-07-24 18:21:57.396188] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.415 [2024-07-24 18:21:57.396196] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.415 [2024-07-24 18:21:57.396203] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:27:04.415 [2024-07-24 18:21:57.398951] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:04.415 18:21:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.415 18:21:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:04.415 18:21:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.415 18:21:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:04.415 [2024-07-24 18:21:57.406002] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:04.415 [2024-07-24 18:21:57.408513] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.415 18:21:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.415 18:21:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3556864 [2024-07-24 18:21:57.438961] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:27:14.446
00:27:14.446 Latency(us)
00:27:14.446 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:14.446 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:27:14.446 Verification LBA range: start 0x0 length 0x4000
00:27:14.446 Nvme1n1 : 15.01 8311.90 32.47 12999.93 0.00 5986.62 639.76 16852.11
00:27:14.446 ===================================================================================================================
00:27:14.446 Total : 8311.90 32.47 12999.93 0.00 5986.62 639.76 16852.11
00:27:14.446 18:22:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:27:14.446 18:22:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:14.446 18:22:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.446 18:22:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:14.446 18:22:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.446 18:22:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:27:14.446 18:22:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:27:14.446 18:22:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:14.446 18:22:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:27:14.446 18:22:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:14.446 18:22:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:27:14.446 18:22:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:14.446 18:22:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:14.446 rmmod nvme_tcp 00:27:14.446 rmmod nvme_fabrics 00:27:14.446 rmmod nvme_keyring 00:27:14.446 18:22:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:14.446 18:22:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:27:14.446 18:22:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:27:14.446 18:22:06 nvmf_tcp.nvmf_host.nvmf_bdevperf --
nvmf/common.sh@489 -- # '[' -n 3557971 ']' 00:27:14.446 18:22:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 3557971 00:27:14.446 18:22:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 3557971 ']' 00:27:14.446 18:22:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # kill -0 3557971 00:27:14.446 18:22:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # uname 00:27:14.446 18:22:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:14.446 18:22:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3557971 00:27:14.446 18:22:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:14.446 18:22:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:14.446 18:22:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3557971' 00:27:14.446 killing process with pid 3557971 00:27:14.446 18:22:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@969 -- # kill 3557971 00:27:14.446 18:22:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@974 -- # wait 3557971 00:27:14.446 18:22:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:14.446 18:22:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:14.446 18:22:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:14.446 18:22:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:14.446 18:22:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:14.446 18:22:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:14.446 18:22:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:14.446 18:22:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:15.383 18:22:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:15.383 00:27:15.383 real 0m26.355s 00:27:15.383 user 1m3.130s 00:27:15.383 sys 0m6.323s 00:27:15.383 18:22:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:15.383 18:22:08 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:15.383 ************************************ 00:27:15.383 END TEST nvmf_bdevperf 00:27:15.383 ************************************ 00:27:15.383 18:22:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:27:15.383 18:22:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:15.383 18:22:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:15.383 18:22:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.642 ************************************ 00:27:15.642 START TEST nvmf_target_disconnect 00:27:15.642 ************************************ 00:27:15.642 18:22:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:27:15.642 * Looking for test storage... 00:27:15.642 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:15.642 18:22:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:15.642 18:22:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:27:15.642 18:22:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:15.642 18:22:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:15.642 18:22:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:15.642 18:22:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:15.642 18:22:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:15.642 18:22:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:15.642 18:22:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:15.642 18:22:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:15.642 18:22:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:15.642 18:22:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:15.642 18:22:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:27:15.642 18:22:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:27:15.642 18:22:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:15.642 18:22:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:15.642 18:22:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:15.642 18:22:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:15.642 18:22:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:15.642 18:22:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:15.642 18:22:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:15.642 18:22:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:15.642 18:22:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:15.643 18:22:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:15.643 18:22:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:15.643 18:22:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:27:15.643 18:22:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:15.643 18:22:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:27:15.643 18:22:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:15.643 18:22:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:15.643 18:22:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:15.643 18:22:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:15.643 18:22:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:15.643 18:22:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:15.643 18:22:08 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:15.643 18:22:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:15.643 18:22:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:27:15.643 18:22:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:27:15.643 18:22:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:27:15.643 18:22:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:27:15.643 18:22:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:15.643 18:22:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:15.643 18:22:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:15.643 18:22:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:15.643 18:22:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:15.643 18:22:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:15.643 18:22:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:15.643 18:22:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:15.643 18:22:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:15.643 18:22:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:15.643 18:22:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:27:15.643 18:22:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:20.912 18:22:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:20.912 18:22:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:27:20.912 18:22:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:20.912 18:22:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:20.912 18:22:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:20.912 18:22:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:20.912 18:22:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:20.912 18:22:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:27:20.912 18:22:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:20.912 18:22:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:27:20.912 18:22:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:27:20.912 18:22:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:27:20.912 18:22:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:27:20.912 18:22:13 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:27:20.912 18:22:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:27:20.912 18:22:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:20.912 18:22:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:20.912 18:22:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:20.912 18:22:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:20.912 18:22:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:20.912 18:22:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:20.912 18:22:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:20.912 18:22:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:20.912 18:22:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:20.912 18:22:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:20.912 18:22:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:20.912 18:22:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:20.912 18:22:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:20.912 18:22:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:20.912 18:22:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:20.912 18:22:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:20.912 18:22:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:20.912 18:22:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:20.912 18:22:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:20.912 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:20.912 18:22:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:20.912 18:22:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:20.912 18:22:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:20.912 18:22:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:20.912 18:22:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:20.913 18:22:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:20.913 18:22:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:20.913 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:20.913 18:22:13 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:20.913 18:22:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:20.913 18:22:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:20.913 18:22:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:20.913 18:22:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:20.913 18:22:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:20.913 18:22:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:20.913 18:22:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:20.913 18:22:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:20.913 18:22:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:20.913 18:22:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:20.913 18:22:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:20.913 18:22:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:20.913 18:22:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:20.913 18:22:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:20.913 18:22:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:20.913 Found net devices under 0000:86:00.0: cvl_0_0 00:27:20.913 18:22:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:20.913 18:22:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:20.913 18:22:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:20.913 18:22:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:20.913 18:22:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:20.913 18:22:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:20.913 18:22:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:20.913 18:22:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:20.913 18:22:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:20.913 Found net devices under 0000:86:00.1: cvl_0_1 00:27:20.913 18:22:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:20.913 18:22:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:20.913 18:22:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:27:20.913 18:22:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:20.913 18:22:13 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:20.913 18:22:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:20.913 18:22:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:20.913 18:22:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:20.913 18:22:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:20.913 18:22:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:20.913 18:22:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:20.913 18:22:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:20.913 18:22:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:20.913 18:22:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:20.913 18:22:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:20.913 18:22:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:20.913 18:22:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:20.913 18:22:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:20.913 18:22:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:20.913 18:22:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:20.913 18:22:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:20.913 18:22:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:20.913 18:22:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:21.173 18:22:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:21.173 18:22:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:21.173 18:22:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:21.173 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:21.173 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.304 ms 00:27:21.173 00:27:21.173 --- 10.0.0.2 ping statistics --- 00:27:21.173 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:21.173 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:27:21.173 18:22:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:21.173 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:21.173 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.226 ms 00:27:21.173 00:27:21.173 --- 10.0.0.1 ping statistics --- 00:27:21.173 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:21.173 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:27:21.173 18:22:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:21.173 18:22:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:27:21.173 18:22:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:21.173 18:22:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:21.173 18:22:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:21.173 18:22:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:21.173 18:22:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:21.173 18:22:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:21.173 18:22:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:21.173 18:22:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:27:21.173 18:22:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:21.173 18:22:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:21.173 18:22:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:21.173 ************************************ 00:27:21.173 START TEST nvmf_target_disconnect_tc1 00:27:21.173 ************************************ 00:27:21.173 18:22:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc1 00:27:21.173 18:22:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:21.173 18:22:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:27:21.173 18:22:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:21.173 18:22:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:21.173 18:22:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:21.173 18:22:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:21.173 18:22:14 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:21.173 18:22:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:21.173 18:22:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:21.173 18:22:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:21.173 18:22:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:27:21.173 18:22:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:21.173 EAL: No free 2048 kB hugepages reported on node 1 00:27:21.173 [2024-07-24 18:22:14.186490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:21.173 [2024-07-24 18:22:14.186600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6a6e60 with addr=10.0.0.2, port=4420 00:27:21.173 [2024-07-24 18:22:14.186654] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:27:21.173 [2024-07-24 18:22:14.186679] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:21.173 [2024-07-24 18:22:14.186697] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:27:21.173 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:27:21.173 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:27:21.173 Initializing NVMe Controllers 00:27:21.173 18:22:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:27:21.173 18:22:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:21.173 18:22:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:21.173 18:22:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:21.173 00:27:21.173 real 0m0.097s 00:27:21.173 user 0m0.041s 00:27:21.173 sys 0m0.056s 00:27:21.173 18:22:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:21.173 18:22:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:21.173 ************************************ 00:27:21.173 END TEST nvmf_target_disconnect_tc1 00:27:21.173 ************************************ 00:27:21.173 18:22:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:27:21.173 18:22:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:21.173 18:22:14 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:21.173 18:22:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:21.173 ************************************ 00:27:21.173 START TEST nvmf_target_disconnect_tc2 00:27:21.173 ************************************ 00:27:21.173 18:22:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc2 00:27:21.173 18:22:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:27:21.173 18:22:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:27:21.173 18:22:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:21.173 18:22:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:21.173 18:22:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:21.433 18:22:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3562925 00:27:21.433 18:22:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3562925 00:27:21.433 18:22:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:27:21.433 18:22:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 3562925 ']' 00:27:21.433 18:22:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:21.433 18:22:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:21.433 18:22:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:21.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:21.433 18:22:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:21.433 18:22:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:21.433 [2024-07-24 18:22:14.304256] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 00:27:21.433 [2024-07-24 18:22:14.304294] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:21.433 EAL: No free 2048 kB hugepages reported on node 1 00:27:21.433 [2024-07-24 18:22:14.372114] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:21.433 [2024-07-24 18:22:14.448202] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:27:21.433 [2024-07-24 18:22:14.448239] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:21.433 [2024-07-24 18:22:14.448245] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:21.433 [2024-07-24 18:22:14.448251] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:21.433 [2024-07-24 18:22:14.448256] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:21.433 [2024-07-24 18:22:14.448368] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:27:21.433 [2024-07-24 18:22:14.448472] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:27:21.433 [2024-07-24 18:22:14.448581] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:27:21.433 [2024-07-24 18:22:14.448582] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:27:22.368 18:22:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:22.368 18:22:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:27:22.368 18:22:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:22.368 18:22:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:22.368 18:22:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:22.368 18:22:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:22.368 18:22:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:22.368 18:22:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.368 18:22:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:22.368 Malloc0 00:27:22.368 18:22:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.368 18:22:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:27:22.368 18:22:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.368 18:22:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:22.368 [2024-07-24 18:22:15.168760] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:22.368 18:22:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.368 18:22:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:22.368 18:22:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 
00:27:22.368 18:22:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:22.368 18:22:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.368 18:22:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:22.368 18:22:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.368 18:22:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:22.368 18:22:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.368 18:22:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:22.368 18:22:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.368 18:22:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:22.368 [2024-07-24 18:22:15.197733] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:22.368 18:22:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.368 18:22:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:22.368 18:22:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.368 18:22:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:22.368 18:22:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.368 18:22:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3563162 00:27:22.368 18:22:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:27:22.368 18:22:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:22.368 EAL: No free 2048 kB hugepages reported on node 1 00:27:24.281 18:22:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 3562925 00:27:24.281 18:22:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:27:24.281 Read completed with error (sct=0, sc=8) 00:27:24.281 starting I/O failed 00:27:24.281 Read completed with error (sct=0, sc=8) 00:27:24.281 starting I/O failed 00:27:24.281 Read completed with error (sct=0, sc=8) 00:27:24.281 starting I/O failed 00:27:24.281 Read completed with error (sct=0, sc=8) 00:27:24.281 starting 
I/O failed 00:27:24.281 Write completed with error (sct=0, sc=8) 00:27:24.281 starting I/O failed 00:27:24.281 Write completed with error (sct=0, sc=8) 00:27:24.281 starting I/O failed 00:27:24.281 Read completed with error (sct=0, sc=8) 00:27:24.281 starting I/O failed 00:27:24.281 Write completed with error (sct=0, sc=8) 00:27:24.281 starting I/O failed 00:27:24.281 Read completed with error (sct=0, sc=8) 00:27:24.281 starting I/O failed 00:27:24.281 Read completed with error (sct=0, sc=8) 00:27:24.281 starting I/O failed 00:27:24.281 Read completed with error (sct=0, sc=8) 00:27:24.281 starting I/O failed 00:27:24.281 Read completed with error (sct=0, sc=8) 00:27:24.281 starting I/O failed 00:27:24.281 Read completed with error (sct=0, sc=8) 00:27:24.281 starting I/O failed 00:27:24.281 Write completed with error (sct=0, sc=8) 00:27:24.281 starting I/O failed 00:27:24.281 Read completed with error (sct=0, sc=8) 00:27:24.281 starting I/O failed 00:27:24.281 Read completed with error (sct=0, sc=8) 00:27:24.281 starting I/O failed 00:27:24.281 Write completed with error (sct=0, sc=8) 00:27:24.281 starting I/O failed 00:27:24.281 Read completed with error (sct=0, sc=8) 00:27:24.281 starting I/O failed 00:27:24.281 Read completed with error (sct=0, sc=8) 00:27:24.281 starting I/O failed 00:27:24.281 Write completed with error (sct=0, sc=8) 00:27:24.281 starting I/O failed 00:27:24.281 Read completed with error (sct=0, sc=8) 00:27:24.281 starting I/O failed 00:27:24.281 Read completed with error (sct=0, sc=8) 00:27:24.281 starting I/O failed 00:27:24.281 Write completed with error (sct=0, sc=8) 00:27:24.281 starting I/O failed 00:27:24.281 Read completed with error (sct=0, sc=8) 00:27:24.281 starting I/O failed 00:27:24.281 Read completed with error (sct=0, sc=8) 00:27:24.281 starting I/O failed 00:27:24.281 Write completed with error (sct=0, sc=8) 00:27:24.281 starting I/O failed 00:27:24.281 Write completed with error (sct=0, sc=8) 00:27:24.281 starting I/O failed 00:27:24.281 Read completed with error (sct=0, sc=8) 00:27:24.281 starting I/O failed 00:27:24.281 Read completed with error (sct=0, sc=8) 00:27:24.281 starting I/O failed 00:27:24.281 Read completed with error (sct=0, sc=8) 00:27:24.281 starting I/O failed 00:27:24.281 Read completed with error (sct=0, sc=8) 00:27:24.281 starting I/O failed 00:27:24.281 Read completed with error (sct=0, sc=8) 00:27:24.281 starting I/O failed 00:27:24.281 Read completed with error (sct=0, sc=8) 00:27:24.281 starting I/O failed 00:27:24.281 Read completed with error (sct=0, sc=8) 00:27:24.281 starting I/O failed 00:27:24.281 Read completed with error (sct=0, sc=8) 00:27:24.281 starting I/O failed 00:27:24.281 [2024-07-24 18:22:17.225227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:24.281 Read completed with error (sct=0, sc=8) 00:27:24.281 starting I/O failed 00:27:24.281 Read completed with error (sct=0, sc=8) 00:27:24.281 starting I/O failed 00:27:24.281 Read completed with error (sct=0, sc=8) 00:27:24.281 starting I/O failed 00:27:24.281 Read completed with error (sct=0, sc=8) 00:27:24.281 starting I/O failed 00:27:24.281 Read completed with error (sct=0, sc=8) 00:27:24.281 starting I/O failed 00:27:24.281 Read completed with error (sct=0, sc=8) 00:27:24.281 starting I/O failed 00:27:24.281 Write completed with error (sct=0, sc=8) 00:27:24.281 starting I/O failed 00:27:24.281 Read completed with error (sct=0, sc=8) 00:27:24.281 starting I/O failed 
00:27:24.281 Write completed with error (sct=0, sc=8)
00:27:24.281 starting I/O failed
00:27:24.281 Read completed with error (sct=0, sc=8)
00:27:24.281 starting I/O failed
[... dozens of identical "Read completed with error (sct=0, sc=8) / starting I/O failed" and "Write completed with error (sct=0, sc=8) / starting I/O failed" entries elided; a burst of these aborts accompanies each of the three qpair failures below ...]
00:27:24.282 [2024-07-24 18:22:17.225428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:27:24.282 [2024-07-24 18:22:17.225638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:27:24.282 [2024-07-24 18:22:17.225843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:27:24.282 [2024-07-24 18:22:17.226117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.282 [2024-07-24 18:22:17.226134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420
00:27:24.282 qpair failed and we were unable to recover it.
[... seven further identical connect() failed / sock connection error / "qpair failed and we were unable to recover it." triplets for tqpair=0x7f7844000b90 elided (18:22:17.226377 through 18:22:17.227778) ...]
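For context on the entries above: -6 is -ENXIO ("No such device or address"), the value spdk_nvme_qpair_process_completions returns once the TCP connection behind a qpair is gone, and the aborted I/Os complete with sct=0, sc=8, which matches the NVMe generic status "Command Aborted due to SQ Deletion" (SPDK_NVME_SC_ABORTED_SQ_DELETION). A minimal polling sketch, not part of the test itself and assuming an already-connected struct spdk_nvme_qpair, of where that error surfaces on the host side:

```c
#include <stdio.h>
#include "spdk/nvme.h"
#include "spdk/string.h"

/* Poll one I/O qpair. A negative return means the transport itself
 * failed (for NVMe/TCP, -ENXIO == -6 once the socket is dead); at that
 * point SPDK completes every outstanding command on the qpair with
 * sct=0, sc=8 (SPDK_NVME_SC_ABORTED_SQ_DELETION), which is exactly the
 * flood of "completed with error" entries in the log above. */
int poll_qpair(struct spdk_nvme_qpair *qpair)
{
	int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0 /* no limit */);

	if (rc < 0) {
		fprintf(stderr, "CQ transport error %d (%s) on qpair\n",
			rc, spdk_strerror(-rc));
		return rc; /* caller must tear down or reconnect the qpair */
	}
	return 0; /* rc completions were processed successfully */
}
```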
00:27:24.282 [2024-07-24 18:22:17.228055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.282 [2024-07-24 18:22:17.228066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420
00:27:24.282 qpair failed and we were unable to recover it.
[... the same triplet repeats for tqpair=0x7f7844000b90 through 18:22:17.234447, then for tqpair=0x7f783c000b90 from 18:22:17.234557 through 18:22:17.259093; every attempt is refused with errno = 111 ...]
00:27:24.286 [2024-07-24 18:22:17.259352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.286 [2024-07-24 18:22:17.259422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420
00:27:24.286 qpair failed and we were unable to recover it.
00:27:24.286 [2024-07-24 18:22:17.260017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.286 [2024-07-24 18:22:17.260049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420
00:27:24.286 qpair failed and we were unable to recover it.
[... the failure triplet keeps repeating, for tqpair=0x7f7844000b90 through 18:22:17.261339 and then for tqpair=0x7f783c000b90 from 18:22:17.261504 through 18:22:17.269663, without a single successful reconnect to 10.0.0.2 port 4420 ...]
00:27:24.287 [2024-07-24 18:22:17.269899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.287 [2024-07-24 18:22:17.269910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.287 qpair failed and we were unable to recover it. 00:27:24.288 [2024-07-24 18:22:17.270054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.288 [2024-07-24 18:22:17.270067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.288 qpair failed and we were unable to recover it. 00:27:24.288 [2024-07-24 18:22:17.270267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.288 [2024-07-24 18:22:17.270278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.288 qpair failed and we were unable to recover it. 00:27:24.288 [2024-07-24 18:22:17.270509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.288 [2024-07-24 18:22:17.270521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.288 qpair failed and we were unable to recover it. 00:27:24.288 [2024-07-24 18:22:17.270630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.288 [2024-07-24 18:22:17.270640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.288 qpair failed and we were unable to recover it. 00:27:24.288 [2024-07-24 18:22:17.270796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.288 [2024-07-24 18:22:17.270807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.288 qpair failed and we were unable to recover it. 00:27:24.288 [2024-07-24 18:22:17.270909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.288 [2024-07-24 18:22:17.270919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.288 qpair failed and we were unable to recover it. 00:27:24.288 [2024-07-24 18:22:17.271177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.288 [2024-07-24 18:22:17.271187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.288 qpair failed and we were unable to recover it. 00:27:24.288 [2024-07-24 18:22:17.271413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.288 [2024-07-24 18:22:17.271425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.288 qpair failed and we were unable to recover it. 00:27:24.288 [2024-07-24 18:22:17.271585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.288 [2024-07-24 18:22:17.271597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.288 qpair failed and we were unable to recover it. 
00:27:24.288 [2024-07-24 18:22:17.271793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.288 [2024-07-24 18:22:17.271804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.288 qpair failed and we were unable to recover it. 00:27:24.288 [2024-07-24 18:22:17.271955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.288 [2024-07-24 18:22:17.271966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.288 qpair failed and we were unable to recover it. 00:27:24.288 [2024-07-24 18:22:17.272215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.288 [2024-07-24 18:22:17.272226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.288 qpair failed and we were unable to recover it. 00:27:24.288 [2024-07-24 18:22:17.272430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.288 [2024-07-24 18:22:17.272441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.288 qpair failed and we were unable to recover it. 00:27:24.288 [2024-07-24 18:22:17.272632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.288 [2024-07-24 18:22:17.272667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.288 qpair failed and we were unable to recover it. 00:27:24.288 [2024-07-24 18:22:17.272873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.288 [2024-07-24 18:22:17.272904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.288 qpair failed and we were unable to recover it. 00:27:24.288 [2024-07-24 18:22:17.273123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.288 [2024-07-24 18:22:17.273154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.288 qpair failed and we were unable to recover it. 00:27:24.288 [2024-07-24 18:22:17.273353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.288 [2024-07-24 18:22:17.273384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.288 qpair failed and we were unable to recover it. 00:27:24.288 [2024-07-24 18:22:17.273649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.288 [2024-07-24 18:22:17.273681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.288 qpair failed and we were unable to recover it. 00:27:24.288 [2024-07-24 18:22:17.273931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.288 [2024-07-24 18:22:17.273962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.288 qpair failed and we were unable to recover it. 
00:27:24.288 [2024-07-24 18:22:17.274154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.288 [2024-07-24 18:22:17.274166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.288 qpair failed and we were unable to recover it. 00:27:24.288 [2024-07-24 18:22:17.274274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.288 [2024-07-24 18:22:17.274286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.288 qpair failed and we were unable to recover it. 00:27:24.288 [2024-07-24 18:22:17.274390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.288 [2024-07-24 18:22:17.274400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.288 qpair failed and we were unable to recover it. 00:27:24.288 [2024-07-24 18:22:17.274551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.288 [2024-07-24 18:22:17.274563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.288 qpair failed and we were unable to recover it. 00:27:24.288 [2024-07-24 18:22:17.274648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.288 [2024-07-24 18:22:17.274658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.288 qpair failed and we were unable to recover it. 00:27:24.288 [2024-07-24 18:22:17.274749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.288 [2024-07-24 18:22:17.274759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.288 qpair failed and we were unable to recover it. 00:27:24.288 [2024-07-24 18:22:17.274932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.288 [2024-07-24 18:22:17.274944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.288 qpair failed and we were unable to recover it. 00:27:24.288 [2024-07-24 18:22:17.275102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.288 [2024-07-24 18:22:17.275113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.288 qpair failed and we were unable to recover it. 00:27:24.288 [2024-07-24 18:22:17.275412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.288 [2024-07-24 18:22:17.275423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.288 qpair failed and we were unable to recover it. 00:27:24.288 [2024-07-24 18:22:17.275655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.288 [2024-07-24 18:22:17.275667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.288 qpair failed and we were unable to recover it. 
00:27:24.288 [2024-07-24 18:22:17.275876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.288 [2024-07-24 18:22:17.275887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.288 qpair failed and we were unable to recover it. 00:27:24.288 [2024-07-24 18:22:17.276059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.288 [2024-07-24 18:22:17.276070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.288 qpair failed and we were unable to recover it. 00:27:24.288 [2024-07-24 18:22:17.276319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.288 [2024-07-24 18:22:17.276330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.288 qpair failed and we were unable to recover it. 00:27:24.288 [2024-07-24 18:22:17.276478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.288 [2024-07-24 18:22:17.276489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.288 qpair failed and we were unable to recover it. 00:27:24.288 [2024-07-24 18:22:17.276665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.288 [2024-07-24 18:22:17.276676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.288 qpair failed and we were unable to recover it. 00:27:24.288 [2024-07-24 18:22:17.276811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.288 [2024-07-24 18:22:17.276822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.288 qpair failed and we were unable to recover it. 00:27:24.288 [2024-07-24 18:22:17.277046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.288 [2024-07-24 18:22:17.277057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.288 qpair failed and we were unable to recover it. 00:27:24.288 [2024-07-24 18:22:17.277198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.288 [2024-07-24 18:22:17.277209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.288 qpair failed and we were unable to recover it. 00:27:24.288 [2024-07-24 18:22:17.277363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.288 [2024-07-24 18:22:17.277394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.289 qpair failed and we were unable to recover it. 00:27:24.289 [2024-07-24 18:22:17.277596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.289 [2024-07-24 18:22:17.277628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.289 qpair failed and we were unable to recover it. 
00:27:24.289 [2024-07-24 18:22:17.277878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.289 [2024-07-24 18:22:17.277909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.289 qpair failed and we were unable to recover it. 00:27:24.289 [2024-07-24 18:22:17.278074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.289 [2024-07-24 18:22:17.278088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.289 qpair failed and we were unable to recover it. 00:27:24.289 [2024-07-24 18:22:17.278347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.289 [2024-07-24 18:22:17.278358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.289 qpair failed and we were unable to recover it. 00:27:24.289 [2024-07-24 18:22:17.278447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.289 [2024-07-24 18:22:17.278457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.289 qpair failed and we were unable to recover it. 00:27:24.289 [2024-07-24 18:22:17.278624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.289 [2024-07-24 18:22:17.278635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.289 qpair failed and we were unable to recover it. 00:27:24.289 [2024-07-24 18:22:17.278798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.289 [2024-07-24 18:22:17.278809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.289 qpair failed and we were unable to recover it. 00:27:24.289 [2024-07-24 18:22:17.278969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.289 [2024-07-24 18:22:17.278980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.289 qpair failed and we were unable to recover it. 00:27:24.289 [2024-07-24 18:22:17.279075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.289 [2024-07-24 18:22:17.279085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.289 qpair failed and we were unable to recover it. 00:27:24.289 [2024-07-24 18:22:17.279290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.289 [2024-07-24 18:22:17.279301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.289 qpair failed and we were unable to recover it. 00:27:24.289 [2024-07-24 18:22:17.279559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.289 [2024-07-24 18:22:17.279603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.289 qpair failed and we were unable to recover it. 
00:27:24.289 [2024-07-24 18:22:17.279832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.289 [2024-07-24 18:22:17.279843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.289 qpair failed and we were unable to recover it. 00:27:24.289 [2024-07-24 18:22:17.280075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.289 [2024-07-24 18:22:17.280086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.289 qpair failed and we were unable to recover it. 00:27:24.289 [2024-07-24 18:22:17.280217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.289 [2024-07-24 18:22:17.280248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.289 qpair failed and we were unable to recover it. 00:27:24.289 [2024-07-24 18:22:17.280547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.289 [2024-07-24 18:22:17.280579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.289 qpair failed and we were unable to recover it. 00:27:24.289 [2024-07-24 18:22:17.280780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.289 [2024-07-24 18:22:17.280791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.289 qpair failed and we were unable to recover it. 00:27:24.289 [2024-07-24 18:22:17.280978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.289 [2024-07-24 18:22:17.280989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.289 qpair failed and we were unable to recover it. 00:27:24.289 [2024-07-24 18:22:17.281210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.289 [2024-07-24 18:22:17.281221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.289 qpair failed and we were unable to recover it. 00:27:24.289 [2024-07-24 18:22:17.281472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.289 [2024-07-24 18:22:17.281483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.289 qpair failed and we were unable to recover it. 00:27:24.289 [2024-07-24 18:22:17.281650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.289 [2024-07-24 18:22:17.281662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.289 qpair failed and we were unable to recover it. 00:27:24.289 [2024-07-24 18:22:17.281887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.289 [2024-07-24 18:22:17.281898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.289 qpair failed and we were unable to recover it. 
00:27:24.289 [2024-07-24 18:22:17.282038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.289 [2024-07-24 18:22:17.282049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.289 qpair failed and we were unable to recover it. 00:27:24.289 [2024-07-24 18:22:17.282236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.289 [2024-07-24 18:22:17.282247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.289 qpair failed and we were unable to recover it. 00:27:24.289 [2024-07-24 18:22:17.282332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.289 [2024-07-24 18:22:17.282342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.289 qpair failed and we were unable to recover it. 00:27:24.289 [2024-07-24 18:22:17.282499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.289 [2024-07-24 18:22:17.282511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.289 qpair failed and we were unable to recover it. 00:27:24.289 [2024-07-24 18:22:17.282696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.289 [2024-07-24 18:22:17.282707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.289 qpair failed and we were unable to recover it. 00:27:24.289 [2024-07-24 18:22:17.282936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.289 [2024-07-24 18:22:17.282947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.289 qpair failed and we were unable to recover it. 00:27:24.289 [2024-07-24 18:22:17.283204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.289 [2024-07-24 18:22:17.283215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.289 qpair failed and we were unable to recover it. 00:27:24.289 [2024-07-24 18:22:17.283368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.289 [2024-07-24 18:22:17.283379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.289 qpair failed and we were unable to recover it. 00:27:24.289 [2024-07-24 18:22:17.283588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.289 [2024-07-24 18:22:17.283600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.289 qpair failed and we were unable to recover it. 00:27:24.289 [2024-07-24 18:22:17.283696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.289 [2024-07-24 18:22:17.283705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.289 qpair failed and we were unable to recover it. 
00:27:24.289 [2024-07-24 18:22:17.283857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.289 [2024-07-24 18:22:17.283867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.289 qpair failed and we were unable to recover it. 00:27:24.289 [2024-07-24 18:22:17.284119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.289 [2024-07-24 18:22:17.284130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.289 qpair failed and we were unable to recover it. 00:27:24.289 [2024-07-24 18:22:17.284318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.289 [2024-07-24 18:22:17.284329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.289 qpair failed and we were unable to recover it. 00:27:24.289 [2024-07-24 18:22:17.284484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.289 [2024-07-24 18:22:17.284499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.289 qpair failed and we were unable to recover it. 00:27:24.289 [2024-07-24 18:22:17.284720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.289 [2024-07-24 18:22:17.284732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.289 qpair failed and we were unable to recover it. 00:27:24.289 [2024-07-24 18:22:17.284884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.289 [2024-07-24 18:22:17.284896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.290 qpair failed and we were unable to recover it. 00:27:24.290 [2024-07-24 18:22:17.284998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.290 [2024-07-24 18:22:17.285009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.290 qpair failed and we were unable to recover it. 00:27:24.290 [2024-07-24 18:22:17.285220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.290 [2024-07-24 18:22:17.285231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.290 qpair failed and we were unable to recover it. 00:27:24.290 [2024-07-24 18:22:17.285374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.290 [2024-07-24 18:22:17.285385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.290 qpair failed and we were unable to recover it. 00:27:24.290 [2024-07-24 18:22:17.285622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.290 [2024-07-24 18:22:17.285654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.290 qpair failed and we were unable to recover it. 
00:27:24.290 [2024-07-24 18:22:17.285855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.290 [2024-07-24 18:22:17.285886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.290 qpair failed and we were unable to recover it. 00:27:24.290 [2024-07-24 18:22:17.286137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.290 [2024-07-24 18:22:17.286168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.290 qpair failed and we were unable to recover it. 00:27:24.290 [2024-07-24 18:22:17.286448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.290 [2024-07-24 18:22:17.286479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.290 qpair failed and we were unable to recover it. 00:27:24.290 [2024-07-24 18:22:17.286697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.290 [2024-07-24 18:22:17.286708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.290 qpair failed and we were unable to recover it. 00:27:24.290 [2024-07-24 18:22:17.286919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.290 [2024-07-24 18:22:17.286950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.290 qpair failed and we were unable to recover it. 00:27:24.290 [2024-07-24 18:22:17.287111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.290 [2024-07-24 18:22:17.287141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.290 qpair failed and we were unable to recover it. 00:27:24.290 [2024-07-24 18:22:17.287417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.290 [2024-07-24 18:22:17.287448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.290 qpair failed and we were unable to recover it. 00:27:24.290 [2024-07-24 18:22:17.287652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.290 [2024-07-24 18:22:17.287684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.290 qpair failed and we were unable to recover it. 00:27:24.290 [2024-07-24 18:22:17.287930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.290 [2024-07-24 18:22:17.287942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.290 qpair failed and we were unable to recover it. 00:27:24.290 [2024-07-24 18:22:17.288042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.290 [2024-07-24 18:22:17.288052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.290 qpair failed and we were unable to recover it. 
00:27:24.290 [2024-07-24 18:22:17.288288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.290 [2024-07-24 18:22:17.288318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.290 qpair failed and we were unable to recover it. 00:27:24.290 [2024-07-24 18:22:17.288513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.290 [2024-07-24 18:22:17.288546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.290 qpair failed and we were unable to recover it. 00:27:24.290 [2024-07-24 18:22:17.288689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.290 [2024-07-24 18:22:17.288720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.290 qpair failed and we were unable to recover it. 00:27:24.290 [2024-07-24 18:22:17.288921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.290 [2024-07-24 18:22:17.288933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.290 qpair failed and we were unable to recover it. 00:27:24.290 [2024-07-24 18:22:17.289138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.290 [2024-07-24 18:22:17.289169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.290 qpair failed and we were unable to recover it. 00:27:24.290 [2024-07-24 18:22:17.289446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.290 [2024-07-24 18:22:17.289477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.290 qpair failed and we were unable to recover it. 00:27:24.290 [2024-07-24 18:22:17.289756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.290 [2024-07-24 18:22:17.289767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.290 qpair failed and we were unable to recover it. 00:27:24.290 [2024-07-24 18:22:17.289927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.290 [2024-07-24 18:22:17.289958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.290 qpair failed and we were unable to recover it. 00:27:24.290 [2024-07-24 18:22:17.290235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.290 [2024-07-24 18:22:17.290266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.290 qpair failed and we were unable to recover it. 00:27:24.290 [2024-07-24 18:22:17.290481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.290 [2024-07-24 18:22:17.290544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.290 qpair failed and we were unable to recover it. 
00:27:24.290 [2024-07-24 18:22:17.290767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.290 [2024-07-24 18:22:17.290798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.290 qpair failed and we were unable to recover it. 00:27:24.290 [2024-07-24 18:22:17.290997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.290 [2024-07-24 18:22:17.291027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.290 qpair failed and we were unable to recover it. 00:27:24.290 [2024-07-24 18:22:17.291275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.290 [2024-07-24 18:22:17.291306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.290 qpair failed and we were unable to recover it. 00:27:24.290 [2024-07-24 18:22:17.291580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.290 [2024-07-24 18:22:17.291612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.290 qpair failed and we were unable to recover it. 00:27:24.290 [2024-07-24 18:22:17.291882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.290 [2024-07-24 18:22:17.291914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.290 qpair failed and we were unable to recover it. 00:27:24.290 [2024-07-24 18:22:17.292117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.290 [2024-07-24 18:22:17.292148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.290 qpair failed and we were unable to recover it. 00:27:24.290 [2024-07-24 18:22:17.292421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.290 [2024-07-24 18:22:17.292452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.290 qpair failed and we were unable to recover it. 00:27:24.290 [2024-07-24 18:22:17.292685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.290 [2024-07-24 18:22:17.292717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.290 qpair failed and we were unable to recover it. 00:27:24.290 [2024-07-24 18:22:17.292967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.290 [2024-07-24 18:22:17.293008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.290 qpair failed and we were unable to recover it. 00:27:24.290 [2024-07-24 18:22:17.293258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.291 [2024-07-24 18:22:17.293269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.291 qpair failed and we were unable to recover it. 
00:27:24.291 [2024-07-24 18:22:17.293513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.291 [2024-07-24 18:22:17.293545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.291 qpair failed and we were unable to recover it. 00:27:24.291 [2024-07-24 18:22:17.293778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.291 [2024-07-24 18:22:17.293808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.291 qpair failed and we were unable to recover it. 00:27:24.291 [2024-07-24 18:22:17.294056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.291 [2024-07-24 18:22:17.294088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.291 qpair failed and we were unable to recover it. 00:27:24.291 [2024-07-24 18:22:17.294365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.291 [2024-07-24 18:22:17.294396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.291 qpair failed and we were unable to recover it. 00:27:24.291 [2024-07-24 18:22:17.294586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.291 [2024-07-24 18:22:17.294619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.291 qpair failed and we were unable to recover it. 00:27:24.291 [2024-07-24 18:22:17.294811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.291 [2024-07-24 18:22:17.294822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.291 qpair failed and we were unable to recover it. 00:27:24.291 [2024-07-24 18:22:17.295058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.291 [2024-07-24 18:22:17.295089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.291 qpair failed and we were unable to recover it. 00:27:24.291 [2024-07-24 18:22:17.295276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.291 [2024-07-24 18:22:17.295306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.291 qpair failed and we were unable to recover it. 00:27:24.291 [2024-07-24 18:22:17.295423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.291 [2024-07-24 18:22:17.295454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.291 qpair failed and we were unable to recover it. 00:27:24.291 [2024-07-24 18:22:17.295751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.291 [2024-07-24 18:22:17.295783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.291 qpair failed and we were unable to recover it. 
00:27:24.291 [2024-07-24 18:22:17.296027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.291 [2024-07-24 18:22:17.296038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.291 qpair failed and we were unable to recover it. 00:27:24.291 [2024-07-24 18:22:17.296314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.291 [2024-07-24 18:22:17.296325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.291 qpair failed and we were unable to recover it. 00:27:24.291 [2024-07-24 18:22:17.296554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.291 [2024-07-24 18:22:17.296565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.291 qpair failed and we were unable to recover it. 00:27:24.291 [2024-07-24 18:22:17.296730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.291 [2024-07-24 18:22:17.296740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.291 qpair failed and we were unable to recover it. 00:27:24.291 [2024-07-24 18:22:17.296920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.291 [2024-07-24 18:22:17.296951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.291 qpair failed and we were unable to recover it. 00:27:24.291 [2024-07-24 18:22:17.297084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.291 [2024-07-24 18:22:17.297115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.291 qpair failed and we were unable to recover it. 00:27:24.291 [2024-07-24 18:22:17.297265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.291 [2024-07-24 18:22:17.297295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.291 qpair failed and we were unable to recover it. 00:27:24.291 [2024-07-24 18:22:17.297556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.291 [2024-07-24 18:22:17.297588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.291 qpair failed and we were unable to recover it. 00:27:24.291 [2024-07-24 18:22:17.297776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.291 [2024-07-24 18:22:17.297788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.291 qpair failed and we were unable to recover it. 00:27:24.291 [2024-07-24 18:22:17.297939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.291 [2024-07-24 18:22:17.297971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.291 qpair failed and we were unable to recover it. 
00:27:24.291 [2024-07-24 18:22:17.298158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.291 [2024-07-24 18:22:17.298189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.291 qpair failed and we were unable to recover it.
00:27:24.291 [... the same three-line sequence — connect() failed (errno = 111), sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it." — repeats roughly 200 more times between 18:22:17.298 and 18:22:17.351 (log timestamps 00:27:24.291 through 00:27:24.297), with only the microsecond timestamps differing ...]
00:27:24.297 [2024-07-24 18:22:17.351543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.297 [2024-07-24 18:22:17.351555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.297 qpair failed and we were unable to recover it. 00:27:24.297 [2024-07-24 18:22:17.351765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.297 [2024-07-24 18:22:17.351776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.297 qpair failed and we were unable to recover it. 00:27:24.297 [2024-07-24 18:22:17.351950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.297 [2024-07-24 18:22:17.351961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.297 qpair failed and we were unable to recover it. 00:27:24.297 [2024-07-24 18:22:17.352171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.297 [2024-07-24 18:22:17.352182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.297 qpair failed and we were unable to recover it. 00:27:24.297 [2024-07-24 18:22:17.352418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.297 [2024-07-24 18:22:17.352449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.297 qpair failed and we were unable to recover it. 00:27:24.297 [2024-07-24 18:22:17.352663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.297 [2024-07-24 18:22:17.352695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.297 qpair failed and we were unable to recover it. 00:27:24.297 [2024-07-24 18:22:17.352842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.297 [2024-07-24 18:22:17.352854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.297 qpair failed and we were unable to recover it. 00:27:24.297 [2024-07-24 18:22:17.353061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.297 [2024-07-24 18:22:17.353073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.297 qpair failed and we were unable to recover it. 00:27:24.297 [2024-07-24 18:22:17.353249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.297 [2024-07-24 18:22:17.353262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.297 qpair failed and we were unable to recover it. 00:27:24.297 [2024-07-24 18:22:17.353476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.297 [2024-07-24 18:22:17.353487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.297 qpair failed and we were unable to recover it. 
00:27:24.297 [2024-07-24 18:22:17.353685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.297 [2024-07-24 18:22:17.353697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.297 qpair failed and we were unable to recover it. 00:27:24.297 [2024-07-24 18:22:17.353862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.297 [2024-07-24 18:22:17.353892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.297 qpair failed and we were unable to recover it. 00:27:24.297 [2024-07-24 18:22:17.354167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.297 [2024-07-24 18:22:17.354198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.297 qpair failed and we were unable to recover it. 00:27:24.297 [2024-07-24 18:22:17.354399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.297 [2024-07-24 18:22:17.354430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.297 qpair failed and we were unable to recover it. 00:27:24.297 [2024-07-24 18:22:17.354707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.297 [2024-07-24 18:22:17.354740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.297 qpair failed and we were unable to recover it. 00:27:24.297 [2024-07-24 18:22:17.354976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.297 [2024-07-24 18:22:17.355007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.297 qpair failed and we were unable to recover it. 00:27:24.297 [2024-07-24 18:22:17.355291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.297 [2024-07-24 18:22:17.355302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.297 qpair failed and we were unable to recover it. 00:27:24.580 [2024-07-24 18:22:17.355533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.580 [2024-07-24 18:22:17.355545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.580 qpair failed and we were unable to recover it. 00:27:24.580 [2024-07-24 18:22:17.355783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.580 [2024-07-24 18:22:17.355794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.580 qpair failed and we were unable to recover it. 00:27:24.580 [2024-07-24 18:22:17.356045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.580 [2024-07-24 18:22:17.356056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.580 qpair failed and we were unable to recover it. 
00:27:24.580 [2024-07-24 18:22:17.356299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.580 [2024-07-24 18:22:17.356311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.580 qpair failed and we were unable to recover it. 00:27:24.580 [2024-07-24 18:22:17.356464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.580 [2024-07-24 18:22:17.356475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.580 qpair failed and we were unable to recover it. 00:27:24.580 [2024-07-24 18:22:17.356591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.580 [2024-07-24 18:22:17.356603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.580 qpair failed and we were unable to recover it. 00:27:24.580 [2024-07-24 18:22:17.356762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.580 [2024-07-24 18:22:17.356774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.580 qpair failed and we were unable to recover it. 00:27:24.580 [2024-07-24 18:22:17.356862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.580 [2024-07-24 18:22:17.356873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.580 qpair failed and we were unable to recover it. 00:27:24.580 [2024-07-24 18:22:17.357081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.580 [2024-07-24 18:22:17.357093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.580 qpair failed and we were unable to recover it. 00:27:24.580 [2024-07-24 18:22:17.357353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.580 [2024-07-24 18:22:17.357364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.580 qpair failed and we were unable to recover it. 00:27:24.580 [2024-07-24 18:22:17.357540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.580 [2024-07-24 18:22:17.357573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.580 qpair failed and we were unable to recover it. 00:27:24.580 [2024-07-24 18:22:17.357717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.580 [2024-07-24 18:22:17.357748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.580 qpair failed and we were unable to recover it. 00:27:24.580 [2024-07-24 18:22:17.357898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.580 [2024-07-24 18:22:17.357929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.580 qpair failed and we were unable to recover it. 
00:27:24.580 [2024-07-24 18:22:17.358145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.580 [2024-07-24 18:22:17.358176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.580 qpair failed and we were unable to recover it. 00:27:24.580 [2024-07-24 18:22:17.358461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.580 [2024-07-24 18:22:17.358502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.580 qpair failed and we were unable to recover it. 00:27:24.580 [2024-07-24 18:22:17.358815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.580 [2024-07-24 18:22:17.358846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.580 qpair failed and we were unable to recover it. 00:27:24.580 [2024-07-24 18:22:17.359083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.580 [2024-07-24 18:22:17.359095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.580 qpair failed and we were unable to recover it. 00:27:24.580 [2024-07-24 18:22:17.359250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.580 [2024-07-24 18:22:17.359281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.580 qpair failed and we were unable to recover it. 00:27:24.580 [2024-07-24 18:22:17.359587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.580 [2024-07-24 18:22:17.359620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.580 qpair failed and we were unable to recover it. 00:27:24.580 [2024-07-24 18:22:17.359894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.580 [2024-07-24 18:22:17.359926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.580 qpair failed and we were unable to recover it. 00:27:24.580 [2024-07-24 18:22:17.360140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.580 [2024-07-24 18:22:17.360171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.580 qpair failed and we were unable to recover it. 00:27:24.580 [2024-07-24 18:22:17.360344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.580 [2024-07-24 18:22:17.360356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.580 qpair failed and we were unable to recover it. 00:27:24.580 [2024-07-24 18:22:17.360596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.580 [2024-07-24 18:22:17.360608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.581 qpair failed and we were unable to recover it. 
00:27:24.581 [2024-07-24 18:22:17.360773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.581 [2024-07-24 18:22:17.360784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.581 qpair failed and we were unable to recover it. 00:27:24.581 [2024-07-24 18:22:17.360940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.581 [2024-07-24 18:22:17.360952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.581 qpair failed and we were unable to recover it. 00:27:24.581 [2024-07-24 18:22:17.361175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.581 [2024-07-24 18:22:17.361186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.581 qpair failed and we were unable to recover it. 00:27:24.581 [2024-07-24 18:22:17.361415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.581 [2024-07-24 18:22:17.361427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.581 qpair failed and we were unable to recover it. 00:27:24.581 [2024-07-24 18:22:17.361613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.581 [2024-07-24 18:22:17.361625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.581 qpair failed and we were unable to recover it. 00:27:24.581 [2024-07-24 18:22:17.361807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.581 [2024-07-24 18:22:17.361838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.581 qpair failed and we were unable to recover it. 00:27:24.581 [2024-07-24 18:22:17.362135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.581 [2024-07-24 18:22:17.362167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.581 qpair failed and we were unable to recover it. 00:27:24.581 [2024-07-24 18:22:17.362351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.581 [2024-07-24 18:22:17.362363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.581 qpair failed and we were unable to recover it. 00:27:24.581 [2024-07-24 18:22:17.362603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.581 [2024-07-24 18:22:17.362641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.581 qpair failed and we were unable to recover it. 00:27:24.581 [2024-07-24 18:22:17.362794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.581 [2024-07-24 18:22:17.362825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.581 qpair failed and we were unable to recover it. 
00:27:24.581 [2024-07-24 18:22:17.363038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.581 [2024-07-24 18:22:17.363070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.581 qpair failed and we were unable to recover it. 00:27:24.581 [2024-07-24 18:22:17.363296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.581 [2024-07-24 18:22:17.363328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.581 qpair failed and we were unable to recover it. 00:27:24.581 [2024-07-24 18:22:17.363579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.581 [2024-07-24 18:22:17.363612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.581 qpair failed and we were unable to recover it. 00:27:24.581 [2024-07-24 18:22:17.363892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.581 [2024-07-24 18:22:17.363923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.581 qpair failed and we were unable to recover it. 00:27:24.581 [2024-07-24 18:22:17.364210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.581 [2024-07-24 18:22:17.364242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.581 qpair failed and we were unable to recover it. 00:27:24.581 [2024-07-24 18:22:17.364524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.581 [2024-07-24 18:22:17.364556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.581 qpair failed and we were unable to recover it. 00:27:24.581 [2024-07-24 18:22:17.364699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.581 [2024-07-24 18:22:17.364730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.581 qpair failed and we were unable to recover it. 00:27:24.581 [2024-07-24 18:22:17.364983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.581 [2024-07-24 18:22:17.365014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.581 qpair failed and we were unable to recover it. 00:27:24.581 [2024-07-24 18:22:17.365226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.581 [2024-07-24 18:22:17.365257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.581 qpair failed and we were unable to recover it. 00:27:24.581 [2024-07-24 18:22:17.365527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.581 [2024-07-24 18:22:17.365559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.581 qpair failed and we were unable to recover it. 
00:27:24.581 [2024-07-24 18:22:17.365861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.581 [2024-07-24 18:22:17.365893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.581 qpair failed and we were unable to recover it. 00:27:24.581 [2024-07-24 18:22:17.366167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.581 [2024-07-24 18:22:17.366198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.581 qpair failed and we were unable to recover it. 00:27:24.581 [2024-07-24 18:22:17.366487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.581 [2024-07-24 18:22:17.366529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.581 qpair failed and we were unable to recover it. 00:27:24.581 [2024-07-24 18:22:17.366803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.581 [2024-07-24 18:22:17.366834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.581 qpair failed and we were unable to recover it. 00:27:24.581 [2024-07-24 18:22:17.367116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.581 [2024-07-24 18:22:17.367166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.581 qpair failed and we were unable to recover it. 00:27:24.581 [2024-07-24 18:22:17.367439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.581 [2024-07-24 18:22:17.367470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.581 qpair failed and we were unable to recover it. 00:27:24.581 [2024-07-24 18:22:17.367756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.581 [2024-07-24 18:22:17.367803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.581 qpair failed and we were unable to recover it. 00:27:24.581 [2024-07-24 18:22:17.367945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.581 [2024-07-24 18:22:17.367957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.581 qpair failed and we were unable to recover it. 00:27:24.581 [2024-07-24 18:22:17.368194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.581 [2024-07-24 18:22:17.368225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.581 qpair failed and we were unable to recover it. 00:27:24.581 [2024-07-24 18:22:17.368450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.581 [2024-07-24 18:22:17.368482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.581 qpair failed and we were unable to recover it. 
00:27:24.581 [2024-07-24 18:22:17.368697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.581 [2024-07-24 18:22:17.368729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.581 qpair failed and we were unable to recover it. 00:27:24.581 [2024-07-24 18:22:17.368916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.581 [2024-07-24 18:22:17.368927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.581 qpair failed and we were unable to recover it. 00:27:24.581 [2024-07-24 18:22:17.369167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.581 [2024-07-24 18:22:17.369198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.581 qpair failed and we were unable to recover it. 00:27:24.581 [2024-07-24 18:22:17.369454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.581 [2024-07-24 18:22:17.369485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.581 qpair failed and we were unable to recover it. 00:27:24.581 [2024-07-24 18:22:17.369744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.581 [2024-07-24 18:22:17.369776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.581 qpair failed and we were unable to recover it. 00:27:24.581 [2024-07-24 18:22:17.369995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.581 [2024-07-24 18:22:17.370026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.581 qpair failed and we were unable to recover it. 00:27:24.581 [2024-07-24 18:22:17.370214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.581 [2024-07-24 18:22:17.370246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.582 qpair failed and we were unable to recover it. 00:27:24.582 [2024-07-24 18:22:17.370451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.582 [2024-07-24 18:22:17.370483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.582 qpair failed and we were unable to recover it. 00:27:24.582 [2024-07-24 18:22:17.370762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.582 [2024-07-24 18:22:17.370794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.582 qpair failed and we were unable to recover it. 00:27:24.582 [2024-07-24 18:22:17.370996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.582 [2024-07-24 18:22:17.371028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.582 qpair failed and we were unable to recover it. 
00:27:24.582 [2024-07-24 18:22:17.371288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.582 [2024-07-24 18:22:17.371299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.582 qpair failed and we were unable to recover it. 00:27:24.582 [2024-07-24 18:22:17.371559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.582 [2024-07-24 18:22:17.371591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.582 qpair failed and we were unable to recover it. 00:27:24.582 [2024-07-24 18:22:17.371778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.582 [2024-07-24 18:22:17.371811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.582 qpair failed and we were unable to recover it. 00:27:24.582 [2024-07-24 18:22:17.372064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.582 [2024-07-24 18:22:17.372096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.582 qpair failed and we were unable to recover it. 00:27:24.582 [2024-07-24 18:22:17.372310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.582 [2024-07-24 18:22:17.372321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.582 qpair failed and we were unable to recover it. 00:27:24.582 [2024-07-24 18:22:17.372559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.582 [2024-07-24 18:22:17.372591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.582 qpair failed and we were unable to recover it. 00:27:24.582 [2024-07-24 18:22:17.372739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.582 [2024-07-24 18:22:17.372771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.582 qpair failed and we were unable to recover it. 00:27:24.582 [2024-07-24 18:22:17.373022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.582 [2024-07-24 18:22:17.373055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.582 qpair failed and we were unable to recover it. 00:27:24.582 [2024-07-24 18:22:17.373334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.582 [2024-07-24 18:22:17.373370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.582 qpair failed and we were unable to recover it. 00:27:24.582 [2024-07-24 18:22:17.373695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.582 [2024-07-24 18:22:17.373728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.582 qpair failed and we were unable to recover it. 
00:27:24.582 [2024-07-24 18:22:17.373932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.582 [2024-07-24 18:22:17.373964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.582 qpair failed and we were unable to recover it. 00:27:24.582 [2024-07-24 18:22:17.374265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.582 [2024-07-24 18:22:17.374297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.582 qpair failed and we were unable to recover it. 00:27:24.582 [2024-07-24 18:22:17.374524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.582 [2024-07-24 18:22:17.374556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.582 qpair failed and we were unable to recover it. 00:27:24.582 [2024-07-24 18:22:17.374689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.582 [2024-07-24 18:22:17.374721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.582 qpair failed and we were unable to recover it. 00:27:24.582 [2024-07-24 18:22:17.374870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.582 [2024-07-24 18:22:17.374881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.582 qpair failed and we were unable to recover it. 00:27:24.582 [2024-07-24 18:22:17.375143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.582 [2024-07-24 18:22:17.375174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.582 qpair failed and we were unable to recover it. 00:27:24.582 [2024-07-24 18:22:17.375390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.582 [2024-07-24 18:22:17.375422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.582 qpair failed and we were unable to recover it. 00:27:24.582 [2024-07-24 18:22:17.375693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.582 [2024-07-24 18:22:17.375725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.582 qpair failed and we were unable to recover it. 00:27:24.582 [2024-07-24 18:22:17.376021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.582 [2024-07-24 18:22:17.376054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.582 qpair failed and we were unable to recover it. 00:27:24.582 [2024-07-24 18:22:17.376330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.582 [2024-07-24 18:22:17.376342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.582 qpair failed and we were unable to recover it. 
00:27:24.582 [2024-07-24 18:22:17.376580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.582 [2024-07-24 18:22:17.376613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.582 qpair failed and we were unable to recover it. 00:27:24.582 [2024-07-24 18:22:17.376821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.582 [2024-07-24 18:22:17.376853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.582 qpair failed and we were unable to recover it. 00:27:24.582 [2024-07-24 18:22:17.377112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.582 [2024-07-24 18:22:17.377124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.582 qpair failed and we were unable to recover it. 00:27:24.582 [2024-07-24 18:22:17.377304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.582 [2024-07-24 18:22:17.377316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.582 qpair failed and we were unable to recover it. 00:27:24.582 [2024-07-24 18:22:17.377567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.582 [2024-07-24 18:22:17.377578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.582 qpair failed and we were unable to recover it. 00:27:24.582 [2024-07-24 18:22:17.377770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.582 [2024-07-24 18:22:17.377801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.582 qpair failed and we were unable to recover it. 00:27:24.582 [2024-07-24 18:22:17.378111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.582 [2024-07-24 18:22:17.378142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.582 qpair failed and we were unable to recover it. 00:27:24.582 [2024-07-24 18:22:17.378425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.582 [2024-07-24 18:22:17.378456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.582 qpair failed and we were unable to recover it. 00:27:24.582 [2024-07-24 18:22:17.378686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.582 [2024-07-24 18:22:17.378718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.582 qpair failed and we were unable to recover it. 00:27:24.582 [2024-07-24 18:22:17.378942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.582 [2024-07-24 18:22:17.378973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.582 qpair failed and we were unable to recover it. 
00:27:24.582 [2024-07-24 18:22:17.379190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.582 [2024-07-24 18:22:17.379230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.582 qpair failed and we were unable to recover it. 00:27:24.582 [2024-07-24 18:22:17.379458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.582 [2024-07-24 18:22:17.379470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.582 qpair failed and we were unable to recover it. 00:27:24.582 [2024-07-24 18:22:17.379626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.582 [2024-07-24 18:22:17.379638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.582 qpair failed and we were unable to recover it. 00:27:24.582 [2024-07-24 18:22:17.379878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.582 [2024-07-24 18:22:17.379890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.582 qpair failed and we were unable to recover it. 00:27:24.582 [2024-07-24 18:22:17.380057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.583 [2024-07-24 18:22:17.380089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.583 qpair failed and we were unable to recover it. 00:27:24.583 [2024-07-24 18:22:17.380308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.583 [2024-07-24 18:22:17.380340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.583 qpair failed and we were unable to recover it. 00:27:24.583 [2024-07-24 18:22:17.380648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.583 [2024-07-24 18:22:17.380681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.583 qpair failed and we were unable to recover it. 00:27:24.583 [2024-07-24 18:22:17.380883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.583 [2024-07-24 18:22:17.380915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.583 qpair failed and we were unable to recover it. 00:27:24.583 [2024-07-24 18:22:17.381115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.583 [2024-07-24 18:22:17.381147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.583 qpair failed and we were unable to recover it. 00:27:24.583 [2024-07-24 18:22:17.381424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.583 [2024-07-24 18:22:17.381455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.583 qpair failed and we were unable to recover it. 
00:27:24.583 [2024-07-24 18:22:17.381696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.583 [2024-07-24 18:22:17.381729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.583 qpair failed and we were unable to recover it. 00:27:24.583 [2024-07-24 18:22:17.381920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.583 [2024-07-24 18:22:17.381932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.583 qpair failed and we were unable to recover it. 00:27:24.583 [2024-07-24 18:22:17.382166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.583 [2024-07-24 18:22:17.382178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.583 qpair failed and we were unable to recover it. 00:27:24.583 [2024-07-24 18:22:17.382412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.583 [2024-07-24 18:22:17.382424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.583 qpair failed and we were unable to recover it. 00:27:24.583 [2024-07-24 18:22:17.382589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.583 [2024-07-24 18:22:17.382601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.583 qpair failed and we were unable to recover it. 00:27:24.583 [2024-07-24 18:22:17.382837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.583 [2024-07-24 18:22:17.382849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.583 qpair failed and we were unable to recover it. 00:27:24.583 [2024-07-24 18:22:17.383071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.583 [2024-07-24 18:22:17.383083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.583 qpair failed and we were unable to recover it. 00:27:24.583 [2024-07-24 18:22:17.383338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.583 [2024-07-24 18:22:17.383350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.583 qpair failed and we were unable to recover it. 00:27:24.583 [2024-07-24 18:22:17.383518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.583 [2024-07-24 18:22:17.383532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.583 qpair failed and we were unable to recover it. 00:27:24.583 [2024-07-24 18:22:17.383750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.583 [2024-07-24 18:22:17.383761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.583 qpair failed and we were unable to recover it. 
00:27:24.583 [2024-07-24 18:22:17.383903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.583 [2024-07-24 18:22:17.383915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.583 qpair failed and we were unable to recover it. 00:27:24.583 [2024-07-24 18:22:17.384155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.583 [2024-07-24 18:22:17.384184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.583 qpair failed and we were unable to recover it. 00:27:24.583 [2024-07-24 18:22:17.384470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.583 [2024-07-24 18:22:17.384513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.583 qpair failed and we were unable to recover it. 00:27:24.583 [2024-07-24 18:22:17.384733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.583 [2024-07-24 18:22:17.384764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.583 qpair failed and we were unable to recover it. 00:27:24.583 [2024-07-24 18:22:17.385017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.583 [2024-07-24 18:22:17.385048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.583 qpair failed and we were unable to recover it. 00:27:24.583 [2024-07-24 18:22:17.385252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.583 [2024-07-24 18:22:17.385283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.583 qpair failed and we were unable to recover it. 00:27:24.583 [2024-07-24 18:22:17.385538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.583 [2024-07-24 18:22:17.385572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.583 qpair failed and we were unable to recover it. 00:27:24.583 [2024-07-24 18:22:17.385777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.583 [2024-07-24 18:22:17.385809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.583 qpair failed and we were unable to recover it. 00:27:24.583 [2024-07-24 18:22:17.385990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.583 [2024-07-24 18:22:17.386002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.583 qpair failed and we were unable to recover it. 00:27:24.583 [2024-07-24 18:22:17.386248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.583 [2024-07-24 18:22:17.386279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.583 qpair failed and we were unable to recover it. 
00:27:24.583 [2024-07-24 18:22:17.386477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.583 [2024-07-24 18:22:17.386519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.583 qpair failed and we were unable to recover it. 00:27:24.583 [2024-07-24 18:22:17.386717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.583 [2024-07-24 18:22:17.386747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.583 qpair failed and we were unable to recover it. 00:27:24.583 [2024-07-24 18:22:17.386886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.583 [2024-07-24 18:22:17.386917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.583 qpair failed and we were unable to recover it. 00:27:24.583 [2024-07-24 18:22:17.387194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.583 [2024-07-24 18:22:17.387225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.583 qpair failed and we were unable to recover it. 00:27:24.583 [2024-07-24 18:22:17.387487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.583 [2024-07-24 18:22:17.387502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.583 qpair failed and we were unable to recover it. 00:27:24.583 [2024-07-24 18:22:17.387643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.583 [2024-07-24 18:22:17.387654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.583 qpair failed and we were unable to recover it. 00:27:24.583 [2024-07-24 18:22:17.387838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.583 [2024-07-24 18:22:17.387869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.583 qpair failed and we were unable to recover it. 00:27:24.583 [2024-07-24 18:22:17.388067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.583 [2024-07-24 18:22:17.388098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.583 qpair failed and we were unable to recover it. 00:27:24.583 [2024-07-24 18:22:17.388321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.583 [2024-07-24 18:22:17.388352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.583 qpair failed and we were unable to recover it. 00:27:24.583 [2024-07-24 18:22:17.388632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.583 [2024-07-24 18:22:17.388664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.583 qpair failed and we were unable to recover it. 
00:27:24.583 [2024-07-24 18:22:17.388949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.583 [2024-07-24 18:22:17.388981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.583 qpair failed and we were unable to recover it. 00:27:24.583 [2024-07-24 18:22:17.389233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.583 [2024-07-24 18:22:17.389264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.583 qpair failed and we were unable to recover it. 00:27:24.584 [2024-07-24 18:22:17.389466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.584 [2024-07-24 18:22:17.389507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.584 qpair failed and we were unable to recover it. 00:27:24.584 [2024-07-24 18:22:17.389698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.584 [2024-07-24 18:22:17.389730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.584 qpair failed and we were unable to recover it. 00:27:24.584 [2024-07-24 18:22:17.389933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.584 [2024-07-24 18:22:17.389964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.584 qpair failed and we were unable to recover it. 00:27:24.584 [2024-07-24 18:22:17.390174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.584 [2024-07-24 18:22:17.390206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.584 qpair failed and we were unable to recover it. 00:27:24.584 [2024-07-24 18:22:17.390409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.584 [2024-07-24 18:22:17.390441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.584 qpair failed and we were unable to recover it. 00:27:24.584 [2024-07-24 18:22:17.390709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.584 [2024-07-24 18:22:17.390741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.584 qpair failed and we were unable to recover it. 00:27:24.584 [2024-07-24 18:22:17.390887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.584 [2024-07-24 18:22:17.390899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.584 qpair failed and we were unable to recover it. 00:27:24.584 [2024-07-24 18:22:17.391163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.584 [2024-07-24 18:22:17.391195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.584 qpair failed and we were unable to recover it. 
00:27:24.584 [2024-07-24 18:22:17.391412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.584 [2024-07-24 18:22:17.391443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.584 qpair failed and we were unable to recover it. 00:27:24.584 [2024-07-24 18:22:17.391740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.584 [2024-07-24 18:22:17.391773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.584 qpair failed and we were unable to recover it. 00:27:24.584 [2024-07-24 18:22:17.391970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.584 [2024-07-24 18:22:17.392002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.584 qpair failed and we were unable to recover it. 00:27:24.584 [2024-07-24 18:22:17.392311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.584 [2024-07-24 18:22:17.392343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.584 qpair failed and we were unable to recover it. 00:27:24.584 [2024-07-24 18:22:17.392601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.584 [2024-07-24 18:22:17.392634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.584 qpair failed and we were unable to recover it. 00:27:24.584 [2024-07-24 18:22:17.392859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.584 [2024-07-24 18:22:17.392891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.584 qpair failed and we were unable to recover it. 00:27:24.584 [2024-07-24 18:22:17.393175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.584 [2024-07-24 18:22:17.393207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.584 qpair failed and we were unable to recover it. 00:27:24.584 [2024-07-24 18:22:17.393399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.584 [2024-07-24 18:22:17.393410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.584 qpair failed and we were unable to recover it. 00:27:24.584 [2024-07-24 18:22:17.393628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.584 [2024-07-24 18:22:17.393670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.584 qpair failed and we were unable to recover it. 00:27:24.584 [2024-07-24 18:22:17.393862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.584 [2024-07-24 18:22:17.393894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.584 qpair failed and we were unable to recover it. 
00:27:24.584 [2024-07-24 18:22:17.394169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.584 [2024-07-24 18:22:17.394201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.584 qpair failed and we were unable to recover it. 00:27:24.584 [2024-07-24 18:22:17.394476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.584 [2024-07-24 18:22:17.394518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.584 qpair failed and we were unable to recover it. 00:27:24.584 [2024-07-24 18:22:17.394708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.584 [2024-07-24 18:22:17.394740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.584 qpair failed and we were unable to recover it. 00:27:24.584 [2024-07-24 18:22:17.395019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.584 [2024-07-24 18:22:17.395051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.584 qpair failed and we were unable to recover it. 00:27:24.584 [2024-07-24 18:22:17.395259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.584 [2024-07-24 18:22:17.395290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.584 qpair failed and we were unable to recover it. 00:27:24.584 [2024-07-24 18:22:17.395506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.584 [2024-07-24 18:22:17.395538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.584 qpair failed and we were unable to recover it. 00:27:24.584 [2024-07-24 18:22:17.395743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.584 [2024-07-24 18:22:17.395774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.584 qpair failed and we were unable to recover it. 00:27:24.584 [2024-07-24 18:22:17.396027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.584 [2024-07-24 18:22:17.396038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.584 qpair failed and we were unable to recover it. 00:27:24.584 [2024-07-24 18:22:17.396229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.584 [2024-07-24 18:22:17.396261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.584 qpair failed and we were unable to recover it. 00:27:24.584 [2024-07-24 18:22:17.396568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.584 [2024-07-24 18:22:17.396601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.584 qpair failed and we were unable to recover it. 
00:27:24.584 [2024-07-24 18:22:17.396919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.584 [2024-07-24 18:22:17.396952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.584 qpair failed and we were unable to recover it. 00:27:24.584 [2024-07-24 18:22:17.397158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.584 [2024-07-24 18:22:17.397214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.584 qpair failed and we were unable to recover it. 00:27:24.584 [2024-07-24 18:22:17.397439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.584 [2024-07-24 18:22:17.397470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.584 qpair failed and we were unable to recover it. 00:27:24.584 [2024-07-24 18:22:17.397784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.584 [2024-07-24 18:22:17.397816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.584 qpair failed and we were unable to recover it. 00:27:24.584 [2024-07-24 18:22:17.398082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.584 [2024-07-24 18:22:17.398114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.584 qpair failed and we were unable to recover it. 00:27:24.584 [2024-07-24 18:22:17.398312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.584 [2024-07-24 18:22:17.398344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.584 qpair failed and we were unable to recover it. 00:27:24.585 [2024-07-24 18:22:17.398618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.585 [2024-07-24 18:22:17.398650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.585 qpair failed and we were unable to recover it. 00:27:24.585 [2024-07-24 18:22:17.398862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.585 [2024-07-24 18:22:17.398900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.585 qpair failed and we were unable to recover it. 00:27:24.585 [2024-07-24 18:22:17.399161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.585 [2024-07-24 18:22:17.399190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.585 qpair failed and we were unable to recover it. 00:27:24.585 [2024-07-24 18:22:17.399425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.585 [2024-07-24 18:22:17.399456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.585 qpair failed and we were unable to recover it. 
00:27:24.585 [2024-07-24 18:22:17.399751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.585 [2024-07-24 18:22:17.399784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.585 qpair failed and we were unable to recover it. 00:27:24.585 [2024-07-24 18:22:17.400064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.585 [2024-07-24 18:22:17.400096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.585 qpair failed and we were unable to recover it. 00:27:24.585 [2024-07-24 18:22:17.400393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.585 [2024-07-24 18:22:17.400425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.585 qpair failed and we were unable to recover it. 00:27:24.585 [2024-07-24 18:22:17.400706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.585 [2024-07-24 18:22:17.400738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.585 qpair failed and we were unable to recover it. 00:27:24.585 [2024-07-24 18:22:17.400887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.585 [2024-07-24 18:22:17.400919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.585 qpair failed and we were unable to recover it. 00:27:24.585 [2024-07-24 18:22:17.401143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.585 [2024-07-24 18:22:17.401176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.585 qpair failed and we were unable to recover it. 00:27:24.585 [2024-07-24 18:22:17.401397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.585 [2024-07-24 18:22:17.401428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.585 qpair failed and we were unable to recover it. 00:27:24.585 [2024-07-24 18:22:17.401686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.585 [2024-07-24 18:22:17.401718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.585 qpair failed and we were unable to recover it. 00:27:24.585 [2024-07-24 18:22:17.402026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.585 [2024-07-24 18:22:17.402058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.585 qpair failed and we were unable to recover it. 00:27:24.585 [2024-07-24 18:22:17.402348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.585 [2024-07-24 18:22:17.402380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.585 qpair failed and we were unable to recover it. 
00:27:24.585 [2024-07-24 18:22:17.402665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.585 [2024-07-24 18:22:17.402698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.585 qpair failed and we were unable to recover it. 00:27:24.585 [2024-07-24 18:22:17.402983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.585 [2024-07-24 18:22:17.403015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.585 qpair failed and we were unable to recover it. 00:27:24.585 [2024-07-24 18:22:17.403210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.585 [2024-07-24 18:22:17.403222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.585 qpair failed and we were unable to recover it. 00:27:24.585 [2024-07-24 18:22:17.403399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.585 [2024-07-24 18:22:17.403431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.585 qpair failed and we were unable to recover it. 00:27:24.585 [2024-07-24 18:22:17.403710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.585 [2024-07-24 18:22:17.403743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.585 qpair failed and we were unable to recover it. 00:27:24.585 [2024-07-24 18:22:17.404023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.585 [2024-07-24 18:22:17.404054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.585 qpair failed and we were unable to recover it. 00:27:24.585 [2024-07-24 18:22:17.404316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.585 [2024-07-24 18:22:17.404348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.585 qpair failed and we were unable to recover it. 00:27:24.585 [2024-07-24 18:22:17.404626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.585 [2024-07-24 18:22:17.404659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.585 qpair failed and we were unable to recover it. 00:27:24.585 [2024-07-24 18:22:17.404962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.585 [2024-07-24 18:22:17.404999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.585 qpair failed and we were unable to recover it. 00:27:24.585 [2024-07-24 18:22:17.405195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.585 [2024-07-24 18:22:17.405227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.585 qpair failed and we were unable to recover it. 
00:27:24.585 [2024-07-24 18:22:17.405428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.585 [2024-07-24 18:22:17.405440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.585 qpair failed and we were unable to recover it. 00:27:24.585 [2024-07-24 18:22:17.405550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.585 [2024-07-24 18:22:17.405561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.585 qpair failed and we were unable to recover it. 00:27:24.585 [2024-07-24 18:22:17.405712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.585 [2024-07-24 18:22:17.405724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.585 qpair failed and we were unable to recover it. 00:27:24.585 [2024-07-24 18:22:17.405824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.585 [2024-07-24 18:22:17.405835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.585 qpair failed and we were unable to recover it. 00:27:24.585 [2024-07-24 18:22:17.406080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.585 [2024-07-24 18:22:17.406112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.585 qpair failed and we were unable to recover it. 00:27:24.585 [2024-07-24 18:22:17.406418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.585 [2024-07-24 18:22:17.406450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.585 qpair failed and we were unable to recover it. 00:27:24.585 [2024-07-24 18:22:17.406672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.585 [2024-07-24 18:22:17.406704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.585 qpair failed and we were unable to recover it. 00:27:24.585 [2024-07-24 18:22:17.406901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.585 [2024-07-24 18:22:17.406933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.585 qpair failed and we were unable to recover it. 00:27:24.585 [2024-07-24 18:22:17.407157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.585 [2024-07-24 18:22:17.407189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.585 qpair failed and we were unable to recover it. 00:27:24.585 [2024-07-24 18:22:17.407476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.585 [2024-07-24 18:22:17.407517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.586 qpair failed and we were unable to recover it. 
00:27:24.586 [2024-07-24 18:22:17.407673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.586 [2024-07-24 18:22:17.407704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.586 qpair failed and we were unable to recover it. 00:27:24.586 [2024-07-24 18:22:17.407961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.586 [2024-07-24 18:22:17.407993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.586 qpair failed and we were unable to recover it. 00:27:24.586 [2024-07-24 18:22:17.408271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.586 [2024-07-24 18:22:17.408303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.586 qpair failed and we were unable to recover it. 00:27:24.586 [2024-07-24 18:22:17.408586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.586 [2024-07-24 18:22:17.408619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.586 qpair failed and we were unable to recover it. 00:27:24.586 [2024-07-24 18:22:17.408847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.586 [2024-07-24 18:22:17.408879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.586 qpair failed and we were unable to recover it. 00:27:24.586 [2024-07-24 18:22:17.409163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.586 [2024-07-24 18:22:17.409194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.586 qpair failed and we were unable to recover it. 00:27:24.586 [2024-07-24 18:22:17.409389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.586 [2024-07-24 18:22:17.409401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.586 qpair failed and we were unable to recover it. 00:27:24.586 [2024-07-24 18:22:17.409646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.586 [2024-07-24 18:22:17.409678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.586 qpair failed and we were unable to recover it. 00:27:24.586 [2024-07-24 18:22:17.409884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.586 [2024-07-24 18:22:17.409917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.586 qpair failed and we were unable to recover it. 00:27:24.586 [2024-07-24 18:22:17.410199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.586 [2024-07-24 18:22:17.410231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.586 qpair failed and we were unable to recover it. 
00:27:24.586 [2024-07-24 18:22:17.410511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.586 [2024-07-24 18:22:17.410544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.586 qpair failed and we were unable to recover it. 00:27:24.586 [2024-07-24 18:22:17.410805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.586 [2024-07-24 18:22:17.410837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.586 qpair failed and we were unable to recover it. 00:27:24.586 [2024-07-24 18:22:17.411089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.586 [2024-07-24 18:22:17.411100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.586 qpair failed and we were unable to recover it. 00:27:24.586 [2024-07-24 18:22:17.411346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.586 [2024-07-24 18:22:17.411359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.586 qpair failed and we were unable to recover it. 00:27:24.586 [2024-07-24 18:22:17.411648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.586 [2024-07-24 18:22:17.411679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.586 qpair failed and we were unable to recover it. 00:27:24.586 [2024-07-24 18:22:17.411887] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ebff0 is same with the state(5) to be set 00:27:24.586 [2024-07-24 18:22:17.412205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.586 [2024-07-24 18:22:17.412280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.586 qpair failed and we were unable to recover it. 00:27:24.586 [2024-07-24 18:22:17.412538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.586 [2024-07-24 18:22:17.412576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.586 qpair failed and we were unable to recover it. 00:27:24.586 [2024-07-24 18:22:17.412858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.586 [2024-07-24 18:22:17.412890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.586 qpair failed and we were unable to recover it. 00:27:24.586 [2024-07-24 18:22:17.413164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.586 [2024-07-24 18:22:17.413197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.586 qpair failed and we were unable to recover it. 
00:27:24.586 [2024-07-24 18:22:17.413416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.586 [2024-07-24 18:22:17.413447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.586 qpair failed and we were unable to recover it. 00:27:24.586 [2024-07-24 18:22:17.413724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.586 [2024-07-24 18:22:17.413757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.586 qpair failed and we were unable to recover it. 00:27:24.586 [2024-07-24 18:22:17.413910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.586 [2024-07-24 18:22:17.413942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.586 qpair failed and we were unable to recover it. 00:27:24.586 [2024-07-24 18:22:17.414156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.586 [2024-07-24 18:22:17.414187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.586 qpair failed and we were unable to recover it. 00:27:24.586 [2024-07-24 18:22:17.414391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.586 [2024-07-24 18:22:17.414423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.586 qpair failed and we were unable to recover it. 00:27:24.586 [2024-07-24 18:22:17.414700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.586 [2024-07-24 18:22:17.414733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.586 qpair failed and we were unable to recover it. 00:27:24.586 [2024-07-24 18:22:17.415042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.586 [2024-07-24 18:22:17.415073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.586 qpair failed and we were unable to recover it. 00:27:24.586 [2024-07-24 18:22:17.415342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.586 [2024-07-24 18:22:17.415373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.586 qpair failed and we were unable to recover it. 00:27:24.586 [2024-07-24 18:22:17.415656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.586 [2024-07-24 18:22:17.415689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.586 qpair failed and we were unable to recover it. 00:27:24.586 [2024-07-24 18:22:17.415909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.586 [2024-07-24 18:22:17.415950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.586 qpair failed and we were unable to recover it. 
00:27:24.586 [2024-07-24 18:22:17.416149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.586 [2024-07-24 18:22:17.416181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.586 qpair failed and we were unable to recover it. 00:27:24.586 [2024-07-24 18:22:17.416393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.586 [2024-07-24 18:22:17.416409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.586 qpair failed and we were unable to recover it. 00:27:24.586 [2024-07-24 18:22:17.416638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.586 [2024-07-24 18:22:17.416672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.586 qpair failed and we were unable to recover it. 00:27:24.586 [2024-07-24 18:22:17.416952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.586 [2024-07-24 18:22:17.416991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.586 qpair failed and we were unable to recover it. 00:27:24.586 [2024-07-24 18:22:17.417216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.586 [2024-07-24 18:22:17.417233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.586 qpair failed and we were unable to recover it. 00:27:24.586 [2024-07-24 18:22:17.417410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.586 [2024-07-24 18:22:17.417427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.586 qpair failed and we were unable to recover it. 00:27:24.586 [2024-07-24 18:22:17.417590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.586 [2024-07-24 18:22:17.417622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.586 qpair failed and we were unable to recover it. 00:27:24.586 [2024-07-24 18:22:17.417880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.587 [2024-07-24 18:22:17.417911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.587 qpair failed and we were unable to recover it. 00:27:24.587 [2024-07-24 18:22:17.418184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.587 [2024-07-24 18:22:17.418201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.587 qpair failed and we were unable to recover it. 00:27:24.587 [2024-07-24 18:22:17.418442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.587 [2024-07-24 18:22:17.418459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.587 qpair failed and we were unable to recover it. 
00:27:24.587 [2024-07-24 18:22:17.418572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.587 [2024-07-24 18:22:17.418588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.587 qpair failed and we were unable to recover it. 00:27:24.587 [2024-07-24 18:22:17.418838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.587 [2024-07-24 18:22:17.418854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.587 qpair failed and we were unable to recover it. 00:27:24.587 [2024-07-24 18:22:17.419033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.587 [2024-07-24 18:22:17.419065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.587 qpair failed and we were unable to recover it. 00:27:24.587 [2024-07-24 18:22:17.419261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.587 [2024-07-24 18:22:17.419292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.587 qpair failed and we were unable to recover it. 00:27:24.587 [2024-07-24 18:22:17.419618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.587 [2024-07-24 18:22:17.419651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.587 qpair failed and we were unable to recover it. 00:27:24.587 [2024-07-24 18:22:17.419974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.587 [2024-07-24 18:22:17.420005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.587 qpair failed and we were unable to recover it. 00:27:24.587 [2024-07-24 18:22:17.420267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.587 [2024-07-24 18:22:17.420298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.587 qpair failed and we were unable to recover it. 00:27:24.587 [2024-07-24 18:22:17.420607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.587 [2024-07-24 18:22:17.420640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.587 qpair failed and we were unable to recover it. 00:27:24.587 [2024-07-24 18:22:17.420910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.587 [2024-07-24 18:22:17.420942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.587 qpair failed and we were unable to recover it. 00:27:24.587 [2024-07-24 18:22:17.421078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.587 [2024-07-24 18:22:17.421109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.587 qpair failed and we were unable to recover it. 
00:27:24.587 [2024-07-24 18:22:17.421346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.587 [2024-07-24 18:22:17.421377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.587 qpair failed and we were unable to recover it. 00:27:24.587 [2024-07-24 18:22:17.421572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.587 [2024-07-24 18:22:17.421604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.587 qpair failed and we were unable to recover it. 00:27:24.587 [2024-07-24 18:22:17.421912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.587 [2024-07-24 18:22:17.421943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.587 qpair failed and we were unable to recover it. 00:27:24.587 [2024-07-24 18:22:17.422232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.587 [2024-07-24 18:22:17.422262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.587 qpair failed and we were unable to recover it. 00:27:24.587 [2024-07-24 18:22:17.422489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.587 [2024-07-24 18:22:17.422536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.587 qpair failed and we were unable to recover it. 00:27:24.587 [2024-07-24 18:22:17.422744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.587 [2024-07-24 18:22:17.422777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.587 qpair failed and we were unable to recover it. 00:27:24.587 [2024-07-24 18:22:17.423061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.587 [2024-07-24 18:22:17.423098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.587 qpair failed and we were unable to recover it. 00:27:24.587 [2024-07-24 18:22:17.423387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.587 [2024-07-24 18:22:17.423418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.587 qpair failed and we were unable to recover it. 00:27:24.587 [2024-07-24 18:22:17.423703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.587 [2024-07-24 18:22:17.423736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.587 qpair failed and we were unable to recover it. 00:27:24.587 [2024-07-24 18:22:17.423933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.587 [2024-07-24 18:22:17.423964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.587 qpair failed and we were unable to recover it. 
00:27:24.587 [2024-07-24 18:22:17.424223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.587 [2024-07-24 18:22:17.424254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.587 qpair failed and we were unable to recover it. 00:27:24.587 [2024-07-24 18:22:17.424475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.587 [2024-07-24 18:22:17.424498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.587 qpair failed and we were unable to recover it. 00:27:24.587 [2024-07-24 18:22:17.424676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.587 [2024-07-24 18:22:17.424693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.587 qpair failed and we were unable to recover it. 00:27:24.587 [2024-07-24 18:22:17.424945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.587 [2024-07-24 18:22:17.424976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.587 qpair failed and we were unable to recover it. 00:27:24.587 [2024-07-24 18:22:17.425205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.587 [2024-07-24 18:22:17.425236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.587 qpair failed and we were unable to recover it. 00:27:24.587 [2024-07-24 18:22:17.425496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.587 [2024-07-24 18:22:17.425513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.587 qpair failed and we were unable to recover it. 00:27:24.587 [2024-07-24 18:22:17.425739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.587 [2024-07-24 18:22:17.425756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.587 qpair failed and we were unable to recover it. 00:27:24.587 [2024-07-24 18:22:17.425960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.587 [2024-07-24 18:22:17.425991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.587 qpair failed and we were unable to recover it. 00:27:24.587 [2024-07-24 18:22:17.426247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.587 [2024-07-24 18:22:17.426279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.587 qpair failed and we were unable to recover it. 00:27:24.587 [2024-07-24 18:22:17.426474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.587 [2024-07-24 18:22:17.426496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.587 qpair failed and we were unable to recover it. 
00:27:24.587 [2024-07-24 18:22:17.426678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.587 [2024-07-24 18:22:17.426696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.587 qpair failed and we were unable to recover it. 00:27:24.587 [2024-07-24 18:22:17.426880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.587 [2024-07-24 18:22:17.426898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.587 qpair failed and we were unable to recover it. 00:27:24.587 [2024-07-24 18:22:17.427077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.587 [2024-07-24 18:22:17.427094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.587 qpair failed and we were unable to recover it. 00:27:24.587 [2024-07-24 18:22:17.427217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.587 [2024-07-24 18:22:17.427249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.587 qpair failed and we were unable to recover it. 00:27:24.587 [2024-07-24 18:22:17.427531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.587 [2024-07-24 18:22:17.427564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.587 qpair failed and we were unable to recover it. 00:27:24.587 [2024-07-24 18:22:17.427827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.588 [2024-07-24 18:22:17.427858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.588 qpair failed and we were unable to recover it. 00:27:24.588 [2024-07-24 18:22:17.428140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.588 [2024-07-24 18:22:17.428172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.588 qpair failed and we were unable to recover it. 00:27:24.588 [2024-07-24 18:22:17.428374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.588 [2024-07-24 18:22:17.428405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.588 qpair failed and we were unable to recover it. 00:27:24.588 [2024-07-24 18:22:17.428677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.588 [2024-07-24 18:22:17.428695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.588 qpair failed and we were unable to recover it. 00:27:24.588 [2024-07-24 18:22:17.428946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.588 [2024-07-24 18:22:17.428977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.588 qpair failed and we were unable to recover it. 
00:27:24.588 [2024-07-24 18:22:17.429207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.588 [2024-07-24 18:22:17.429239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:24.588 qpair failed and we were unable to recover it.
00:27:24.588 [2024-07-24 18:22:17.429430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.588 [2024-07-24 18:22:17.429462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:24.588 qpair failed and we were unable to recover it.
00:27:24.588 [2024-07-24 18:22:17.429785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.588 [2024-07-24 18:22:17.429859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.588 qpair failed and we were unable to recover it.
00:27:24.588 [2024-07-24 18:22:17.430142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.588 [2024-07-24 18:22:17.430188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.588 qpair failed and we were unable to recover it.
00:27:24.588 [2024-07-24 18:22:17.430536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.588 [2024-07-24 18:22:17.430570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.588 qpair failed and we were unable to recover it.
00:27:24.588 [2024-07-24 18:22:17.430784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.588 [2024-07-24 18:22:17.430816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.588 qpair failed and we were unable to recover it.
00:27:24.588 [2024-07-24 18:22:17.431048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.588 [2024-07-24 18:22:17.431080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.588 qpair failed and we were unable to recover it.
00:27:24.588 [2024-07-24 18:22:17.431286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.588 [2024-07-24 18:22:17.431318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.588 qpair failed and we were unable to recover it.
00:27:24.588 [2024-07-24 18:22:17.431516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.588 [2024-07-24 18:22:17.431549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.588 qpair failed and we were unable to recover it.
00:27:24.588 [2024-07-24 18:22:17.431749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.588 [2024-07-24 18:22:17.431780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.588 qpair failed and we were unable to recover it.
00:27:24.588 [2024-07-24 18:22:17.431997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.588 [2024-07-24 18:22:17.432029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.588 qpair failed and we were unable to recover it.
00:27:24.588 [2024-07-24 18:22:17.432229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.588 [2024-07-24 18:22:17.432244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.588 qpair failed and we were unable to recover it.
00:27:24.588 [2024-07-24 18:22:17.432382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.588 [2024-07-24 18:22:17.432394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.588 qpair failed and we were unable to recover it.
00:27:24.588 [2024-07-24 18:22:17.432630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.588 [2024-07-24 18:22:17.432663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.588 qpair failed and we were unable to recover it.
00:27:24.588 [2024-07-24 18:22:17.432882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.588 [2024-07-24 18:22:17.432913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.588 qpair failed and we were unable to recover it.
00:27:24.588 [2024-07-24 18:22:17.433046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.588 [2024-07-24 18:22:17.433077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.588 qpair failed and we were unable to recover it.
00:27:24.588 [2024-07-24 18:22:17.433306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.588 [2024-07-24 18:22:17.433318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.588 qpair failed and we were unable to recover it.
00:27:24.588 [2024-07-24 18:22:17.433551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.588 [2024-07-24 18:22:17.433563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.588 qpair failed and we were unable to recover it.
00:27:24.588 [2024-07-24 18:22:17.433829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.588 [2024-07-24 18:22:17.433869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.588 qpair failed and we were unable to recover it.
00:27:24.588 [2024-07-24 18:22:17.434085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.588 [2024-07-24 18:22:17.434117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.588 qpair failed and we were unable to recover it.
00:27:24.588 [2024-07-24 18:22:17.434348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.588 [2024-07-24 18:22:17.434380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.588 qpair failed and we were unable to recover it.
00:27:24.588 [2024-07-24 18:22:17.434601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.588 [2024-07-24 18:22:17.434613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.588 qpair failed and we were unable to recover it.
00:27:24.588 [2024-07-24 18:22:17.434908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.588 [2024-07-24 18:22:17.434940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.588 qpair failed and we were unable to recover it.
00:27:24.588 [2024-07-24 18:22:17.435195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.588 [2024-07-24 18:22:17.435226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.588 qpair failed and we were unable to recover it.
00:27:24.588 [2024-07-24 18:22:17.435483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.588 [2024-07-24 18:22:17.435502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.588 qpair failed and we were unable to recover it.
00:27:24.588 [2024-07-24 18:22:17.435761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.588 [2024-07-24 18:22:17.435773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.588 qpair failed and we were unable to recover it.
00:27:24.588 [2024-07-24 18:22:17.435894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.588 [2024-07-24 18:22:17.435925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.588 qpair failed and we were unable to recover it.
00:27:24.588 [2024-07-24 18:22:17.436150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.588 [2024-07-24 18:22:17.436182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.588 qpair failed and we were unable to recover it.
00:27:24.588 [2024-07-24 18:22:17.436488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.588 [2024-07-24 18:22:17.436537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.588 qpair failed and we were unable to recover it.
00:27:24.588 [2024-07-24 18:22:17.436692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.588 [2024-07-24 18:22:17.436724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.588 qpair failed and we were unable to recover it.
00:27:24.588 [2024-07-24 18:22:17.436925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.588 [2024-07-24 18:22:17.436999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7834000b90 with addr=10.0.0.2, port=4420
00:27:24.588 qpair failed and we were unable to recover it.
00:27:24.588 [2024-07-24 18:22:17.437260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.588 [2024-07-24 18:22:17.437280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7834000b90 with addr=10.0.0.2, port=4420
00:27:24.588 qpair failed and we were unable to recover it.
00:27:24.588 [2024-07-24 18:22:17.437541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.588 [2024-07-24 18:22:17.437576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7834000b90 with addr=10.0.0.2, port=4420
00:27:24.588 qpair failed and we were unable to recover it.
00:27:24.588 [2024-07-24 18:22:17.437822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.589 [2024-07-24 18:22:17.437854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7834000b90 with addr=10.0.0.2, port=4420
00:27:24.589 qpair failed and we were unable to recover it.
00:27:24.589 [2024-07-24 18:22:17.438065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.589 [2024-07-24 18:22:17.438097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7834000b90 with addr=10.0.0.2, port=4420
00:27:24.589 qpair failed and we were unable to recover it.
00:27:24.589 [2024-07-24 18:22:17.438310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.589 [2024-07-24 18:22:17.438343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7834000b90 with addr=10.0.0.2, port=4420
00:27:24.589 qpair failed and we were unable to recover it.
00:27:24.589 [2024-07-24 18:22:17.438656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.589 [2024-07-24 18:22:17.438688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7834000b90 with addr=10.0.0.2, port=4420
00:27:24.589 qpair failed and we were unable to recover it.
00:27:24.589 [2024-07-24 18:22:17.438977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.589 [2024-07-24 18:22:17.439009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7834000b90 with addr=10.0.0.2, port=4420
00:27:24.589 qpair failed and we were unable to recover it.
00:27:24.589 [2024-07-24 18:22:17.439253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.589 [2024-07-24 18:22:17.439284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7834000b90 with addr=10.0.0.2, port=4420
00:27:24.589 qpair failed and we were unable to recover it.
00:27:24.589 [2024-07-24 18:22:17.439578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.589 [2024-07-24 18:22:17.439612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7834000b90 with addr=10.0.0.2, port=4420
00:27:24.589 qpair failed and we were unable to recover it.
00:27:24.589 [2024-07-24 18:22:17.439808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.589 [2024-07-24 18:22:17.439840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7834000b90 with addr=10.0.0.2, port=4420
00:27:24.589 qpair failed and we were unable to recover it.
00:27:24.589 [2024-07-24 18:22:17.440053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.589 [2024-07-24 18:22:17.440085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7834000b90 with addr=10.0.0.2, port=4420
00:27:24.589 qpair failed and we were unable to recover it.
00:27:24.589 [2024-07-24 18:22:17.440324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.589 [2024-07-24 18:22:17.440356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7834000b90 with addr=10.0.0.2, port=4420
00:27:24.589 qpair failed and we were unable to recover it.
00:27:24.589 [2024-07-24 18:22:17.440641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.589 [2024-07-24 18:22:17.440685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7834000b90 with addr=10.0.0.2, port=4420
00:27:24.589 qpair failed and we were unable to recover it.
00:27:24.589 [2024-07-24 18:22:17.440864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.589 [2024-07-24 18:22:17.440896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7834000b90 with addr=10.0.0.2, port=4420
00:27:24.589 qpair failed and we were unable to recover it.
00:27:24.589 [2024-07-24 18:22:17.441141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.589 [2024-07-24 18:22:17.441173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7834000b90 with addr=10.0.0.2, port=4420
00:27:24.589 qpair failed and we were unable to recover it.
00:27:24.589 [2024-07-24 18:22:17.441319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.589 [2024-07-24 18:22:17.441337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7834000b90 with addr=10.0.0.2, port=4420
00:27:24.589 qpair failed and we were unable to recover it.
00:27:24.589 [2024-07-24 18:22:17.441557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.589 [2024-07-24 18:22:17.441575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7834000b90 with addr=10.0.0.2, port=4420
00:27:24.589 qpair failed and we were unable to recover it.
00:27:24.589 [2024-07-24 18:22:17.441687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.589 [2024-07-24 18:22:17.441702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7834000b90 with addr=10.0.0.2, port=4420
00:27:24.589 qpair failed and we were unable to recover it.
00:27:24.589 [2024-07-24 18:22:17.441788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.589 [2024-07-24 18:22:17.441804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7834000b90 with addr=10.0.0.2, port=4420
00:27:24.589 qpair failed and we were unable to recover it.
00:27:24.589 [2024-07-24 18:22:17.441934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.589 [2024-07-24 18:22:17.441951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7834000b90 with addr=10.0.0.2, port=4420
00:27:24.589 qpair failed and we were unable to recover it.
00:27:24.589 [2024-07-24 18:22:17.442134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.589 [2024-07-24 18:22:17.442149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.589 qpair failed and we were unable to recover it.
00:27:24.589 [2024-07-24 18:22:17.442337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.589 [2024-07-24 18:22:17.442348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.589 qpair failed and we were unable to recover it.
00:27:24.589 [2024-07-24 18:22:17.442447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.589 [2024-07-24 18:22:17.442458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.589 qpair failed and we were unable to recover it.
00:27:24.589 [2024-07-24 18:22:17.442618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.589 [2024-07-24 18:22:17.442651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.589 qpair failed and we were unable to recover it.
00:27:24.589 [2024-07-24 18:22:17.442847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.589 [2024-07-24 18:22:17.442879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.589 qpair failed and we were unable to recover it.
00:27:24.589 [2024-07-24 18:22:17.443125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.589 [2024-07-24 18:22:17.443157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.589 qpair failed and we were unable to recover it.
00:27:24.589 [2024-07-24 18:22:17.443456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.589 [2024-07-24 18:22:17.443468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.589 qpair failed and we were unable to recover it.
00:27:24.589 [2024-07-24 18:22:17.443643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.589 [2024-07-24 18:22:17.443676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.589 qpair failed and we were unable to recover it.
00:27:24.589 [2024-07-24 18:22:17.443949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.589 [2024-07-24 18:22:17.443981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.589 qpair failed and we were unable to recover it.
00:27:24.589 [2024-07-24 18:22:17.444330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.589 [2024-07-24 18:22:17.444362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.589 qpair failed and we were unable to recover it.
00:27:24.589 [2024-07-24 18:22:17.444575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.589 [2024-07-24 18:22:17.444589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.589 qpair failed and we were unable to recover it.
00:27:24.589 [2024-07-24 18:22:17.444712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.589 [2024-07-24 18:22:17.444744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.589 qpair failed and we were unable to recover it.
00:27:24.589 [2024-07-24 18:22:17.444910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.589 [2024-07-24 18:22:17.444941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.589 qpair failed and we were unable to recover it.
00:27:24.589 [2024-07-24 18:22:17.445205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.589 [2024-07-24 18:22:17.445237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.589 qpair failed and we were unable to recover it.
00:27:24.589 [2024-07-24 18:22:17.445483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.590 [2024-07-24 18:22:17.445536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.590 qpair failed and we were unable to recover it.
00:27:24.590 [2024-07-24 18:22:17.445689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.590 [2024-07-24 18:22:17.445721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.590 qpair failed and we were unable to recover it.
00:27:24.590 [2024-07-24 18:22:17.445990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.590 [2024-07-24 18:22:17.446022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.590 qpair failed and we were unable to recover it.
00:27:24.590 [2024-07-24 18:22:17.446304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.590 [2024-07-24 18:22:17.446316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.590 qpair failed and we were unable to recover it.
00:27:24.590 [2024-07-24 18:22:17.446553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.590 [2024-07-24 18:22:17.446586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.590 qpair failed and we were unable to recover it.
00:27:24.590 [2024-07-24 18:22:17.446797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.590 [2024-07-24 18:22:17.446829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.590 qpair failed and we were unable to recover it.
00:27:24.590 [2024-07-24 18:22:17.447044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.590 [2024-07-24 18:22:17.447077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.590 qpair failed and we were unable to recover it.
00:27:24.590 [2024-07-24 18:22:17.447225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.590 [2024-07-24 18:22:17.447257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.590 qpair failed and we were unable to recover it.
00:27:24.590 [2024-07-24 18:22:17.447405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.590 [2024-07-24 18:22:17.447437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.590 qpair failed and we were unable to recover it.
00:27:24.590 [2024-07-24 18:22:17.447724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.590 [2024-07-24 18:22:17.447756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.590 qpair failed and we were unable to recover it.
00:27:24.590 [2024-07-24 18:22:17.447902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.590 [2024-07-24 18:22:17.447934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.590 qpair failed and we were unable to recover it.
00:27:24.590 [2024-07-24 18:22:17.448249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.590 [2024-07-24 18:22:17.448281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.590 qpair failed and we were unable to recover it.
00:27:24.590 [2024-07-24 18:22:17.448507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.590 [2024-07-24 18:22:17.448539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.590 qpair failed and we were unable to recover it.
00:27:24.590 [2024-07-24 18:22:17.448739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.590 [2024-07-24 18:22:17.448771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.590 qpair failed and we were unable to recover it.
00:27:24.590 [2024-07-24 18:22:17.448981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.590 [2024-07-24 18:22:17.449013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.590 qpair failed and we were unable to recover it.
00:27:24.590 [2024-07-24 18:22:17.449241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.590 [2024-07-24 18:22:17.449273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.590 qpair failed and we were unable to recover it.
00:27:24.590 [2024-07-24 18:22:17.449551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.590 [2024-07-24 18:22:17.449564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.590 qpair failed and we were unable to recover it.
00:27:24.590 [2024-07-24 18:22:17.449734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.590 [2024-07-24 18:22:17.449745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.590 qpair failed and we were unable to recover it.
00:27:24.590 [2024-07-24 18:22:17.449906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.590 [2024-07-24 18:22:17.449944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.590 qpair failed and we were unable to recover it.
00:27:24.590 [2024-07-24 18:22:17.450167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.590 [2024-07-24 18:22:17.450198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.590 qpair failed and we were unable to recover it.
00:27:24.590 [2024-07-24 18:22:17.450518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.590 [2024-07-24 18:22:17.450551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.590 qpair failed and we were unable to recover it.
00:27:24.590 [2024-07-24 18:22:17.450762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.590 [2024-07-24 18:22:17.450794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.590 qpair failed and we were unable to recover it.
00:27:24.590 [2024-07-24 18:22:17.450997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.590 [2024-07-24 18:22:17.451030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.590 qpair failed and we were unable to recover it.
00:27:24.590 [2024-07-24 18:22:17.451327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.590 [2024-07-24 18:22:17.451339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.590 qpair failed and we were unable to recover it.
00:27:24.590 [2024-07-24 18:22:17.451617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.590 [2024-07-24 18:22:17.451630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.590 qpair failed and we were unable to recover it.
00:27:24.590 [2024-07-24 18:22:17.451758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.590 [2024-07-24 18:22:17.451771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.590 qpair failed and we were unable to recover it.
00:27:24.590 [2024-07-24 18:22:17.451873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.590 [2024-07-24 18:22:17.451884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.590 qpair failed and we were unable to recover it.
00:27:24.590 [2024-07-24 18:22:17.452054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.590 [2024-07-24 18:22:17.452086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.590 qpair failed and we were unable to recover it.
00:27:24.590 [2024-07-24 18:22:17.452383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.590 [2024-07-24 18:22:17.452415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.590 qpair failed and we were unable to recover it.
00:27:24.590 [2024-07-24 18:22:17.452705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.590 [2024-07-24 18:22:17.452737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.590 qpair failed and we were unable to recover it.
00:27:24.590 [2024-07-24 18:22:17.453006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.590 [2024-07-24 18:22:17.453039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.590 qpair failed and we were unable to recover it.
00:27:24.590 [2024-07-24 18:22:17.453365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.590 [2024-07-24 18:22:17.453396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.590 qpair failed and we were unable to recover it.
00:27:24.590 [2024-07-24 18:22:17.453551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.590 [2024-07-24 18:22:17.453586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.590 qpair failed and we were unable to recover it.
00:27:24.590 [2024-07-24 18:22:17.453784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.590 [2024-07-24 18:22:17.453816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.590 qpair failed and we were unable to recover it.
00:27:24.590 [2024-07-24 18:22:17.453984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.590 [2024-07-24 18:22:17.454025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.590 qpair failed and we were unable to recover it.
00:27:24.590 [2024-07-24 18:22:17.454304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.590 [2024-07-24 18:22:17.454336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.590 qpair failed and we were unable to recover it.
00:27:24.590 [2024-07-24 18:22:17.454547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.590 [2024-07-24 18:22:17.454582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.590 qpair failed and we were unable to recover it.
00:27:24.590 [2024-07-24 18:22:17.454848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.590 [2024-07-24 18:22:17.454880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.590 qpair failed and we were unable to recover it.
00:27:24.590 [2024-07-24 18:22:17.455151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.591 [2024-07-24 18:22:17.455184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.591 qpair failed and we were unable to recover it.
00:27:24.591 [2024-07-24 18:22:17.455335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.591 [2024-07-24 18:22:17.455373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.591 qpair failed and we were unable to recover it.
00:27:24.591 [2024-07-24 18:22:17.455621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.591 [2024-07-24 18:22:17.455634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.591 qpair failed and we were unable to recover it.
00:27:24.591 [2024-07-24 18:22:17.455824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.591 [2024-07-24 18:22:17.455836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.591 qpair failed and we were unable to recover it.
00:27:24.591 [2024-07-24 18:22:17.456036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.591 [2024-07-24 18:22:17.456068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.591 qpair failed and we were unable to recover it.
00:27:24.591 [2024-07-24 18:22:17.456357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.591 [2024-07-24 18:22:17.456389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.591 qpair failed and we were unable to recover it.
00:27:24.591 [2024-07-24 18:22:17.456527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.591 [2024-07-24 18:22:17.456540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.591 qpair failed and we were unable to recover it.
00:27:24.591 [2024-07-24 18:22:17.456706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.591 [2024-07-24 18:22:17.456721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.591 qpair failed and we were unable to recover it.
00:27:24.591 [2024-07-24 18:22:17.456951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.591 [2024-07-24 18:22:17.456983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.591 qpair failed and we were unable to recover it.
00:27:24.591 [2024-07-24 18:22:17.457215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.591 [2024-07-24 18:22:17.457248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.591 qpair failed and we were unable to recover it.
00:27:24.591 [2024-07-24 18:22:17.457533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.591 [2024-07-24 18:22:17.457545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.591 qpair failed and we were unable to recover it.
00:27:24.591 [2024-07-24 18:22:17.457706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.591 [2024-07-24 18:22:17.457737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.591 qpair failed and we were unable to recover it.
00:27:24.591 [2024-07-24 18:22:17.457944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.591 [2024-07-24 18:22:17.457976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.591 qpair failed and we were unable to recover it.
00:27:24.591 [2024-07-24 18:22:17.458242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.591 [2024-07-24 18:22:17.458274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.591 qpair failed and we were unable to recover it.
00:27:24.591 [2024-07-24 18:22:17.458428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.591 [2024-07-24 18:22:17.458460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.591 qpair failed and we were unable to recover it.
00:27:24.591 [2024-07-24 18:22:17.458666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.591 [2024-07-24 18:22:17.458699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.591 qpair failed and we were unable to recover it.
00:27:24.591 [2024-07-24 18:22:17.458913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.591 [2024-07-24 18:22:17.458946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.591 qpair failed and we were unable to recover it.
00:27:24.591 [2024-07-24 18:22:17.459156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.591 [2024-07-24 18:22:17.459169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.591 qpair failed and we were unable to recover it.
00:27:24.591 [2024-07-24 18:22:17.459288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.591 [2024-07-24 18:22:17.459320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.591 qpair failed and we were unable to recover it.
00:27:24.591 [2024-07-24 18:22:17.459463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.591 [2024-07-24 18:22:17.459510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.591 qpair failed and we were unable to recover it.
00:27:24.591 [2024-07-24 18:22:17.459822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.591 [2024-07-24 18:22:17.459854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.591 qpair failed and we were unable to recover it.
00:27:24.591 [2024-07-24 18:22:17.460026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.591 [2024-07-24 18:22:17.460059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.591 qpair failed and we were unable to recover it.
00:27:24.591 [2024-07-24 18:22:17.460347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.591 [2024-07-24 18:22:17.460379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.591 qpair failed and we were unable to recover it.
00:27:24.591 [2024-07-24 18:22:17.460542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.591 [2024-07-24 18:22:17.460576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.591 qpair failed and we were unable to recover it.
00:27:24.591 [2024-07-24 18:22:17.460744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.591 [2024-07-24 18:22:17.460776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.591 qpair failed and we were unable to recover it.
00:27:24.591 [2024-07-24 18:22:17.460930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.591 [2024-07-24 18:22:17.460963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.591 qpair failed and we were unable to recover it.
00:27:24.591 [2024-07-24 18:22:17.461161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.591 [2024-07-24 18:22:17.461193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.591 qpair failed and we were unable to recover it.
00:27:24.591 [2024-07-24 18:22:17.461407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.591 [2024-07-24 18:22:17.461419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.591 qpair failed and we were unable to recover it.
00:27:24.591 [2024-07-24 18:22:17.461525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.591 [2024-07-24 18:22:17.461537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.591 qpair failed and we were unable to recover it.
00:27:24.591 [2024-07-24 18:22:17.461655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.591 [2024-07-24 18:22:17.461668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.591 qpair failed and we were unable to recover it.
00:27:24.591 [2024-07-24 18:22:17.462501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.591 [2024-07-24 18:22:17.462531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.591 qpair failed and we were unable to recover it.
00:27:24.591 [2024-07-24 18:22:17.462846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.591 [2024-07-24 18:22:17.462858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.591 qpair failed and we were unable to recover it.
00:27:24.591 [2024-07-24 18:22:17.463019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.591 [2024-07-24 18:22:17.463030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.591 qpair failed and we were unable to recover it.
00:27:24.591 [2024-07-24 18:22:17.463136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.591 [2024-07-24 18:22:17.463176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.591 qpair failed and we were unable to recover it.
00:27:24.591 [2024-07-24 18:22:17.463382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.591 [2024-07-24 18:22:17.463415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.591 qpair failed and we were unable to recover it.
00:27:24.591 [2024-07-24 18:22:17.463685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.591 [2024-07-24 18:22:17.463719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.591 qpair failed and we were unable to recover it.
00:27:24.591 [2024-07-24 18:22:17.464008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.591 [2024-07-24 18:22:17.464040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.591 qpair failed and we were unable to recover it.
00:27:24.591 [2024-07-24 18:22:17.464238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.591 [2024-07-24 18:22:17.464250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.591 qpair failed and we were unable to recover it.
00:27:24.591 [2024-07-24 18:22:17.464417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.592 [2024-07-24 18:22:17.464451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.592 qpair failed and we were unable to recover it.
00:27:24.592 [2024-07-24 18:22:17.464633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.592 [2024-07-24 18:22:17.464668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.592 qpair failed and we were unable to recover it.
00:27:24.592 [2024-07-24 18:22:17.465002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.592 [2024-07-24 18:22:17.465034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.592 qpair failed and we were unable to recover it.
00:27:24.592 [2024-07-24 18:22:17.465169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.592 [2024-07-24 18:22:17.465202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.592 qpair failed and we were unable to recover it.
00:27:24.592 [2024-07-24 18:22:17.465452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.592 [2024-07-24 18:22:17.465483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.592 qpair failed and we were unable to recover it.
00:27:24.592 [2024-07-24 18:22:17.465628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.592 [2024-07-24 18:22:17.465639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.592 qpair failed and we were unable to recover it.
00:27:24.592 [2024-07-24 18:22:17.465739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.592 [2024-07-24 18:22:17.465750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.592 qpair failed and we were unable to recover it.
00:27:24.592 [2024-07-24 18:22:17.465921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.592 [2024-07-24 18:22:17.465932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.592 qpair failed and we were unable to recover it.
00:27:24.592 [2024-07-24 18:22:17.466098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.592 [2024-07-24 18:22:17.466129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.592 qpair failed and we were unable to recover it.
00:27:24.592 [2024-07-24 18:22:17.466295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.592 [2024-07-24 18:22:17.466333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.592 qpair failed and we were unable to recover it.
00:27:24.592 [2024-07-24 18:22:17.466535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.592 [2024-07-24 18:22:17.466567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.592 qpair failed and we were unable to recover it.
00:27:24.592 [2024-07-24 18:22:17.466765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.592 [2024-07-24 18:22:17.466797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.592 qpair failed and we were unable to recover it.
00:27:24.592 [2024-07-24 18:22:17.467001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.592 [2024-07-24 18:22:17.467033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.592 qpair failed and we were unable to recover it.
00:27:24.592 [2024-07-24 18:22:17.467189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.592 [2024-07-24 18:22:17.467200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.592 qpair failed and we were unable to recover it.
00:27:24.592 [2024-07-24 18:22:17.467274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.592 [2024-07-24 18:22:17.467285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.592 qpair failed and we were unable to recover it.
00:27:24.592 [2024-07-24 18:22:17.467454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.592 [2024-07-24 18:22:17.467486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.592 qpair failed and we were unable to recover it.
00:27:24.592 [2024-07-24 18:22:17.467640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.592 [2024-07-24 18:22:17.467671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.592 qpair failed and we were unable to recover it.
00:27:24.592 [2024-07-24 18:22:17.467822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.592 [2024-07-24 18:22:17.467853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.592 qpair failed and we were unable to recover it.
00:27:24.592 [2024-07-24 18:22:17.468067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.592 [2024-07-24 18:22:17.468101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.592 qpair failed and we were unable to recover it.
00:27:24.592 [2024-07-24 18:22:17.468297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.592 [2024-07-24 18:22:17.468329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.592 qpair failed and we were unable to recover it.
00:27:24.592 [2024-07-24 18:22:17.468467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.592 [2024-07-24 18:22:17.468477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.592 qpair failed and we were unable to recover it.
00:27:24.592 [2024-07-24 18:22:17.468614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.592 [2024-07-24 18:22:17.468626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.592 qpair failed and we were unable to recover it.
00:27:24.592 [2024-07-24 18:22:17.468777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.592 [2024-07-24 18:22:17.468789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.592 qpair failed and we were unable to recover it.
00:27:24.592 [2024-07-24 18:22:17.468968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.592 [2024-07-24 18:22:17.469000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.592 qpair failed and we were unable to recover it.
00:27:24.592 [2024-07-24 18:22:17.469138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.592 [2024-07-24 18:22:17.469171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.592 qpair failed and we were unable to recover it.
00:27:24.592 [2024-07-24 18:22:17.469294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.592 [2024-07-24 18:22:17.469325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.592 qpair failed and we were unable to recover it.
00:27:24.592 [2024-07-24 18:22:17.469470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.592 [2024-07-24 18:22:17.469481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.592 qpair failed and we were unable to recover it.
00:27:24.592 [2024-07-24 18:22:17.469647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.592 [2024-07-24 18:22:17.469659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.592 qpair failed and we were unable to recover it.
00:27:24.592 [2024-07-24 18:22:17.469760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.592 [2024-07-24 18:22:17.469771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.592 qpair failed and we were unable to recover it.
00:27:24.592 [2024-07-24 18:22:17.469873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.592 [2024-07-24 18:22:17.469884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.592 qpair failed and we were unable to recover it.
00:27:24.592 [2024-07-24 18:22:17.470111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.592 [2024-07-24 18:22:17.470123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.592 qpair failed and we were unable to recover it.
00:27:24.592 [2024-07-24 18:22:17.470290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.592 [2024-07-24 18:22:17.470301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.592 qpair failed and we were unable to recover it.
00:27:24.592 [2024-07-24 18:22:17.470399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.592 [2024-07-24 18:22:17.470411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.592 qpair failed and we were unable to recover it.
00:27:24.592 [2024-07-24 18:22:17.470572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.592 [2024-07-24 18:22:17.470584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.592 qpair failed and we were unable to recover it.
00:27:24.592 [2024-07-24 18:22:17.470763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.592 [2024-07-24 18:22:17.470774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.592 qpair failed and we were unable to recover it.
00:27:24.592 [2024-07-24 18:22:17.470877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.592 [2024-07-24 18:22:17.470888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.592 qpair failed and we were unable to recover it.
00:27:24.592 [2024-07-24 18:22:17.471131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.592 [2024-07-24 18:22:17.471142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.592 qpair failed and we were unable to recover it.
00:27:24.592 [2024-07-24 18:22:17.471256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.592 [2024-07-24 18:22:17.471293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.592 qpair failed and we were unable to recover it.
00:27:24.593 [2024-07-24 18:22:17.471512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.593 [2024-07-24 18:22:17.471545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.593 qpair failed and we were unable to recover it.
00:27:24.593 [2024-07-24 18:22:17.471720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.593 [2024-07-24 18:22:17.471752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.593 qpair failed and we were unable to recover it.
00:27:24.593 [2024-07-24 18:22:17.471975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.593 [2024-07-24 18:22:17.472006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.593 qpair failed and we were unable to recover it.
00:27:24.593 [2024-07-24 18:22:17.472206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.593 [2024-07-24 18:22:17.472237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.593 qpair failed and we were unable to recover it.
00:27:24.593 [2024-07-24 18:22:17.472386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.593 [2024-07-24 18:22:17.472425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.593 qpair failed and we were unable to recover it.
00:27:24.593 [2024-07-24 18:22:17.472582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.593 [2024-07-24 18:22:17.472594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.593 qpair failed and we were unable to recover it.
00:27:24.593 [2024-07-24 18:22:17.472702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.593 [2024-07-24 18:22:17.472713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.593 qpair failed and we were unable to recover it.
00:27:24.593 [2024-07-24 18:22:17.472946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.593 [2024-07-24 18:22:17.472956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.593 qpair failed and we were unable to recover it.
00:27:24.593 [2024-07-24 18:22:17.473070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.593 [2024-07-24 18:22:17.473081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.593 qpair failed and we were unable to recover it.
00:27:24.593 [2024-07-24 18:22:17.473174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.593 [2024-07-24 18:22:17.473184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.593 qpair failed and we were unable to recover it.
00:27:24.593 [2024-07-24 18:22:17.473277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.593 [2024-07-24 18:22:17.473287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.593 qpair failed and we were unable to recover it.
00:27:24.593 [2024-07-24 18:22:17.473367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.593 [2024-07-24 18:22:17.473380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.593 qpair failed and we were unable to recover it.
00:27:24.593 [2024-07-24 18:22:17.473471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.593 [2024-07-24 18:22:17.473482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.593 qpair failed and we were unable to recover it.
00:27:24.593 [2024-07-24 18:22:17.473662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.593 [2024-07-24 18:22:17.473673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.593 qpair failed and we were unable to recover it.
00:27:24.593 [2024-07-24 18:22:17.473832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.593 [2024-07-24 18:22:17.473863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.593 qpair failed and we were unable to recover it.
00:27:24.593 [2024-07-24 18:22:17.474086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.593 [2024-07-24 18:22:17.474118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.593 qpair failed and we were unable to recover it.
00:27:24.593 [2024-07-24 18:22:17.474261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.593 [2024-07-24 18:22:17.474294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.593 qpair failed and we were unable to recover it.
00:27:24.593 [2024-07-24 18:22:17.474432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.593 [2024-07-24 18:22:17.474442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.593 qpair failed and we were unable to recover it.
00:27:24.593 [2024-07-24 18:22:17.474609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.593 [2024-07-24 18:22:17.474622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.593 qpair failed and we were unable to recover it.
00:27:24.593 [2024-07-24 18:22:17.474724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.593 [2024-07-24 18:22:17.474735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.593 qpair failed and we were unable to recover it.
00:27:24.593 [2024-07-24 18:22:17.474822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.593 [2024-07-24 18:22:17.474833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.593 qpair failed and we were unable to recover it.
00:27:24.593 [2024-07-24 18:22:17.474904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.593 [2024-07-24 18:22:17.474914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.593 qpair failed and we were unable to recover it.
00:27:24.593 [2024-07-24 18:22:17.475067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.593 [2024-07-24 18:22:17.475078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.593 qpair failed and we were unable to recover it.
00:27:24.593 [2024-07-24 18:22:17.475188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.593 [2024-07-24 18:22:17.475199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.593 qpair failed and we were unable to recover it.
00:27:24.593 [2024-07-24 18:22:17.475355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.593 [2024-07-24 18:22:17.475366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.593 qpair failed and we were unable to recover it.
00:27:24.593 [2024-07-24 18:22:17.475471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.593 [2024-07-24 18:22:17.475513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.593 qpair failed and we were unable to recover it.
00:27:24.593 [2024-07-24 18:22:17.475708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.593 [2024-07-24 18:22:17.475740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.593 qpair failed and we were unable to recover it.
00:27:24.593 [2024-07-24 18:22:17.475860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.593 [2024-07-24 18:22:17.475891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.593 qpair failed and we were unable to recover it.
00:27:24.593 [2024-07-24 18:22:17.476099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.593 [2024-07-24 18:22:17.476131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.593 qpair failed and we were unable to recover it.
00:27:24.593 [2024-07-24 18:22:17.476401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.593 [2024-07-24 18:22:17.476434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.593 qpair failed and we were unable to recover it.
00:27:24.593 [2024-07-24 18:22:17.476599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.593 [2024-07-24 18:22:17.476611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.593 qpair failed and we were unable to recover it.
00:27:24.593 [2024-07-24 18:22:17.476845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.593 [2024-07-24 18:22:17.476856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.593 qpair failed and we were unable to recover it. 00:27:24.593 [2024-07-24 18:22:17.476949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.593 [2024-07-24 18:22:17.476960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.593 qpair failed and we were unable to recover it. 00:27:24.593 [2024-07-24 18:22:17.477062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.593 [2024-07-24 18:22:17.477072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.593 qpair failed and we were unable to recover it. 00:27:24.593 [2024-07-24 18:22:17.477170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.593 [2024-07-24 18:22:17.477180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.593 qpair failed and we were unable to recover it. 00:27:24.593 [2024-07-24 18:22:17.477356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.593 [2024-07-24 18:22:17.477366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.593 qpair failed and we were unable to recover it. 00:27:24.593 [2024-07-24 18:22:17.477591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.593 [2024-07-24 18:22:17.477623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.593 qpair failed and we were unable to recover it. 00:27:24.593 [2024-07-24 18:22:17.477762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.593 [2024-07-24 18:22:17.477794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.593 qpair failed and we were unable to recover it. 00:27:24.594 [2024-07-24 18:22:17.477940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.594 [2024-07-24 18:22:17.477971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.594 qpair failed and we were unable to recover it. 00:27:24.594 [2024-07-24 18:22:17.478090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.594 [2024-07-24 18:22:17.478121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.594 qpair failed and we were unable to recover it. 00:27:24.594 [2024-07-24 18:22:17.478422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.594 [2024-07-24 18:22:17.478454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.594 qpair failed and we were unable to recover it. 
00:27:24.594 [2024-07-24 18:22:17.478681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.594 [2024-07-24 18:22:17.478714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.594 qpair failed and we were unable to recover it. 00:27:24.594 [2024-07-24 18:22:17.478928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.594 [2024-07-24 18:22:17.478959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.594 qpair failed and we were unable to recover it. 00:27:24.594 [2024-07-24 18:22:17.479091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.594 [2024-07-24 18:22:17.479124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.594 qpair failed and we were unable to recover it. 00:27:24.594 [2024-07-24 18:22:17.479245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.594 [2024-07-24 18:22:17.479275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.594 qpair failed and we were unable to recover it. 00:27:24.594 [2024-07-24 18:22:17.479426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.594 [2024-07-24 18:22:17.479437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.594 qpair failed and we were unable to recover it. 00:27:24.594 [2024-07-24 18:22:17.479704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.594 [2024-07-24 18:22:17.479716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.594 qpair failed and we were unable to recover it. 00:27:24.594 [2024-07-24 18:22:17.479874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.594 [2024-07-24 18:22:17.479905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.594 qpair failed and we were unable to recover it. 00:27:24.594 [2024-07-24 18:22:17.480053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.594 [2024-07-24 18:22:17.480085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.594 qpair failed and we were unable to recover it. 00:27:24.594 [2024-07-24 18:22:17.480284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.594 [2024-07-24 18:22:17.480315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.594 qpair failed and we were unable to recover it. 00:27:24.594 [2024-07-24 18:22:17.480530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.594 [2024-07-24 18:22:17.480541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.594 qpair failed and we were unable to recover it. 
00:27:24.594 [2024-07-24 18:22:17.480723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.594 [2024-07-24 18:22:17.480759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.594 qpair failed and we were unable to recover it. 00:27:24.594 [2024-07-24 18:22:17.480970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.594 [2024-07-24 18:22:17.481001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.594 qpair failed and we were unable to recover it. 00:27:24.594 [2024-07-24 18:22:17.481208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.594 [2024-07-24 18:22:17.481240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.594 qpair failed and we were unable to recover it. 00:27:24.594 [2024-07-24 18:22:17.481430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.594 [2024-07-24 18:22:17.481460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.594 qpair failed and we were unable to recover it. 00:27:24.594 [2024-07-24 18:22:17.481637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.594 [2024-07-24 18:22:17.481670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.594 qpair failed and we were unable to recover it. 00:27:24.594 [2024-07-24 18:22:17.481850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.594 [2024-07-24 18:22:17.481881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.594 qpair failed and we were unable to recover it. 00:27:24.594 [2024-07-24 18:22:17.482018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.594 [2024-07-24 18:22:17.482048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.594 qpair failed and we were unable to recover it. 00:27:24.594 [2024-07-24 18:22:17.482205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.594 [2024-07-24 18:22:17.482247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.594 qpair failed and we were unable to recover it. 00:27:24.594 [2024-07-24 18:22:17.482401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.594 [2024-07-24 18:22:17.482412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.594 qpair failed and we were unable to recover it. 00:27:24.594 [2024-07-24 18:22:17.482505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.594 [2024-07-24 18:22:17.482515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.594 qpair failed and we were unable to recover it. 
00:27:24.594 [2024-07-24 18:22:17.482748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.594 [2024-07-24 18:22:17.482759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.594 qpair failed and we were unable to recover it. 00:27:24.594 [2024-07-24 18:22:17.482905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.594 [2024-07-24 18:22:17.482916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.594 qpair failed and we were unable to recover it. 00:27:24.594 [2024-07-24 18:22:17.482997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.594 [2024-07-24 18:22:17.483007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.594 qpair failed and we were unable to recover it. 00:27:24.594 [2024-07-24 18:22:17.483089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.594 [2024-07-24 18:22:17.483099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.594 qpair failed and we were unable to recover it. 00:27:24.594 [2024-07-24 18:22:17.483188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.594 [2024-07-24 18:22:17.483198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.594 qpair failed and we were unable to recover it. 00:27:24.594 [2024-07-24 18:22:17.483347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.594 [2024-07-24 18:22:17.483357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.594 qpair failed and we were unable to recover it. 00:27:24.594 [2024-07-24 18:22:17.483437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.594 [2024-07-24 18:22:17.483449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.594 qpair failed and we were unable to recover it. 00:27:24.594 [2024-07-24 18:22:17.483542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.594 [2024-07-24 18:22:17.483553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.594 qpair failed and we were unable to recover it. 00:27:24.594 [2024-07-24 18:22:17.483680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.594 [2024-07-24 18:22:17.483711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.594 qpair failed and we were unable to recover it. 00:27:24.594 [2024-07-24 18:22:17.483928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.595 [2024-07-24 18:22:17.483961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.595 qpair failed and we were unable to recover it. 
00:27:24.595 [2024-07-24 18:22:17.484091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.595 [2024-07-24 18:22:17.484121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.595 qpair failed and we were unable to recover it. 00:27:24.595 [2024-07-24 18:22:17.484322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.595 [2024-07-24 18:22:17.484333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.595 qpair failed and we were unable to recover it. 00:27:24.595 [2024-07-24 18:22:17.484530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.595 [2024-07-24 18:22:17.484563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.595 qpair failed and we were unable to recover it. 00:27:24.595 [2024-07-24 18:22:17.484703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.595 [2024-07-24 18:22:17.484734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.595 qpair failed and we were unable to recover it. 00:27:24.595 [2024-07-24 18:22:17.485018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.595 [2024-07-24 18:22:17.485048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.595 qpair failed and we were unable to recover it. 00:27:24.595 [2024-07-24 18:22:17.485248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.595 [2024-07-24 18:22:17.485259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.595 qpair failed and we were unable to recover it. 00:27:24.595 [2024-07-24 18:22:17.485485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.595 [2024-07-24 18:22:17.485529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.595 qpair failed and we were unable to recover it. 00:27:24.595 [2024-07-24 18:22:17.485791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.595 [2024-07-24 18:22:17.485822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.595 qpair failed and we were unable to recover it. 00:27:24.595 [2024-07-24 18:22:17.485966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.595 [2024-07-24 18:22:17.485999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.595 qpair failed and we were unable to recover it. 00:27:24.595 [2024-07-24 18:22:17.486204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.595 [2024-07-24 18:22:17.486237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.595 qpair failed and we were unable to recover it. 
00:27:24.595 [2024-07-24 18:22:17.486443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.595 [2024-07-24 18:22:17.486474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.595 qpair failed and we were unable to recover it. 00:27:24.595 [2024-07-24 18:22:17.486666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.595 [2024-07-24 18:22:17.486679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.595 qpair failed and we were unable to recover it. 00:27:24.595 [2024-07-24 18:22:17.486823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.595 [2024-07-24 18:22:17.486834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.595 qpair failed and we were unable to recover it. 00:27:24.595 [2024-07-24 18:22:17.486993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.595 [2024-07-24 18:22:17.487003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.595 qpair failed and we were unable to recover it. 00:27:24.595 [2024-07-24 18:22:17.487098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.595 [2024-07-24 18:22:17.487110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.595 qpair failed and we were unable to recover it. 00:27:24.595 [2024-07-24 18:22:17.487218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.595 [2024-07-24 18:22:17.487228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.595 qpair failed and we were unable to recover it. 00:27:24.595 [2024-07-24 18:22:17.487372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.595 [2024-07-24 18:22:17.487382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.595 qpair failed and we were unable to recover it. 00:27:24.595 [2024-07-24 18:22:17.487501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.595 [2024-07-24 18:22:17.487512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.595 qpair failed and we were unable to recover it. 00:27:24.595 [2024-07-24 18:22:17.487615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.595 [2024-07-24 18:22:17.487625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.595 qpair failed and we were unable to recover it. 00:27:24.595 [2024-07-24 18:22:17.487797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.595 [2024-07-24 18:22:17.487807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.595 qpair failed and we were unable to recover it. 
00:27:24.595 [2024-07-24 18:22:17.487962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.595 [2024-07-24 18:22:17.487975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.595 qpair failed and we were unable to recover it. 00:27:24.595 [2024-07-24 18:22:17.488072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.595 [2024-07-24 18:22:17.488083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.595 qpair failed and we were unable to recover it. 00:27:24.595 [2024-07-24 18:22:17.488191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.595 [2024-07-24 18:22:17.488201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.595 qpair failed and we were unable to recover it. 00:27:24.595 [2024-07-24 18:22:17.488292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.595 [2024-07-24 18:22:17.488303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.595 qpair failed and we were unable to recover it. 00:27:24.595 [2024-07-24 18:22:17.488396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.595 [2024-07-24 18:22:17.488406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.595 qpair failed and we were unable to recover it. 00:27:24.595 [2024-07-24 18:22:17.488515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.595 [2024-07-24 18:22:17.488526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.595 qpair failed and we were unable to recover it. 00:27:24.595 [2024-07-24 18:22:17.488611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.595 [2024-07-24 18:22:17.488622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.595 qpair failed and we were unable to recover it. 00:27:24.595 [2024-07-24 18:22:17.488839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.595 [2024-07-24 18:22:17.488850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.595 qpair failed and we were unable to recover it. 00:27:24.595 [2024-07-24 18:22:17.488932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.595 [2024-07-24 18:22:17.488942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.595 qpair failed and we were unable to recover it. 00:27:24.595 [2024-07-24 18:22:17.489132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.595 [2024-07-24 18:22:17.489175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.595 qpair failed and we were unable to recover it. 
00:27:24.595 [2024-07-24 18:22:17.489361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.595 [2024-07-24 18:22:17.489391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.595 qpair failed and we were unable to recover it. 00:27:24.595 [2024-07-24 18:22:17.489530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.595 [2024-07-24 18:22:17.489562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.595 qpair failed and we were unable to recover it. 00:27:24.595 [2024-07-24 18:22:17.489761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.595 [2024-07-24 18:22:17.489812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.595 qpair failed and we were unable to recover it. 00:27:24.595 [2024-07-24 18:22:17.489938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.595 [2024-07-24 18:22:17.489968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.595 qpair failed and we were unable to recover it. 00:27:24.595 [2024-07-24 18:22:17.490092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.595 [2024-07-24 18:22:17.490122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.595 qpair failed and we were unable to recover it. 00:27:24.595 [2024-07-24 18:22:17.490277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.595 [2024-07-24 18:22:17.490322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.595 qpair failed and we were unable to recover it. 00:27:24.595 [2024-07-24 18:22:17.490409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.595 [2024-07-24 18:22:17.490420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.595 qpair failed and we were unable to recover it. 00:27:24.596 [2024-07-24 18:22:17.490500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.596 [2024-07-24 18:22:17.490511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.596 qpair failed and we were unable to recover it. 00:27:24.596 [2024-07-24 18:22:17.490673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.596 [2024-07-24 18:22:17.490684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.596 qpair failed and we were unable to recover it. 00:27:24.596 [2024-07-24 18:22:17.490851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.596 [2024-07-24 18:22:17.490881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.596 qpair failed and we were unable to recover it. 
00:27:24.596 [2024-07-24 18:22:17.491080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.596 [2024-07-24 18:22:17.491110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.596 qpair failed and we were unable to recover it. 00:27:24.596 [2024-07-24 18:22:17.491243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.596 [2024-07-24 18:22:17.491274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.596 qpair failed and we were unable to recover it. 00:27:24.596 [2024-07-24 18:22:17.491387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.596 [2024-07-24 18:22:17.491418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.596 qpair failed and we were unable to recover it. 00:27:24.596 [2024-07-24 18:22:17.491530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.596 [2024-07-24 18:22:17.491540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.596 qpair failed and we were unable to recover it. 00:27:24.596 [2024-07-24 18:22:17.491692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.596 [2024-07-24 18:22:17.491702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.596 qpair failed and we were unable to recover it. 00:27:24.596 [2024-07-24 18:22:17.491784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.596 [2024-07-24 18:22:17.491794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.596 qpair failed and we were unable to recover it. 00:27:24.596 [2024-07-24 18:22:17.491872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.596 [2024-07-24 18:22:17.491882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.596 qpair failed and we were unable to recover it. 00:27:24.596 [2024-07-24 18:22:17.491963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.596 [2024-07-24 18:22:17.491974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.596 qpair failed and we were unable to recover it. 00:27:24.596 [2024-07-24 18:22:17.492110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.596 [2024-07-24 18:22:17.492120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.596 qpair failed and we were unable to recover it. 00:27:24.596 [2024-07-24 18:22:17.492224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.596 [2024-07-24 18:22:17.492236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.596 qpair failed and we were unable to recover it. 
00:27:24.596 [2024-07-24 18:22:17.492399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.596 [2024-07-24 18:22:17.492410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.596 qpair failed and we were unable to recover it. 00:27:24.596 [2024-07-24 18:22:17.492565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.596 [2024-07-24 18:22:17.492576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.596 qpair failed and we were unable to recover it. 00:27:24.596 [2024-07-24 18:22:17.492737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.596 [2024-07-24 18:22:17.492748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.596 qpair failed and we were unable to recover it. 00:27:24.596 [2024-07-24 18:22:17.492857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.596 [2024-07-24 18:22:17.492867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.596 qpair failed and we were unable to recover it. 00:27:24.596 [2024-07-24 18:22:17.493015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.596 [2024-07-24 18:22:17.493025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.596 qpair failed and we were unable to recover it. 00:27:24.596 [2024-07-24 18:22:17.493170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.596 [2024-07-24 18:22:17.493180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.596 qpair failed and we were unable to recover it. 00:27:24.596 [2024-07-24 18:22:17.493255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.596 [2024-07-24 18:22:17.493265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.596 qpair failed and we were unable to recover it. 00:27:24.596 [2024-07-24 18:22:17.493400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.596 [2024-07-24 18:22:17.493410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.596 qpair failed and we were unable to recover it. 00:27:24.596 [2024-07-24 18:22:17.493513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.596 [2024-07-24 18:22:17.493524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.596 qpair failed and we were unable to recover it. 00:27:24.596 [2024-07-24 18:22:17.493684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.596 [2024-07-24 18:22:17.493694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.596 qpair failed and we were unable to recover it. 
00:27:24.596 [2024-07-24 18:22:17.493768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.596 [2024-07-24 18:22:17.493781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.596 qpair failed and we were unable to recover it. 00:27:24.596 [2024-07-24 18:22:17.493879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.596 [2024-07-24 18:22:17.493889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.596 qpair failed and we were unable to recover it. 00:27:24.596 [2024-07-24 18:22:17.493964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.596 [2024-07-24 18:22:17.493974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.596 qpair failed and we were unable to recover it. 00:27:24.596 [2024-07-24 18:22:17.494066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.596 [2024-07-24 18:22:17.494076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.596 qpair failed and we were unable to recover it. 00:27:24.596 [2024-07-24 18:22:17.494172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.596 [2024-07-24 18:22:17.494182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.596 qpair failed and we were unable to recover it. 00:27:24.596 [2024-07-24 18:22:17.494328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.596 [2024-07-24 18:22:17.494338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.596 qpair failed and we were unable to recover it. 00:27:24.596 [2024-07-24 18:22:17.494504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.596 [2024-07-24 18:22:17.494515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.596 qpair failed and we were unable to recover it. 00:27:24.596 [2024-07-24 18:22:17.494735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.596 [2024-07-24 18:22:17.494765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.596 qpair failed and we were unable to recover it. 00:27:24.596 [2024-07-24 18:22:17.494892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.596 [2024-07-24 18:22:17.494923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.596 qpair failed and we were unable to recover it. 00:27:24.596 [2024-07-24 18:22:17.495068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.596 [2024-07-24 18:22:17.495099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.596 qpair failed and we were unable to recover it. 
00:27:24.596 [2024-07-24 18:22:17.495217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.596 [2024-07-24 18:22:17.495227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.596 qpair failed and we were unable to recover it. 00:27:24.596 [2024-07-24 18:22:17.495387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.596 [2024-07-24 18:22:17.495397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.596 qpair failed and we were unable to recover it. 00:27:24.596 [2024-07-24 18:22:17.495485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.596 [2024-07-24 18:22:17.495501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.596 qpair failed and we were unable to recover it. 00:27:24.596 [2024-07-24 18:22:17.495626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.596 [2024-07-24 18:22:17.495637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.596 qpair failed and we were unable to recover it. 00:27:24.596 [2024-07-24 18:22:17.495723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.596 [2024-07-24 18:22:17.495734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.596 qpair failed and we were unable to recover it. 00:27:24.597 [2024-07-24 18:22:17.495818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.597 [2024-07-24 18:22:17.495830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.597 qpair failed and we were unable to recover it. 00:27:24.597 [2024-07-24 18:22:17.495991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.597 [2024-07-24 18:22:17.496002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.597 qpair failed and we were unable to recover it. 00:27:24.597 [2024-07-24 18:22:17.496084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.597 [2024-07-24 18:22:17.496094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.597 qpair failed and we were unable to recover it. 00:27:24.597 [2024-07-24 18:22:17.496317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.597 [2024-07-24 18:22:17.496348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.597 qpair failed and we were unable to recover it. 00:27:24.597 [2024-07-24 18:22:17.496538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.597 [2024-07-24 18:22:17.496570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.597 qpair failed and we were unable to recover it. 
00:27:24.597 [2024-07-24 18:22:17.496767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.597 [2024-07-24 18:22:17.496799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.597 qpair failed and we were unable to recover it. 00:27:24.597 [2024-07-24 18:22:17.496920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.597 [2024-07-24 18:22:17.496951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.597 qpair failed and we were unable to recover it. 00:27:24.597 [2024-07-24 18:22:17.497214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.597 [2024-07-24 18:22:17.497245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.597 qpair failed and we were unable to recover it. 00:27:24.597 [2024-07-24 18:22:17.497407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.597 [2024-07-24 18:22:17.497417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.597 qpair failed and we were unable to recover it. 00:27:24.597 [2024-07-24 18:22:17.497526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.597 [2024-07-24 18:22:17.497537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.597 qpair failed and we were unable to recover it. 00:27:24.597 [2024-07-24 18:22:17.497612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.597 [2024-07-24 18:22:17.497622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.597 qpair failed and we were unable to recover it. 00:27:24.597 [2024-07-24 18:22:17.497704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.597 [2024-07-24 18:22:17.497715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.597 qpair failed and we were unable to recover it. 00:27:24.597 [2024-07-24 18:22:17.497873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.597 [2024-07-24 18:22:17.497904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.597 qpair failed and we were unable to recover it. 00:27:24.597 [2024-07-24 18:22:17.498029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.597 [2024-07-24 18:22:17.498060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.597 qpair failed and we were unable to recover it. 00:27:24.597 [2024-07-24 18:22:17.498249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.597 [2024-07-24 18:22:17.498280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.597 qpair failed and we were unable to recover it. 
00:27:24.597 [2024-07-24 18:22:17.498418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.597 [2024-07-24 18:22:17.498428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.597 qpair failed and we were unable to recover it.
[... the same connect() failed, errno = 111 / sock connection error pair for tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 repeats ~115 more times (18:22:17.498516 through 18:22:17.521941), each attempt ending with "qpair failed and we were unable to recover it." ...]
00:27:24.600 [2024-07-24 18:22:17.522138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.600 [2024-07-24 18:22:17.522210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:24.600 qpair failed and we were unable to recover it.
00:27:24.600 [2024-07-24 18:22:17.522510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.600 [2024-07-24 18:22:17.522553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.600 qpair failed and we were unable to recover it.
[... the same connect() failed, errno = 111 / sock connection error pair for tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 repeats ~90 more times (18:22:17.522747 through 18:22:17.540537), each attempt ending with "qpair failed and we were unable to recover it." ...]
00:27:24.603 [2024-07-24 18:22:17.540753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.603 [2024-07-24 18:22:17.540783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.603 qpair failed and we were unable to recover it.
00:27:24.603 [2024-07-24 18:22:17.541011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.603 [2024-07-24 18:22:17.541041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.603 qpair failed and we were unable to recover it. 00:27:24.603 [2024-07-24 18:22:17.541274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.603 [2024-07-24 18:22:17.541305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.603 qpair failed and we were unable to recover it. 00:27:24.603 [2024-07-24 18:22:17.541591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.603 [2024-07-24 18:22:17.541607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.603 qpair failed and we were unable to recover it. 00:27:24.603 [2024-07-24 18:22:17.541833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.603 [2024-07-24 18:22:17.541848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.603 qpair failed and we were unable to recover it. 00:27:24.603 [2024-07-24 18:22:17.542028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.603 [2024-07-24 18:22:17.542044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.603 qpair failed and we were unable to recover it. 00:27:24.603 [2024-07-24 18:22:17.542262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.603 [2024-07-24 18:22:17.542293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.603 qpair failed and we were unable to recover it. 00:27:24.603 [2024-07-24 18:22:17.542594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.603 [2024-07-24 18:22:17.542626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.603 qpair failed and we were unable to recover it. 00:27:24.603 [2024-07-24 18:22:17.542833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.603 [2024-07-24 18:22:17.542864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.603 qpair failed and we were unable to recover it. 00:27:24.603 [2024-07-24 18:22:17.543076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.603 [2024-07-24 18:22:17.543107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.603 qpair failed and we were unable to recover it. 00:27:24.603 [2024-07-24 18:22:17.543365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.603 [2024-07-24 18:22:17.543396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.603 qpair failed and we were unable to recover it. 
00:27:24.603 [2024-07-24 18:22:17.543625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.603 [2024-07-24 18:22:17.543641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.603 qpair failed and we were unable to recover it. 00:27:24.603 [2024-07-24 18:22:17.543836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.603 [2024-07-24 18:22:17.543851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.603 qpair failed and we were unable to recover it. 00:27:24.603 [2024-07-24 18:22:17.544039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.603 [2024-07-24 18:22:17.544055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.603 qpair failed and we were unable to recover it. 00:27:24.603 [2024-07-24 18:22:17.544252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.603 [2024-07-24 18:22:17.544283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.603 qpair failed and we were unable to recover it. 00:27:24.603 [2024-07-24 18:22:17.544522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.603 [2024-07-24 18:22:17.544554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.603 qpair failed and we were unable to recover it. 00:27:24.603 [2024-07-24 18:22:17.544715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.603 [2024-07-24 18:22:17.544746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.603 qpair failed and we were unable to recover it. 00:27:24.603 [2024-07-24 18:22:17.544893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.603 [2024-07-24 18:22:17.544923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.603 qpair failed and we were unable to recover it. 00:27:24.603 [2024-07-24 18:22:17.545088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.603 [2024-07-24 18:22:17.545118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.603 qpair failed and we were unable to recover it. 00:27:24.603 [2024-07-24 18:22:17.545383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.603 [2024-07-24 18:22:17.545398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.603 qpair failed and we were unable to recover it. 00:27:24.603 [2024-07-24 18:22:17.545631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.603 [2024-07-24 18:22:17.545646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.603 qpair failed and we were unable to recover it. 
00:27:24.603 [2024-07-24 18:22:17.545828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.603 [2024-07-24 18:22:17.545844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.603 qpair failed and we were unable to recover it. 00:27:24.603 [2024-07-24 18:22:17.546022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.603 [2024-07-24 18:22:17.546038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.603 qpair failed and we were unable to recover it. 00:27:24.603 [2024-07-24 18:22:17.546295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.603 [2024-07-24 18:22:17.546310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.603 qpair failed and we were unable to recover it. 00:27:24.603 [2024-07-24 18:22:17.546423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.603 [2024-07-24 18:22:17.546438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.603 qpair failed and we were unable to recover it. 00:27:24.603 [2024-07-24 18:22:17.546605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.603 [2024-07-24 18:22:17.546621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.603 qpair failed and we were unable to recover it. 00:27:24.603 [2024-07-24 18:22:17.546753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.603 [2024-07-24 18:22:17.546768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.603 qpair failed and we were unable to recover it. 00:27:24.603 [2024-07-24 18:22:17.546938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.603 [2024-07-24 18:22:17.546953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.603 qpair failed and we were unable to recover it. 00:27:24.603 [2024-07-24 18:22:17.547162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.603 [2024-07-24 18:22:17.547177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.603 qpair failed and we were unable to recover it. 00:27:24.603 [2024-07-24 18:22:17.547410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.603 [2024-07-24 18:22:17.547426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.603 qpair failed and we were unable to recover it. 00:27:24.604 [2024-07-24 18:22:17.547669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.604 [2024-07-24 18:22:17.547685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.604 qpair failed and we were unable to recover it. 
00:27:24.604 [2024-07-24 18:22:17.547866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.604 [2024-07-24 18:22:17.547882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.604 qpair failed and we were unable to recover it. 00:27:24.604 [2024-07-24 18:22:17.548043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.604 [2024-07-24 18:22:17.548059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.604 qpair failed and we were unable to recover it. 00:27:24.604 [2024-07-24 18:22:17.548162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.604 [2024-07-24 18:22:17.548178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.604 qpair failed and we were unable to recover it. 00:27:24.604 [2024-07-24 18:22:17.548425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.604 [2024-07-24 18:22:17.548440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.604 qpair failed and we were unable to recover it. 00:27:24.604 [2024-07-24 18:22:17.548651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.604 [2024-07-24 18:22:17.548666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.604 qpair failed and we were unable to recover it. 00:27:24.604 [2024-07-24 18:22:17.548838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.604 [2024-07-24 18:22:17.548853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.604 qpair failed and we were unable to recover it. 00:27:24.604 [2024-07-24 18:22:17.548968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.604 [2024-07-24 18:22:17.548984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.604 qpair failed and we were unable to recover it. 00:27:24.604 [2024-07-24 18:22:17.549220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.604 [2024-07-24 18:22:17.549236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.604 qpair failed and we were unable to recover it. 00:27:24.604 [2024-07-24 18:22:17.549418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.604 [2024-07-24 18:22:17.549433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.604 qpair failed and we were unable to recover it. 00:27:24.604 [2024-07-24 18:22:17.549660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.604 [2024-07-24 18:22:17.549676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.604 qpair failed and we were unable to recover it. 
00:27:24.604 [2024-07-24 18:22:17.549787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.604 [2024-07-24 18:22:17.549803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.604 qpair failed and we were unable to recover it. 00:27:24.604 [2024-07-24 18:22:17.549928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.604 [2024-07-24 18:22:17.549943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.604 qpair failed and we were unable to recover it. 00:27:24.604 [2024-07-24 18:22:17.550115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.604 [2024-07-24 18:22:17.550131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.604 qpair failed and we were unable to recover it. 00:27:24.604 [2024-07-24 18:22:17.550358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.604 [2024-07-24 18:22:17.550379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.604 qpair failed and we were unable to recover it. 00:27:24.604 [2024-07-24 18:22:17.550558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.604 [2024-07-24 18:22:17.550575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.604 qpair failed and we were unable to recover it. 00:27:24.604 [2024-07-24 18:22:17.550715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.604 [2024-07-24 18:22:17.550729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.604 qpair failed and we were unable to recover it. 00:27:24.604 [2024-07-24 18:22:17.550907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.604 [2024-07-24 18:22:17.550922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.604 qpair failed and we were unable to recover it. 00:27:24.604 [2024-07-24 18:22:17.551157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.604 [2024-07-24 18:22:17.551172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.604 qpair failed and we were unable to recover it. 00:27:24.604 [2024-07-24 18:22:17.551280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.604 [2024-07-24 18:22:17.551295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.604 qpair failed and we were unable to recover it. 00:27:24.604 [2024-07-24 18:22:17.551487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.604 [2024-07-24 18:22:17.551522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.604 qpair failed and we were unable to recover it. 
00:27:24.604 [2024-07-24 18:22:17.551707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.604 [2024-07-24 18:22:17.551722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.604 qpair failed and we were unable to recover it. 00:27:24.604 [2024-07-24 18:22:17.551878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.604 [2024-07-24 18:22:17.551894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.604 qpair failed and we were unable to recover it. 00:27:24.604 [2024-07-24 18:22:17.552060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.604 [2024-07-24 18:22:17.552075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.604 qpair failed and we were unable to recover it. 00:27:24.604 [2024-07-24 18:22:17.552259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.604 [2024-07-24 18:22:17.552275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.604 qpair failed and we were unable to recover it. 00:27:24.604 [2024-07-24 18:22:17.552523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.604 [2024-07-24 18:22:17.552539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.604 qpair failed and we were unable to recover it. 00:27:24.604 [2024-07-24 18:22:17.552717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.604 [2024-07-24 18:22:17.552732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.604 qpair failed and we were unable to recover it. 00:27:24.604 [2024-07-24 18:22:17.552898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.604 [2024-07-24 18:22:17.552917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.604 qpair failed and we were unable to recover it. 00:27:24.604 [2024-07-24 18:22:17.553082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.604 [2024-07-24 18:22:17.553097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.604 qpair failed and we were unable to recover it. 00:27:24.604 [2024-07-24 18:22:17.553359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.604 [2024-07-24 18:22:17.553374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.604 qpair failed and we were unable to recover it. 00:27:24.604 [2024-07-24 18:22:17.553551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.604 [2024-07-24 18:22:17.553567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.604 qpair failed and we were unable to recover it. 
00:27:24.604 [2024-07-24 18:22:17.553791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.604 [2024-07-24 18:22:17.553807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.604 qpair failed and we were unable to recover it. 00:27:24.604 [2024-07-24 18:22:17.554000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.604 [2024-07-24 18:22:17.554015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.604 qpair failed and we were unable to recover it. 00:27:24.604 [2024-07-24 18:22:17.554127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.604 [2024-07-24 18:22:17.554142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.604 qpair failed and we were unable to recover it. 00:27:24.604 [2024-07-24 18:22:17.554417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.604 [2024-07-24 18:22:17.554432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.604 qpair failed and we were unable to recover it. 00:27:24.604 [2024-07-24 18:22:17.554685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.604 [2024-07-24 18:22:17.554700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.604 qpair failed and we were unable to recover it. 00:27:24.604 [2024-07-24 18:22:17.554821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.604 [2024-07-24 18:22:17.554836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.604 qpair failed and we were unable to recover it. 00:27:24.604 [2024-07-24 18:22:17.555087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.604 [2024-07-24 18:22:17.555118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.604 qpair failed and we were unable to recover it. 00:27:24.605 [2024-07-24 18:22:17.555374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.605 [2024-07-24 18:22:17.555404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.605 qpair failed and we were unable to recover it. 00:27:24.605 [2024-07-24 18:22:17.555664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.605 [2024-07-24 18:22:17.555680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.605 qpair failed and we were unable to recover it. 00:27:24.605 [2024-07-24 18:22:17.555865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.605 [2024-07-24 18:22:17.555880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.605 qpair failed and we were unable to recover it. 
00:27:24.605 [2024-07-24 18:22:17.556109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.605 [2024-07-24 18:22:17.556124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.605 qpair failed and we were unable to recover it. 00:27:24.605 [2024-07-24 18:22:17.556329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.605 [2024-07-24 18:22:17.556345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.605 qpair failed and we were unable to recover it. 00:27:24.605 [2024-07-24 18:22:17.556627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.605 [2024-07-24 18:22:17.556643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.605 qpair failed and we were unable to recover it. 00:27:24.605 [2024-07-24 18:22:17.556831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.605 [2024-07-24 18:22:17.556847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.605 qpair failed and we were unable to recover it. 00:27:24.605 [2024-07-24 18:22:17.557025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.605 [2024-07-24 18:22:17.557040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.605 qpair failed and we were unable to recover it. 00:27:24.605 [2024-07-24 18:22:17.557152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.605 [2024-07-24 18:22:17.557167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.605 qpair failed and we were unable to recover it. 00:27:24.605 [2024-07-24 18:22:17.557303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.605 [2024-07-24 18:22:17.557318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.605 qpair failed and we were unable to recover it. 00:27:24.605 [2024-07-24 18:22:17.557505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.605 [2024-07-24 18:22:17.557520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.605 qpair failed and we were unable to recover it. 00:27:24.605 [2024-07-24 18:22:17.557623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.605 [2024-07-24 18:22:17.557639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.605 qpair failed and we were unable to recover it. 00:27:24.605 [2024-07-24 18:22:17.557809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.605 [2024-07-24 18:22:17.557824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.605 qpair failed and we were unable to recover it. 
00:27:24.605 [2024-07-24 18:22:17.557983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.605 [2024-07-24 18:22:17.557998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.605 qpair failed and we were unable to recover it. 00:27:24.605 [2024-07-24 18:22:17.558295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.605 [2024-07-24 18:22:17.558310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.605 qpair failed and we were unable to recover it. 00:27:24.605 [2024-07-24 18:22:17.558517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.605 [2024-07-24 18:22:17.558532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.605 qpair failed and we were unable to recover it. 00:27:24.605 [2024-07-24 18:22:17.558728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.605 [2024-07-24 18:22:17.558747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.605 qpair failed and we were unable to recover it. 00:27:24.605 [2024-07-24 18:22:17.558858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.605 [2024-07-24 18:22:17.558873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.605 qpair failed and we were unable to recover it. 00:27:24.605 [2024-07-24 18:22:17.559053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.605 [2024-07-24 18:22:17.559069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.605 qpair failed and we were unable to recover it. 00:27:24.605 [2024-07-24 18:22:17.559192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.605 [2024-07-24 18:22:17.559207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.605 qpair failed and we were unable to recover it. 00:27:24.605 [2024-07-24 18:22:17.559432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.605 [2024-07-24 18:22:17.559447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.605 qpair failed and we were unable to recover it. 00:27:24.605 [2024-07-24 18:22:17.559632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.605 [2024-07-24 18:22:17.559649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.605 qpair failed and we were unable to recover it. 00:27:24.605 [2024-07-24 18:22:17.559862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.605 [2024-07-24 18:22:17.559877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.605 qpair failed and we were unable to recover it. 
00:27:24.605 [2024-07-24 18:22:17.560042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.605 [2024-07-24 18:22:17.560057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.605 qpair failed and we were unable to recover it. 00:27:24.605 [2024-07-24 18:22:17.560172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.605 [2024-07-24 18:22:17.560188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.605 qpair failed and we were unable to recover it. 00:27:24.605 [2024-07-24 18:22:17.560311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.605 [2024-07-24 18:22:17.560326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.605 qpair failed and we were unable to recover it. 00:27:24.605 [2024-07-24 18:22:17.560564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.605 [2024-07-24 18:22:17.560580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.605 qpair failed and we were unable to recover it. 00:27:24.605 [2024-07-24 18:22:17.560744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.605 [2024-07-24 18:22:17.560759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.605 qpair failed and we were unable to recover it. 00:27:24.605 [2024-07-24 18:22:17.560869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.605 [2024-07-24 18:22:17.560885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.605 qpair failed and we were unable to recover it. 00:27:24.605 [2024-07-24 18:22:17.561017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.605 [2024-07-24 18:22:17.561032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.605 qpair failed and we were unable to recover it. 00:27:24.605 [2024-07-24 18:22:17.561347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.605 [2024-07-24 18:22:17.561363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.605 qpair failed and we were unable to recover it. 00:27:24.605 [2024-07-24 18:22:17.561484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.605 [2024-07-24 18:22:17.561506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.605 qpair failed and we were unable to recover it. 00:27:24.605 [2024-07-24 18:22:17.561675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.605 [2024-07-24 18:22:17.561690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.605 qpair failed and we were unable to recover it. 
00:27:24.605 [2024-07-24 18:22:17.561870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.605 [2024-07-24 18:22:17.561901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.605 qpair failed and we were unable to recover it. 00:27:24.605 [2024-07-24 18:22:17.562208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.605 [2024-07-24 18:22:17.562239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.605 qpair failed and we were unable to recover it. 00:27:24.605 [2024-07-24 18:22:17.562496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.605 [2024-07-24 18:22:17.562512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.605 qpair failed and we were unable to recover it. 00:27:24.605 [2024-07-24 18:22:17.562625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.605 [2024-07-24 18:22:17.562640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.605 qpair failed and we were unable to recover it. 00:27:24.605 [2024-07-24 18:22:17.562865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.605 [2024-07-24 18:22:17.562881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.605 qpair failed and we were unable to recover it. 00:27:24.606 [2024-07-24 18:22:17.563141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.606 [2024-07-24 18:22:17.563172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.606 qpair failed and we were unable to recover it. 00:27:24.606 [2024-07-24 18:22:17.563330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.606 [2024-07-24 18:22:17.563361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.606 qpair failed and we were unable to recover it. 00:27:24.606 [2024-07-24 18:22:17.563564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.606 [2024-07-24 18:22:17.563596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.606 qpair failed and we were unable to recover it. 00:27:24.606 [2024-07-24 18:22:17.563809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.606 [2024-07-24 18:22:17.563825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.606 qpair failed and we were unable to recover it. 00:27:24.606 [2024-07-24 18:22:17.564061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.606 [2024-07-24 18:22:17.564076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.606 qpair failed and we were unable to recover it. 
00:27:24.606 [2024-07-24 18:22:17.564329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.606 [2024-07-24 18:22:17.564347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.606 qpair failed and we were unable to recover it. 00:27:24.606 [2024-07-24 18:22:17.564514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.606 [2024-07-24 18:22:17.564529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.606 qpair failed and we were unable to recover it. 00:27:24.606 [2024-07-24 18:22:17.564727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.606 [2024-07-24 18:22:17.564742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.606 qpair failed and we were unable to recover it. 00:27:24.606 [2024-07-24 18:22:17.564897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.606 [2024-07-24 18:22:17.564912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.606 qpair failed and we were unable to recover it. 00:27:24.606 [2024-07-24 18:22:17.565038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.606 [2024-07-24 18:22:17.565068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.606 qpair failed and we were unable to recover it. 00:27:24.606 [2024-07-24 18:22:17.565268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.606 [2024-07-24 18:22:17.565299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.606 qpair failed and we were unable to recover it. 00:27:24.606 [2024-07-24 18:22:17.565559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.606 [2024-07-24 18:22:17.565592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.606 qpair failed and we were unable to recover it. 00:27:24.606 [2024-07-24 18:22:17.565791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.606 [2024-07-24 18:22:17.565821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.606 qpair failed and we were unable to recover it. 00:27:24.606 [2024-07-24 18:22:17.566048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.606 [2024-07-24 18:22:17.566079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.606 qpair failed and we were unable to recover it. 00:27:24.606 [2024-07-24 18:22:17.566299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.606 [2024-07-24 18:22:17.566315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.606 qpair failed and we were unable to recover it. 
00:27:24.606 [2024-07-24 18:22:17.566580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.606 [2024-07-24 18:22:17.566595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.606 qpair failed and we were unable to recover it. 00:27:24.606 [2024-07-24 18:22:17.566772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.606 [2024-07-24 18:22:17.566787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.606 qpair failed and we were unable to recover it. 00:27:24.606 [2024-07-24 18:22:17.566963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.606 [2024-07-24 18:22:17.566978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.606 qpair failed and we were unable to recover it. 00:27:24.606 [2024-07-24 18:22:17.567097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.606 [2024-07-24 18:22:17.567113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.606 qpair failed and we were unable to recover it. 00:27:24.606 [2024-07-24 18:22:17.567307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.606 [2024-07-24 18:22:17.567329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:24.606 qpair failed and we were unable to recover it. 00:27:24.606 [2024-07-24 18:22:17.567617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.606 [2024-07-24 18:22:17.567651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:24.606 qpair failed and we were unable to recover it. 00:27:24.606 [2024-07-24 18:22:17.567793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.606 [2024-07-24 18:22:17.567823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:24.606 qpair failed and we were unable to recover it. 00:27:24.606 [2024-07-24 18:22:17.567949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.606 [2024-07-24 18:22:17.567978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:24.606 qpair failed and we were unable to recover it. 00:27:24.606 [2024-07-24 18:22:17.568232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.606 [2024-07-24 18:22:17.568263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:24.606 qpair failed and we were unable to recover it. 00:27:24.606 [2024-07-24 18:22:17.568407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.606 [2024-07-24 18:22:17.568436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:24.606 qpair failed and we were unable to recover it. 
00:27:24.606 [2024-07-24 18:22:17.568661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.606 [2024-07-24 18:22:17.568676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420
00:27:24.606 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error triplet repeats dozens of times for tqpair=0x7f7844000b90 between 18:22:17.568846 and 18:22:17.585066, each attempt ending "qpair failed and we were unable to recover it." ...]
00:27:24.608 [2024-07-24 18:22:17.585257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.608 [2024-07-24 18:22:17.585335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.608 qpair failed and we were unable to recover it.
[... the same failure triplet repeats dozens of times for tqpair=0x7f783c000b90 between 18:22:17.585586 and 18:22:17.593346 ...]
00:27:24.609 [2024-07-24 18:22:17.593550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.609 [2024-07-24 18:22:17.593619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:24.609 qpair failed and we were unable to recover it.
[... the same failure triplet repeats for tqpair=0x15ddf30 between 18:22:17.593854 and 18:22:17.597197 ...]
00:27:24.610 [2024-07-24 18:22:17.597445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.610 [2024-07-24 18:22:17.597457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.610 qpair failed and we were unable to recover it.
[... the same failure triplet repeats dozens of times for tqpair=0x7f783c000b90 between 18:22:17.597645 and 18:22:17.611761 ...]
00:27:24.611 [2024-07-24 18:22:17.611914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.611 [2024-07-24 18:22:17.611924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.611 qpair failed and we were unable to recover it.
00:27:24.612 [2024-07-24 18:22:17.612027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.612 [2024-07-24 18:22:17.612036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.612 qpair failed and we were unable to recover it. 00:27:24.612 [2024-07-24 18:22:17.612313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.612 [2024-07-24 18:22:17.612323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.612 qpair failed and we were unable to recover it. 00:27:24.612 [2024-07-24 18:22:17.612507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.612 [2024-07-24 18:22:17.612517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.612 qpair failed and we were unable to recover it. 00:27:24.612 [2024-07-24 18:22:17.612767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.612 [2024-07-24 18:22:17.612777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.612 qpair failed and we were unable to recover it. 00:27:24.612 [2024-07-24 18:22:17.612890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.612 [2024-07-24 18:22:17.612900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.612 qpair failed and we were unable to recover it. 00:27:24.612 [2024-07-24 18:22:17.612990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.612 [2024-07-24 18:22:17.613000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.612 qpair failed and we were unable to recover it. 00:27:24.612 [2024-07-24 18:22:17.613230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.612 [2024-07-24 18:22:17.613239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.612 qpair failed and we were unable to recover it. 00:27:24.612 [2024-07-24 18:22:17.613334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.612 [2024-07-24 18:22:17.613344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.612 qpair failed and we were unable to recover it. 00:27:24.612 [2024-07-24 18:22:17.613499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.612 [2024-07-24 18:22:17.613510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.612 qpair failed and we were unable to recover it. 00:27:24.612 [2024-07-24 18:22:17.613742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.612 [2024-07-24 18:22:17.613751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.612 qpair failed and we were unable to recover it. 
00:27:24.612 [2024-07-24 18:22:17.613856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.612 [2024-07-24 18:22:17.613866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.612 qpair failed and we were unable to recover it. 00:27:24.612 [2024-07-24 18:22:17.613969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.612 [2024-07-24 18:22:17.613979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.612 qpair failed and we were unable to recover it. 00:27:24.612 [2024-07-24 18:22:17.614246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.612 [2024-07-24 18:22:17.614256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.612 qpair failed and we were unable to recover it. 00:27:24.612 [2024-07-24 18:22:17.614446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.612 [2024-07-24 18:22:17.614456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.612 qpair failed and we were unable to recover it. 00:27:24.612 [2024-07-24 18:22:17.614614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.612 [2024-07-24 18:22:17.614624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.612 qpair failed and we were unable to recover it. 00:27:24.612 [2024-07-24 18:22:17.614722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.612 [2024-07-24 18:22:17.614732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.612 qpair failed and we were unable to recover it. 00:27:24.612 [2024-07-24 18:22:17.614842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.612 [2024-07-24 18:22:17.614852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.612 qpair failed and we were unable to recover it. 00:27:24.612 [2024-07-24 18:22:17.615028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.612 [2024-07-24 18:22:17.615057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.612 qpair failed and we were unable to recover it. 00:27:24.613 [2024-07-24 18:22:17.615263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.613 [2024-07-24 18:22:17.615293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.613 qpair failed and we were unable to recover it. 00:27:24.613 [2024-07-24 18:22:17.615502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.613 [2024-07-24 18:22:17.615534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.613 qpair failed and we were unable to recover it. 
00:27:24.613 [2024-07-24 18:22:17.615723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.613 [2024-07-24 18:22:17.615733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.613 qpair failed and we were unable to recover it. 00:27:24.613 [2024-07-24 18:22:17.615897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.613 [2024-07-24 18:22:17.615907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.613 qpair failed and we were unable to recover it. 00:27:24.613 [2024-07-24 18:22:17.616082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.613 [2024-07-24 18:22:17.616091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.613 qpair failed and we were unable to recover it. 00:27:24.613 [2024-07-24 18:22:17.616345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.613 [2024-07-24 18:22:17.616354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.613 qpair failed and we were unable to recover it. 00:27:24.613 [2024-07-24 18:22:17.616531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.613 [2024-07-24 18:22:17.616541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.613 qpair failed and we were unable to recover it. 00:27:24.613 [2024-07-24 18:22:17.616734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.613 [2024-07-24 18:22:17.616744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.613 qpair failed and we were unable to recover it. 00:27:24.613 [2024-07-24 18:22:17.616865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.613 [2024-07-24 18:22:17.616874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.613 qpair failed and we were unable to recover it. 00:27:24.613 [2024-07-24 18:22:17.617040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.613 [2024-07-24 18:22:17.617051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.613 qpair failed and we were unable to recover it. 00:27:24.613 [2024-07-24 18:22:17.617211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.613 [2024-07-24 18:22:17.617221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.613 qpair failed and we were unable to recover it. 00:27:24.613 [2024-07-24 18:22:17.617315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.613 [2024-07-24 18:22:17.617324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.613 qpair failed and we were unable to recover it. 
00:27:24.613 [2024-07-24 18:22:17.617501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.613 [2024-07-24 18:22:17.617511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.613 qpair failed and we were unable to recover it. 00:27:24.613 [2024-07-24 18:22:17.617695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.613 [2024-07-24 18:22:17.617705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.613 qpair failed and we were unable to recover it. 00:27:24.613 [2024-07-24 18:22:17.617869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.613 [2024-07-24 18:22:17.617898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.613 qpair failed and we were unable to recover it. 00:27:24.613 [2024-07-24 18:22:17.618169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.613 [2024-07-24 18:22:17.618199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.613 qpair failed and we were unable to recover it. 00:27:24.613 [2024-07-24 18:22:17.618502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.613 [2024-07-24 18:22:17.618512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.613 qpair failed and we were unable to recover it. 00:27:24.613 [2024-07-24 18:22:17.618686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.613 [2024-07-24 18:22:17.618695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.613 qpair failed and we were unable to recover it. 00:27:24.613 [2024-07-24 18:22:17.618778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.613 [2024-07-24 18:22:17.618787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.613 qpair failed and we were unable to recover it. 00:27:24.613 [2024-07-24 18:22:17.618995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.613 [2024-07-24 18:22:17.619005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.613 qpair failed and we were unable to recover it. 00:27:24.613 [2024-07-24 18:22:17.619221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.613 [2024-07-24 18:22:17.619232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.613 qpair failed and we were unable to recover it. 00:27:24.613 [2024-07-24 18:22:17.619324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.613 [2024-07-24 18:22:17.619334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.613 qpair failed and we were unable to recover it. 
00:27:24.613 [2024-07-24 18:22:17.619429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.613 [2024-07-24 18:22:17.619439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.613 qpair failed and we were unable to recover it. 00:27:24.613 [2024-07-24 18:22:17.619646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.613 [2024-07-24 18:22:17.619657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.613 qpair failed and we were unable to recover it. 00:27:24.613 [2024-07-24 18:22:17.619769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.613 [2024-07-24 18:22:17.619779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.613 qpair failed and we were unable to recover it. 00:27:24.613 [2024-07-24 18:22:17.619989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.613 [2024-07-24 18:22:17.619999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.613 qpair failed and we were unable to recover it. 00:27:24.613 [2024-07-24 18:22:17.620294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.613 [2024-07-24 18:22:17.620323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.613 qpair failed and we were unable to recover it. 00:27:24.613 [2024-07-24 18:22:17.620624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.613 [2024-07-24 18:22:17.620656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.613 qpair failed and we were unable to recover it. 00:27:24.613 [2024-07-24 18:22:17.620926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.613 [2024-07-24 18:22:17.620937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.613 qpair failed and we were unable to recover it. 00:27:24.613 [2024-07-24 18:22:17.621020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.613 [2024-07-24 18:22:17.621029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.613 qpair failed and we were unable to recover it. 00:27:24.613 [2024-07-24 18:22:17.621188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.613 [2024-07-24 18:22:17.621198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.613 qpair failed and we were unable to recover it. 00:27:24.613 [2024-07-24 18:22:17.621435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.613 [2024-07-24 18:22:17.621445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.613 qpair failed and we were unable to recover it. 
00:27:24.613 [2024-07-24 18:22:17.621526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.613 [2024-07-24 18:22:17.621536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.613 qpair failed and we were unable to recover it. 00:27:24.613 [2024-07-24 18:22:17.621637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.613 [2024-07-24 18:22:17.621648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.613 qpair failed and we were unable to recover it. 00:27:24.613 [2024-07-24 18:22:17.621745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.613 [2024-07-24 18:22:17.621756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.613 qpair failed and we were unable to recover it. 00:27:24.613 [2024-07-24 18:22:17.621989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.613 [2024-07-24 18:22:17.621998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.613 qpair failed and we were unable to recover it. 00:27:24.613 [2024-07-24 18:22:17.622223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.613 [2024-07-24 18:22:17.622233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.613 qpair failed and we were unable to recover it. 00:27:24.613 [2024-07-24 18:22:17.622394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.614 [2024-07-24 18:22:17.622404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.614 qpair failed and we were unable to recover it. 00:27:24.614 [2024-07-24 18:22:17.622632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.614 [2024-07-24 18:22:17.622642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.614 qpair failed and we were unable to recover it. 00:27:24.614 [2024-07-24 18:22:17.622809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.614 [2024-07-24 18:22:17.622819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.614 qpair failed and we were unable to recover it. 00:27:24.614 [2024-07-24 18:22:17.623058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.614 [2024-07-24 18:22:17.623068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.614 qpair failed and we were unable to recover it. 00:27:24.614 [2024-07-24 18:22:17.623235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.614 [2024-07-24 18:22:17.623265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.614 qpair failed and we were unable to recover it. 
00:27:24.614 [2024-07-24 18:22:17.623513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.614 [2024-07-24 18:22:17.623544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.614 qpair failed and we were unable to recover it. 00:27:24.614 [2024-07-24 18:22:17.623734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.614 [2024-07-24 18:22:17.623764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.614 qpair failed and we were unable to recover it. 00:27:24.614 [2024-07-24 18:22:17.624046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.614 [2024-07-24 18:22:17.624077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.614 qpair failed and we were unable to recover it. 00:27:24.614 [2024-07-24 18:22:17.624274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.614 [2024-07-24 18:22:17.624303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.614 qpair failed and we were unable to recover it. 00:27:24.614 [2024-07-24 18:22:17.624501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.614 [2024-07-24 18:22:17.624511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.614 qpair failed and we were unable to recover it. 00:27:24.614 [2024-07-24 18:22:17.624741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.614 [2024-07-24 18:22:17.624751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.614 qpair failed and we were unable to recover it. 00:27:24.614 [2024-07-24 18:22:17.624827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.614 [2024-07-24 18:22:17.624837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.614 qpair failed and we were unable to recover it. 00:27:24.614 [2024-07-24 18:22:17.625047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.614 [2024-07-24 18:22:17.625057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.614 qpair failed and we were unable to recover it. 00:27:24.614 [2024-07-24 18:22:17.625288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.614 [2024-07-24 18:22:17.625298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.614 qpair failed and we were unable to recover it. 00:27:24.614 [2024-07-24 18:22:17.625476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.614 [2024-07-24 18:22:17.625486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.614 qpair failed and we were unable to recover it. 
00:27:24.614 [2024-07-24 18:22:17.625674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.614 [2024-07-24 18:22:17.625684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.614 qpair failed and we were unable to recover it. 00:27:24.614 [2024-07-24 18:22:17.625930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.614 [2024-07-24 18:22:17.625960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.614 qpair failed and we were unable to recover it. 00:27:24.614 [2024-07-24 18:22:17.626189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.614 [2024-07-24 18:22:17.626219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.614 qpair failed and we were unable to recover it. 00:27:24.614 [2024-07-24 18:22:17.626473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.614 [2024-07-24 18:22:17.626517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.614 qpair failed and we were unable to recover it. 00:27:24.614 [2024-07-24 18:22:17.626773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.614 [2024-07-24 18:22:17.626803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.614 qpair failed and we were unable to recover it. 00:27:24.614 [2024-07-24 18:22:17.627011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.614 [2024-07-24 18:22:17.627020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.614 qpair failed and we were unable to recover it. 00:27:24.614 [2024-07-24 18:22:17.627177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.614 [2024-07-24 18:22:17.627187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.614 qpair failed and we were unable to recover it. 00:27:24.614 [2024-07-24 18:22:17.627404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.614 [2024-07-24 18:22:17.627414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.614 qpair failed and we were unable to recover it. 00:27:24.614 [2024-07-24 18:22:17.627580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.614 [2024-07-24 18:22:17.627590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.614 qpair failed and we were unable to recover it. 00:27:24.614 [2024-07-24 18:22:17.627747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.614 [2024-07-24 18:22:17.627757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.614 qpair failed and we were unable to recover it. 
00:27:24.614 [2024-07-24 18:22:17.627857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.614 [2024-07-24 18:22:17.627867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.614 qpair failed and we were unable to recover it. 00:27:24.614 [2024-07-24 18:22:17.627966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.614 [2024-07-24 18:22:17.627976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.614 qpair failed and we were unable to recover it. 00:27:24.614 [2024-07-24 18:22:17.628061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.614 [2024-07-24 18:22:17.628071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.614 qpair failed and we were unable to recover it. 00:27:24.614 [2024-07-24 18:22:17.628272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.614 [2024-07-24 18:22:17.628282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.614 qpair failed and we were unable to recover it. 00:27:24.614 [2024-07-24 18:22:17.628366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.614 [2024-07-24 18:22:17.628376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.614 qpair failed and we were unable to recover it. 00:27:24.614 [2024-07-24 18:22:17.628618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.614 [2024-07-24 18:22:17.628628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.614 qpair failed and we were unable to recover it. 00:27:24.614 [2024-07-24 18:22:17.628728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.614 [2024-07-24 18:22:17.628738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.614 qpair failed and we were unable to recover it. 00:27:24.614 [2024-07-24 18:22:17.628968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.614 [2024-07-24 18:22:17.628978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.614 qpair failed and we were unable to recover it. 00:27:24.614 [2024-07-24 18:22:17.629139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.614 [2024-07-24 18:22:17.629148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.614 qpair failed and we were unable to recover it. 00:27:24.614 [2024-07-24 18:22:17.629409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.614 [2024-07-24 18:22:17.629418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.614 qpair failed and we were unable to recover it. 
00:27:24.614 [2024-07-24 18:22:17.629570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.614 [2024-07-24 18:22:17.629580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.614 qpair failed and we were unable to recover it. 00:27:24.614 [2024-07-24 18:22:17.629762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.614 [2024-07-24 18:22:17.629798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.614 qpair failed and we were unable to recover it. 00:27:24.614 [2024-07-24 18:22:17.630004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.614 [2024-07-24 18:22:17.630034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.614 qpair failed and we were unable to recover it. 00:27:24.614 [2024-07-24 18:22:17.630332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.615 [2024-07-24 18:22:17.630363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.615 qpair failed and we were unable to recover it. 00:27:24.615 [2024-07-24 18:22:17.630636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.615 [2024-07-24 18:22:17.630667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.615 qpair failed and we were unable to recover it. 00:27:24.615 [2024-07-24 18:22:17.630873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.615 [2024-07-24 18:22:17.630903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.615 qpair failed and we were unable to recover it. 00:27:24.615 [2024-07-24 18:22:17.631041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.615 [2024-07-24 18:22:17.631051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.615 qpair failed and we were unable to recover it. 00:27:24.615 [2024-07-24 18:22:17.631314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.615 [2024-07-24 18:22:17.631324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.615 qpair failed and we were unable to recover it. 00:27:24.615 [2024-07-24 18:22:17.631557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.615 [2024-07-24 18:22:17.631583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.615 qpair failed and we were unable to recover it. 00:27:24.615 [2024-07-24 18:22:17.631729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.615 [2024-07-24 18:22:17.631739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.615 qpair failed and we were unable to recover it. 
00:27:24.615 [2024-07-24 18:22:17.631913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.615 [2024-07-24 18:22:17.631923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.615 qpair failed and we were unable to recover it. 00:27:24.615 [2024-07-24 18:22:17.632237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.615 [2024-07-24 18:22:17.632267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.615 qpair failed and we were unable to recover it. 00:27:24.615 [2024-07-24 18:22:17.632547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.615 [2024-07-24 18:22:17.632579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.615 qpair failed and we were unable to recover it. 00:27:24.615 [2024-07-24 18:22:17.632780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.615 [2024-07-24 18:22:17.632810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.615 qpair failed and we were unable to recover it. 00:27:24.615 [2024-07-24 18:22:17.632959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.615 [2024-07-24 18:22:17.632988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.615 qpair failed and we were unable to recover it. 00:27:24.615 [2024-07-24 18:22:17.633120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.615 [2024-07-24 18:22:17.633150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.615 qpair failed and we were unable to recover it. 00:27:24.615 [2024-07-24 18:22:17.633410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.615 [2024-07-24 18:22:17.633440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.615 qpair failed and we were unable to recover it. 00:27:24.615 [2024-07-24 18:22:17.633742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.615 [2024-07-24 18:22:17.633774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.615 qpair failed and we were unable to recover it. 00:27:24.615 [2024-07-24 18:22:17.633976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.615 [2024-07-24 18:22:17.633986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.615 qpair failed and we were unable to recover it. 00:27:24.615 [2024-07-24 18:22:17.634154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.615 [2024-07-24 18:22:17.634164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.615 qpair failed and we were unable to recover it. 
00:27:24.615 [2024-07-24 18:22:17.634360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.615 [2024-07-24 18:22:17.634370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.615 qpair failed and we were unable to recover it. 00:27:24.615 [2024-07-24 18:22:17.634567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.615 [2024-07-24 18:22:17.634577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.615 qpair failed and we were unable to recover it. 00:27:24.615 [2024-07-24 18:22:17.634723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.615 [2024-07-24 18:22:17.634733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.615 qpair failed and we were unable to recover it. 00:27:24.615 [2024-07-24 18:22:17.634891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.615 [2024-07-24 18:22:17.634901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.615 qpair failed and we were unable to recover it. 00:27:24.615 [2024-07-24 18:22:17.635159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.615 [2024-07-24 18:22:17.635168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.615 qpair failed and we were unable to recover it. 00:27:24.615 [2024-07-24 18:22:17.635380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.615 [2024-07-24 18:22:17.635390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.615 qpair failed and we were unable to recover it. 00:27:24.615 [2024-07-24 18:22:17.635647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.615 [2024-07-24 18:22:17.635657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.615 qpair failed and we were unable to recover it. 00:27:24.615 [2024-07-24 18:22:17.635753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.615 [2024-07-24 18:22:17.635763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.615 qpair failed and we were unable to recover it. 00:27:24.615 [2024-07-24 18:22:17.635917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.615 [2024-07-24 18:22:17.635927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.615 qpair failed and we were unable to recover it. 00:27:24.615 [2024-07-24 18:22:17.636033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.615 [2024-07-24 18:22:17.636042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.615 qpair failed and we were unable to recover it. 
00:27:24.615 [2024-07-24 18:22:17.636208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.615 [2024-07-24 18:22:17.636218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.615 qpair failed and we were unable to recover it. 00:27:24.615 [2024-07-24 18:22:17.636455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.615 [2024-07-24 18:22:17.636465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.615 qpair failed and we were unable to recover it. 00:27:24.615 [2024-07-24 18:22:17.636600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.615 [2024-07-24 18:22:17.636611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.615 qpair failed and we were unable to recover it. 00:27:24.615 [2024-07-24 18:22:17.636719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.615 [2024-07-24 18:22:17.636728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.615 qpair failed and we were unable to recover it. 00:27:24.615 [2024-07-24 18:22:17.636808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.615 [2024-07-24 18:22:17.636817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.615 qpair failed and we were unable to recover it. 00:27:24.615 [2024-07-24 18:22:17.636907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.615 [2024-07-24 18:22:17.636916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.615 qpair failed and we were unable to recover it. 00:27:24.615 [2024-07-24 18:22:17.637076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.615 [2024-07-24 18:22:17.637086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.615 qpair failed and we were unable to recover it. 00:27:24.615 [2024-07-24 18:22:17.637186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.615 [2024-07-24 18:22:17.637196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.615 qpair failed and we were unable to recover it. 00:27:24.615 [2024-07-24 18:22:17.637382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.615 [2024-07-24 18:22:17.637391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.615 qpair failed and we were unable to recover it. 00:27:24.615 [2024-07-24 18:22:17.637576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.615 [2024-07-24 18:22:17.637586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.615 qpair failed and we were unable to recover it. 
00:27:24.615 [2024-07-24 18:22:17.637748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.615 [2024-07-24 18:22:17.637758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.615 qpair failed and we were unable to recover it. 00:27:24.615 [2024-07-24 18:22:17.637910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.616 [2024-07-24 18:22:17.637922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.616 qpair failed and we were unable to recover it. 00:27:24.616 [2024-07-24 18:22:17.638161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.616 [2024-07-24 18:22:17.638171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.616 qpair failed and we were unable to recover it. 00:27:24.616 [2024-07-24 18:22:17.638383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.616 [2024-07-24 18:22:17.638393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.616 qpair failed and we were unable to recover it. 00:27:24.616 [2024-07-24 18:22:17.638481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.616 [2024-07-24 18:22:17.638494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.616 qpair failed and we were unable to recover it. 00:27:24.616 [2024-07-24 18:22:17.638598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.616 [2024-07-24 18:22:17.638608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.616 qpair failed and we were unable to recover it. 00:27:24.616 [2024-07-24 18:22:17.638765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.616 [2024-07-24 18:22:17.638775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.616 qpair failed and we were unable to recover it. 00:27:24.616 [2024-07-24 18:22:17.638933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.616 [2024-07-24 18:22:17.638942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.616 qpair failed and we were unable to recover it. 00:27:24.616 [2024-07-24 18:22:17.639019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.616 [2024-07-24 18:22:17.639028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.616 qpair failed and we were unable to recover it. 00:27:24.616 [2024-07-24 18:22:17.639255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.616 [2024-07-24 18:22:17.639265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.616 qpair failed and we were unable to recover it. 
00:27:24.904 [2024-07-24 18:22:17.673908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.904 [2024-07-24 18:22:17.673938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.904 qpair failed and we were unable to recover it. 00:27:24.904 [2024-07-24 18:22:17.674086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.904 [2024-07-24 18:22:17.674116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.904 qpair failed and we were unable to recover it. 00:27:24.904 [2024-07-24 18:22:17.674307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.904 [2024-07-24 18:22:17.674336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.904 qpair failed and we were unable to recover it. 00:27:24.904 [2024-07-24 18:22:17.674627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.904 [2024-07-24 18:22:17.674657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.904 qpair failed and we were unable to recover it. 00:27:24.904 [2024-07-24 18:22:17.674791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.904 [2024-07-24 18:22:17.674816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.904 qpair failed and we were unable to recover it. 00:27:24.904 [2024-07-24 18:22:17.674983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.904 [2024-07-24 18:22:17.674994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.904 qpair failed and we were unable to recover it. 00:27:24.904 [2024-07-24 18:22:17.675165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.905 [2024-07-24 18:22:17.675176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.905 qpair failed and we were unable to recover it. 00:27:24.905 [2024-07-24 18:22:17.675355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.905 [2024-07-24 18:22:17.675385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.905 qpair failed and we were unable to recover it. 00:27:24.905 [2024-07-24 18:22:17.675528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.905 [2024-07-24 18:22:17.675558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.905 qpair failed and we were unable to recover it. 00:27:24.905 [2024-07-24 18:22:17.675825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.905 [2024-07-24 18:22:17.675856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.905 qpair failed and we were unable to recover it. 
00:27:24.905 [2024-07-24 18:22:17.676002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.905 [2024-07-24 18:22:17.676012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.905 qpair failed and we were unable to recover it. 00:27:24.905 [2024-07-24 18:22:17.676128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.905 [2024-07-24 18:22:17.676137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.905 qpair failed and we were unable to recover it. 00:27:24.905 [2024-07-24 18:22:17.676231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.905 [2024-07-24 18:22:17.676241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.905 qpair failed and we were unable to recover it. 00:27:24.905 [2024-07-24 18:22:17.676502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.905 [2024-07-24 18:22:17.676535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.905 qpair failed and we were unable to recover it. 00:27:24.905 [2024-07-24 18:22:17.676676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.905 [2024-07-24 18:22:17.676705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.905 qpair failed and we were unable to recover it. 00:27:24.905 [2024-07-24 18:22:17.676929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.905 [2024-07-24 18:22:17.676958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.905 qpair failed and we were unable to recover it. 00:27:24.905 [2024-07-24 18:22:17.677241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.905 [2024-07-24 18:22:17.677273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.905 qpair failed and we were unable to recover it. 00:27:24.905 [2024-07-24 18:22:17.677468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.905 [2024-07-24 18:22:17.677510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.905 qpair failed and we were unable to recover it. 00:27:24.905 [2024-07-24 18:22:17.677665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.905 [2024-07-24 18:22:17.677695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.905 qpair failed and we were unable to recover it. 00:27:24.905 [2024-07-24 18:22:17.677944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.905 [2024-07-24 18:22:17.677973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.905 qpair failed and we were unable to recover it. 
00:27:24.905 [2024-07-24 18:22:17.678273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.905 [2024-07-24 18:22:17.678302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.905 qpair failed and we were unable to recover it. 00:27:24.905 [2024-07-24 18:22:17.678540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.905 [2024-07-24 18:22:17.678574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.905 qpair failed and we were unable to recover it. 00:27:24.905 [2024-07-24 18:22:17.678787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.905 [2024-07-24 18:22:17.678818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.905 qpair failed and we were unable to recover it. 00:27:24.905 [2024-07-24 18:22:17.678935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.905 [2024-07-24 18:22:17.678977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.905 qpair failed and we were unable to recover it. 00:27:24.905 [2024-07-24 18:22:17.679153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.905 [2024-07-24 18:22:17.679163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.905 qpair failed and we were unable to recover it. 00:27:24.905 [2024-07-24 18:22:17.679338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.905 [2024-07-24 18:22:17.679347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.905 qpair failed and we were unable to recover it. 00:27:24.905 [2024-07-24 18:22:17.679451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.905 [2024-07-24 18:22:17.679461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.905 qpair failed and we were unable to recover it. 00:27:24.905 [2024-07-24 18:22:17.679637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.905 [2024-07-24 18:22:17.679648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.905 qpair failed and we were unable to recover it. 00:27:24.905 [2024-07-24 18:22:17.679751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.905 [2024-07-24 18:22:17.679760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.905 qpair failed and we were unable to recover it. 00:27:24.905 [2024-07-24 18:22:17.679919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.905 [2024-07-24 18:22:17.679928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.905 qpair failed and we were unable to recover it. 
00:27:24.905 [2024-07-24 18:22:17.680285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.905 [2024-07-24 18:22:17.680315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.905 qpair failed and we were unable to recover it. 00:27:24.905 [2024-07-24 18:22:17.680462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.905 [2024-07-24 18:22:17.680508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.905 qpair failed and we were unable to recover it. 00:27:24.905 [2024-07-24 18:22:17.680712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.905 [2024-07-24 18:22:17.680742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.905 qpair failed and we were unable to recover it. 00:27:24.905 [2024-07-24 18:22:17.680963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.905 [2024-07-24 18:22:17.680994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.905 qpair failed and we were unable to recover it. 00:27:24.905 [2024-07-24 18:22:17.681138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.905 [2024-07-24 18:22:17.681170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.905 qpair failed and we were unable to recover it. 00:27:24.905 [2024-07-24 18:22:17.681370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.905 [2024-07-24 18:22:17.681399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.905 qpair failed and we were unable to recover it. 00:27:24.905 [2024-07-24 18:22:17.681670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.905 [2024-07-24 18:22:17.681681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.905 qpair failed and we were unable to recover it. 00:27:24.905 [2024-07-24 18:22:17.681778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.906 [2024-07-24 18:22:17.681787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.906 qpair failed and we were unable to recover it. 00:27:24.906 [2024-07-24 18:22:17.681886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.906 [2024-07-24 18:22:17.681896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.906 qpair failed and we were unable to recover it. 00:27:24.906 [2024-07-24 18:22:17.682012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.906 [2024-07-24 18:22:17.682023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.906 qpair failed and we were unable to recover it. 
00:27:24.906 [2024-07-24 18:22:17.682214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.906 [2024-07-24 18:22:17.682224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.906 qpair failed and we were unable to recover it. 00:27:24.906 [2024-07-24 18:22:17.682376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.906 [2024-07-24 18:22:17.682385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.906 qpair failed and we were unable to recover it. 00:27:24.906 [2024-07-24 18:22:17.682633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.906 [2024-07-24 18:22:17.682666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.906 qpair failed and we were unable to recover it. 00:27:24.906 [2024-07-24 18:22:17.682877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.906 [2024-07-24 18:22:17.682907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.906 qpair failed and we were unable to recover it. 00:27:24.906 [2024-07-24 18:22:17.683052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.906 [2024-07-24 18:22:17.683082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.906 qpair failed and we were unable to recover it. 00:27:24.906 [2024-07-24 18:22:17.683239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.906 [2024-07-24 18:22:17.683270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.906 qpair failed and we were unable to recover it. 00:27:24.906 [2024-07-24 18:22:17.683475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.906 [2024-07-24 18:22:17.683517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.906 qpair failed and we were unable to recover it. 00:27:24.906 [2024-07-24 18:22:17.683771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.906 [2024-07-24 18:22:17.683800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.906 qpair failed and we were unable to recover it. 00:27:24.906 [2024-07-24 18:22:17.683935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.906 [2024-07-24 18:22:17.683966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.906 qpair failed and we were unable to recover it. 00:27:24.906 [2024-07-24 18:22:17.684167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.906 [2024-07-24 18:22:17.684198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.906 qpair failed and we were unable to recover it. 
00:27:24.906 [2024-07-24 18:22:17.684400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.906 [2024-07-24 18:22:17.684431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.906 qpair failed and we were unable to recover it. 00:27:24.906 [2024-07-24 18:22:17.684660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.906 [2024-07-24 18:22:17.684692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.906 qpair failed and we were unable to recover it. 00:27:24.906 [2024-07-24 18:22:17.684881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.906 [2024-07-24 18:22:17.684890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.906 qpair failed and we were unable to recover it. 00:27:24.906 [2024-07-24 18:22:17.685056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.906 [2024-07-24 18:22:17.685087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.906 qpair failed and we were unable to recover it. 00:27:24.906 [2024-07-24 18:22:17.685368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.906 [2024-07-24 18:22:17.685399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.906 qpair failed and we were unable to recover it. 00:27:24.906 [2024-07-24 18:22:17.685591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.906 [2024-07-24 18:22:17.685601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.906 qpair failed and we were unable to recover it. 00:27:24.906 [2024-07-24 18:22:17.685708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.906 [2024-07-24 18:22:17.685718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.906 qpair failed and we were unable to recover it. 00:27:24.906 [2024-07-24 18:22:17.685865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.906 [2024-07-24 18:22:17.685874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.906 qpair failed and we were unable to recover it. 00:27:24.906 [2024-07-24 18:22:17.686036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.906 [2024-07-24 18:22:17.686046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.906 qpair failed and we were unable to recover it. 00:27:24.906 [2024-07-24 18:22:17.686138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.906 [2024-07-24 18:22:17.686148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.906 qpair failed and we were unable to recover it. 
00:27:24.906 [2024-07-24 18:22:17.686353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.906 [2024-07-24 18:22:17.686385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.906 qpair failed and we were unable to recover it. 00:27:24.906 [2024-07-24 18:22:17.686641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.906 [2024-07-24 18:22:17.686674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.906 qpair failed and we were unable to recover it. 00:27:24.906 [2024-07-24 18:22:17.686820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.906 [2024-07-24 18:22:17.686830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.906 qpair failed and we were unable to recover it. 00:27:24.906 [2024-07-24 18:22:17.686927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.906 [2024-07-24 18:22:17.686937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.906 qpair failed and we were unable to recover it. 00:27:24.906 [2024-07-24 18:22:17.687104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.906 [2024-07-24 18:22:17.687114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.907 qpair failed and we were unable to recover it. 00:27:24.907 [2024-07-24 18:22:17.687281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.907 [2024-07-24 18:22:17.687312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.907 qpair failed and we were unable to recover it. 00:27:24.907 [2024-07-24 18:22:17.687589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.907 [2024-07-24 18:22:17.687621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.907 qpair failed and we were unable to recover it. 00:27:24.907 [2024-07-24 18:22:17.687813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.907 [2024-07-24 18:22:17.687843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.907 qpair failed and we were unable to recover it. 00:27:24.907 [2024-07-24 18:22:17.688029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.907 [2024-07-24 18:22:17.688039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.907 qpair failed and we were unable to recover it. 00:27:24.907 [2024-07-24 18:22:17.688329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.907 [2024-07-24 18:22:17.688359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.907 qpair failed and we were unable to recover it. 
00:27:24.907 [2024-07-24 18:22:17.688590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.907 [2024-07-24 18:22:17.688621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.907 qpair failed and we were unable to recover it. 00:27:24.907 [2024-07-24 18:22:17.688750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.907 [2024-07-24 18:22:17.688762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.907 qpair failed and we were unable to recover it. 00:27:24.907 [2024-07-24 18:22:17.688871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.907 [2024-07-24 18:22:17.688881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.907 qpair failed and we were unable to recover it. 00:27:24.907 [2024-07-24 18:22:17.688993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.907 [2024-07-24 18:22:17.689002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.907 qpair failed and we were unable to recover it. 00:27:24.907 [2024-07-24 18:22:17.689193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.907 [2024-07-24 18:22:17.689223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.907 qpair failed and we were unable to recover it. 00:27:24.907 [2024-07-24 18:22:17.689428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.907 [2024-07-24 18:22:17.689458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.907 qpair failed and we were unable to recover it. 00:27:24.907 [2024-07-24 18:22:17.689822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.907 [2024-07-24 18:22:17.689855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.907 qpair failed and we were unable to recover it. 00:27:24.907 [2024-07-24 18:22:17.689994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.907 [2024-07-24 18:22:17.690023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.907 qpair failed and we were unable to recover it. 00:27:24.907 [2024-07-24 18:22:17.690304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.907 [2024-07-24 18:22:17.690335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.907 qpair failed and we were unable to recover it. 00:27:24.907 [2024-07-24 18:22:17.690463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.907 [2024-07-24 18:22:17.690524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.907 qpair failed and we were unable to recover it. 
00:27:24.907 [2024-07-24 18:22:17.690662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.907 [2024-07-24 18:22:17.690692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.907 qpair failed and we were unable to recover it. 00:27:24.907 [2024-07-24 18:22:17.690944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.907 [2024-07-24 18:22:17.690974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.907 qpair failed and we were unable to recover it. 00:27:24.907 [2024-07-24 18:22:17.691258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.907 [2024-07-24 18:22:17.691289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.907 qpair failed and we were unable to recover it. 00:27:24.907 [2024-07-24 18:22:17.691516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.907 [2024-07-24 18:22:17.691548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.907 qpair failed and we were unable to recover it. 00:27:24.907 [2024-07-24 18:22:17.691810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.907 [2024-07-24 18:22:17.691840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.907 qpair failed and we were unable to recover it. 00:27:24.907 [2024-07-24 18:22:17.691974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.907 [2024-07-24 18:22:17.691984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.907 qpair failed and we were unable to recover it. 00:27:24.907 [2024-07-24 18:22:17.692103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.907 [2024-07-24 18:22:17.692113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.907 qpair failed and we were unable to recover it. 00:27:24.907 [2024-07-24 18:22:17.692360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.907 [2024-07-24 18:22:17.692369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.907 qpair failed and we were unable to recover it. 00:27:24.907 [2024-07-24 18:22:17.692540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.907 [2024-07-24 18:22:17.692561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.907 qpair failed and we were unable to recover it. 00:27:24.907 [2024-07-24 18:22:17.692680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.907 [2024-07-24 18:22:17.692712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.907 qpair failed and we were unable to recover it. 
00:27:24.907 [2024-07-24 18:22:17.692922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.907 [2024-07-24 18:22:17.692953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.907 qpair failed and we were unable to recover it. 00:27:24.907 [2024-07-24 18:22:17.693191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.907 [2024-07-24 18:22:17.693222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.907 qpair failed and we were unable to recover it. 00:27:24.907 [2024-07-24 18:22:17.693437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.907 [2024-07-24 18:22:17.693468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.907 qpair failed and we were unable to recover it. 00:27:24.908 [2024-07-24 18:22:17.693613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.908 [2024-07-24 18:22:17.693645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.908 qpair failed and we were unable to recover it. 00:27:24.908 [2024-07-24 18:22:17.693803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.908 [2024-07-24 18:22:17.693835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.908 qpair failed and we were unable to recover it. 00:27:24.908 [2024-07-24 18:22:17.694013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.908 [2024-07-24 18:22:17.694024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.908 qpair failed and we were unable to recover it. 00:27:24.908 [2024-07-24 18:22:17.694273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.908 [2024-07-24 18:22:17.694283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.908 qpair failed and we were unable to recover it. 00:27:24.908 [2024-07-24 18:22:17.694380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.908 [2024-07-24 18:22:17.694390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.908 qpair failed and we were unable to recover it. 00:27:24.908 [2024-07-24 18:22:17.694544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.908 [2024-07-24 18:22:17.694554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.908 qpair failed and we were unable to recover it. 00:27:24.908 [2024-07-24 18:22:17.694650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.908 [2024-07-24 18:22:17.694660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.908 qpair failed and we were unable to recover it. 
00:27:24.908 [2024-07-24 18:22:17.694820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.908 [2024-07-24 18:22:17.694830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.908 qpair failed and we were unable to recover it. 00:27:24.908 [2024-07-24 18:22:17.694995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.908 [2024-07-24 18:22:17.695025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.908 qpair failed and we were unable to recover it. 00:27:24.908 [2024-07-24 18:22:17.695231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.908 [2024-07-24 18:22:17.695262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.908 qpair failed and we were unable to recover it. 00:27:24.908 [2024-07-24 18:22:17.695520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.908 [2024-07-24 18:22:17.695555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.908 qpair failed and we were unable to recover it. 00:27:24.908 [2024-07-24 18:22:17.695730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.908 [2024-07-24 18:22:17.695739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.908 qpair failed and we were unable to recover it. 00:27:24.908 [2024-07-24 18:22:17.695840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.908 [2024-07-24 18:22:17.695849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.908 qpair failed and we were unable to recover it. 00:27:24.908 [2024-07-24 18:22:17.695998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.908 [2024-07-24 18:22:17.696009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.908 qpair failed and we were unable to recover it. 00:27:24.908 [2024-07-24 18:22:17.696225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.908 [2024-07-24 18:22:17.696256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.908 qpair failed and we were unable to recover it. 00:27:24.908 [2024-07-24 18:22:17.696439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.908 [2024-07-24 18:22:17.696470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.908 qpair failed and we were unable to recover it. 00:27:24.908 [2024-07-24 18:22:17.696682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.908 [2024-07-24 18:22:17.696713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.908 qpair failed and we were unable to recover it. 
00:27:24.908 [2024-07-24 18:22:17.696871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.908 [2024-07-24 18:22:17.696901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.908 qpair failed and we were unable to recover it. 00:27:24.908 [2024-07-24 18:22:17.697049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.908 [2024-07-24 18:22:17.697084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.908 qpair failed and we were unable to recover it. 00:27:24.908 [2024-07-24 18:22:17.697291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.908 [2024-07-24 18:22:17.697322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.908 qpair failed and we were unable to recover it. 00:27:24.908 [2024-07-24 18:22:17.697543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.908 [2024-07-24 18:22:17.697575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.908 qpair failed and we were unable to recover it. 00:27:24.908 [2024-07-24 18:22:17.697773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.908 [2024-07-24 18:22:17.697804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.908 qpair failed and we were unable to recover it. 00:27:24.908 [2024-07-24 18:22:17.697937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.908 [2024-07-24 18:22:17.697967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.908 qpair failed and we were unable to recover it. 00:27:24.908 [2024-07-24 18:22:17.698104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.908 [2024-07-24 18:22:17.698134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.908 qpair failed and we were unable to recover it. 00:27:24.908 [2024-07-24 18:22:17.698319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.908 [2024-07-24 18:22:17.698349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.908 qpair failed and we were unable to recover it. 00:27:24.908 [2024-07-24 18:22:17.698628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.908 [2024-07-24 18:22:17.698661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.908 qpair failed and we were unable to recover it. 00:27:24.908 [2024-07-24 18:22:17.698783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.908 [2024-07-24 18:22:17.698792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.908 qpair failed and we were unable to recover it. 
00:27:24.909 [2024-07-24 18:22:17.699037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.909 [2024-07-24 18:22:17.699068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.909 qpair failed and we were unable to recover it. 00:27:24.909 [2024-07-24 18:22:17.699299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.909 [2024-07-24 18:22:17.699330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.909 qpair failed and we were unable to recover it. 00:27:24.909 [2024-07-24 18:22:17.699539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.909 [2024-07-24 18:22:17.699570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.909 qpair failed and we were unable to recover it. 00:27:24.909 [2024-07-24 18:22:17.699787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.909 [2024-07-24 18:22:17.699796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.909 qpair failed and we were unable to recover it. 00:27:24.909 [2024-07-24 18:22:17.699979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.909 [2024-07-24 18:22:17.699989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.909 qpair failed and we were unable to recover it. 00:27:24.909 [2024-07-24 18:22:17.700095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.909 [2024-07-24 18:22:17.700105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.909 qpair failed and we were unable to recover it. 00:27:24.909 [2024-07-24 18:22:17.700244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.909 [2024-07-24 18:22:17.700254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.909 qpair failed and we were unable to recover it. 00:27:24.909 [2024-07-24 18:22:17.700454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.909 [2024-07-24 18:22:17.700463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.909 qpair failed and we were unable to recover it. 00:27:24.909 [2024-07-24 18:22:17.700697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.909 [2024-07-24 18:22:17.700707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.909 qpair failed and we were unable to recover it. 00:27:24.909 [2024-07-24 18:22:17.700938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.909 [2024-07-24 18:22:17.700948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.909 qpair failed and we were unable to recover it. 
00:27:24.909 [2024-07-24 18:22:17.701178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.909 [2024-07-24 18:22:17.701188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.909 qpair failed and we were unable to recover it.
[... the same three-line failure sequence — posix_sock_create connect() failed with errno = 111, nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it." — repeats roughly 200 more times with advancing timestamps, from 18:22:17.701 through 18:22:17.738 (console time 00:27:24.909-00:27:24.916) ...]
00:27:24.916 [2024-07-24 18:22:17.739043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.916 [2024-07-24 18:22:17.739074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.916 qpair failed and we were unable to recover it. 00:27:24.916 [2024-07-24 18:22:17.739213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.916 [2024-07-24 18:22:17.739222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.916 qpair failed and we were unable to recover it. 00:27:24.916 [2024-07-24 18:22:17.739382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.916 [2024-07-24 18:22:17.739392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.916 qpair failed and we were unable to recover it. 00:27:24.916 [2024-07-24 18:22:17.739626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.916 [2024-07-24 18:22:17.739637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.916 qpair failed and we were unable to recover it. 00:27:24.916 [2024-07-24 18:22:17.739793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.916 [2024-07-24 18:22:17.739803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.916 qpair failed and we were unable to recover it. 00:27:24.916 [2024-07-24 18:22:17.739909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.916 [2024-07-24 18:22:17.739937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.916 qpair failed and we were unable to recover it. 00:27:24.916 [2024-07-24 18:22:17.740102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.916 [2024-07-24 18:22:17.740132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.916 qpair failed and we were unable to recover it. 00:27:24.916 [2024-07-24 18:22:17.740359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.916 [2024-07-24 18:22:17.740390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.917 qpair failed and we were unable to recover it. 00:27:24.917 [2024-07-24 18:22:17.740572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.917 [2024-07-24 18:22:17.740604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.917 qpair failed and we were unable to recover it. 00:27:24.917 [2024-07-24 18:22:17.740855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.917 [2024-07-24 18:22:17.740885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.917 qpair failed and we were unable to recover it. 
00:27:24.917 [2024-07-24 18:22:17.741022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.917 [2024-07-24 18:22:17.741051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.917 qpair failed and we were unable to recover it. 00:27:24.917 [2024-07-24 18:22:17.741247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.917 [2024-07-24 18:22:17.741277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.917 qpair failed and we were unable to recover it. 00:27:24.917 [2024-07-24 18:22:17.741506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.917 [2024-07-24 18:22:17.741537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.917 qpair failed and we were unable to recover it. 00:27:24.917 [2024-07-24 18:22:17.741682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.917 [2024-07-24 18:22:17.741712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.917 qpair failed and we were unable to recover it. 00:27:24.917 [2024-07-24 18:22:17.741923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.917 [2024-07-24 18:22:17.741954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.917 qpair failed and we were unable to recover it. 00:27:24.917 [2024-07-24 18:22:17.742088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.917 [2024-07-24 18:22:17.742118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.917 qpair failed and we were unable to recover it. 00:27:24.917 [2024-07-24 18:22:17.742292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.917 [2024-07-24 18:22:17.742301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.917 qpair failed and we were unable to recover it. 00:27:24.917 [2024-07-24 18:22:17.742560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.917 [2024-07-24 18:22:17.742592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.917 qpair failed and we were unable to recover it. 00:27:24.917 [2024-07-24 18:22:17.742793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.917 [2024-07-24 18:22:17.742822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.917 qpair failed and we were unable to recover it. 00:27:24.917 [2024-07-24 18:22:17.743015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.917 [2024-07-24 18:22:17.743056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.917 qpair failed and we were unable to recover it. 
00:27:24.917 [2024-07-24 18:22:17.743339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.917 [2024-07-24 18:22:17.743348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.917 qpair failed and we were unable to recover it. 00:27:24.917 [2024-07-24 18:22:17.743448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.917 [2024-07-24 18:22:17.743457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.917 qpair failed and we were unable to recover it. 00:27:24.917 [2024-07-24 18:22:17.743626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.917 [2024-07-24 18:22:17.743657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.917 qpair failed and we were unable to recover it. 00:27:24.917 [2024-07-24 18:22:17.743837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.917 [2024-07-24 18:22:17.743867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.917 qpair failed and we were unable to recover it. 00:27:24.917 [2024-07-24 18:22:17.744000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.917 [2024-07-24 18:22:17.744031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.917 qpair failed and we were unable to recover it. 00:27:24.917 [2024-07-24 18:22:17.744265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.917 [2024-07-24 18:22:17.744295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.917 qpair failed and we were unable to recover it. 00:27:24.917 [2024-07-24 18:22:17.744556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.917 [2024-07-24 18:22:17.744588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.917 qpair failed and we were unable to recover it. 00:27:24.917 [2024-07-24 18:22:17.744756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.917 [2024-07-24 18:22:17.744766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.917 qpair failed and we were unable to recover it. 00:27:24.917 [2024-07-24 18:22:17.744993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.917 [2024-07-24 18:22:17.745024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.917 qpair failed and we were unable to recover it. 00:27:24.917 [2024-07-24 18:22:17.745240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.917 [2024-07-24 18:22:17.745271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.917 qpair failed and we were unable to recover it. 
00:27:24.917 [2024-07-24 18:22:17.745401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.917 [2024-07-24 18:22:17.745432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.917 qpair failed and we were unable to recover it. 00:27:24.917 [2024-07-24 18:22:17.745640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.917 [2024-07-24 18:22:17.745672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.917 qpair failed and we were unable to recover it. 00:27:24.917 [2024-07-24 18:22:17.745918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.917 [2024-07-24 18:22:17.745928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.917 qpair failed and we were unable to recover it. 00:27:24.917 [2024-07-24 18:22:17.746029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.917 [2024-07-24 18:22:17.746039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.917 qpair failed and we were unable to recover it. 00:27:24.917 [2024-07-24 18:22:17.746301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.917 [2024-07-24 18:22:17.746310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.917 qpair failed and we were unable to recover it. 00:27:24.917 [2024-07-24 18:22:17.746528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.917 [2024-07-24 18:22:17.746538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.917 qpair failed and we were unable to recover it. 00:27:24.917 [2024-07-24 18:22:17.746688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.917 [2024-07-24 18:22:17.746697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.917 qpair failed and we were unable to recover it. 00:27:24.917 [2024-07-24 18:22:17.746873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.917 [2024-07-24 18:22:17.746904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.917 qpair failed and we were unable to recover it. 00:27:24.917 [2024-07-24 18:22:17.747047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.917 [2024-07-24 18:22:17.747083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.917 qpair failed and we were unable to recover it. 00:27:24.917 [2024-07-24 18:22:17.747360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.917 [2024-07-24 18:22:17.747391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.917 qpair failed and we were unable to recover it. 
00:27:24.917 [2024-07-24 18:22:17.747674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.917 [2024-07-24 18:22:17.747706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.917 qpair failed and we were unable to recover it. 00:27:24.917 [2024-07-24 18:22:17.747933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.917 [2024-07-24 18:22:17.747964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.917 qpair failed and we were unable to recover it. 00:27:24.917 [2024-07-24 18:22:17.748102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.917 [2024-07-24 18:22:17.748111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.917 qpair failed and we were unable to recover it. 00:27:24.917 [2024-07-24 18:22:17.748218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.917 [2024-07-24 18:22:17.748227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.917 qpair failed and we were unable to recover it. 00:27:24.917 [2024-07-24 18:22:17.748385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.917 [2024-07-24 18:22:17.748394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.917 qpair failed and we were unable to recover it. 00:27:24.917 [2024-07-24 18:22:17.748533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.918 [2024-07-24 18:22:17.748542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.918 qpair failed and we were unable to recover it. 00:27:24.918 [2024-07-24 18:22:17.748639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.918 [2024-07-24 18:22:17.748648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.918 qpair failed and we were unable to recover it. 00:27:24.918 [2024-07-24 18:22:17.748718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.918 [2024-07-24 18:22:17.748728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.918 qpair failed and we were unable to recover it. 00:27:24.918 [2024-07-24 18:22:17.748839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.918 [2024-07-24 18:22:17.748849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.918 qpair failed and we were unable to recover it. 00:27:24.918 [2024-07-24 18:22:17.748932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.918 [2024-07-24 18:22:17.748941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.918 qpair failed and we were unable to recover it. 
00:27:24.918 [2024-07-24 18:22:17.749042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.918 [2024-07-24 18:22:17.749051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.918 qpair failed and we were unable to recover it. 00:27:24.918 [2024-07-24 18:22:17.749203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.918 [2024-07-24 18:22:17.749213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.918 qpair failed and we were unable to recover it. 00:27:24.918 [2024-07-24 18:22:17.749294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.918 [2024-07-24 18:22:17.749303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.918 qpair failed and we were unable to recover it. 00:27:24.918 [2024-07-24 18:22:17.749392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.918 [2024-07-24 18:22:17.749401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.918 qpair failed and we were unable to recover it. 00:27:24.918 [2024-07-24 18:22:17.749514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.918 [2024-07-24 18:22:17.749540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.918 qpair failed and we were unable to recover it. 00:27:24.918 [2024-07-24 18:22:17.749614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.918 [2024-07-24 18:22:17.749624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.918 qpair failed and we were unable to recover it. 00:27:24.918 [2024-07-24 18:22:17.749710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.918 [2024-07-24 18:22:17.749720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.918 qpair failed and we were unable to recover it. 00:27:24.918 [2024-07-24 18:22:17.749928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.918 [2024-07-24 18:22:17.749937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.918 qpair failed and we were unable to recover it. 00:27:24.918 [2024-07-24 18:22:17.750165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.918 [2024-07-24 18:22:17.750175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.918 qpair failed and we were unable to recover it. 00:27:24.918 [2024-07-24 18:22:17.750262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.918 [2024-07-24 18:22:17.750271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.918 qpair failed and we were unable to recover it. 
00:27:24.918 [2024-07-24 18:22:17.750360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.918 [2024-07-24 18:22:17.750369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.918 qpair failed and we were unable to recover it. 00:27:24.918 [2024-07-24 18:22:17.750524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.918 [2024-07-24 18:22:17.750534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.918 qpair failed and we were unable to recover it. 00:27:24.918 [2024-07-24 18:22:17.750701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.918 [2024-07-24 18:22:17.750710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.918 qpair failed and we were unable to recover it. 00:27:24.918 [2024-07-24 18:22:17.750790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.918 [2024-07-24 18:22:17.750800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.918 qpair failed and we were unable to recover it. 00:27:24.918 [2024-07-24 18:22:17.750900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.918 [2024-07-24 18:22:17.750909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.918 qpair failed and we were unable to recover it. 00:27:24.918 [2024-07-24 18:22:17.751001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.918 [2024-07-24 18:22:17.751010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.918 qpair failed and we were unable to recover it. 00:27:24.918 [2024-07-24 18:22:17.751096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.918 [2024-07-24 18:22:17.751105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.918 qpair failed and we were unable to recover it. 00:27:24.918 [2024-07-24 18:22:17.751185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.918 [2024-07-24 18:22:17.751194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.918 qpair failed and we were unable to recover it. 00:27:24.918 [2024-07-24 18:22:17.751286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.918 [2024-07-24 18:22:17.751318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.918 qpair failed and we were unable to recover it. 00:27:24.918 [2024-07-24 18:22:17.751457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.918 [2024-07-24 18:22:17.751487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.918 qpair failed and we were unable to recover it. 
00:27:24.918 [2024-07-24 18:22:17.751623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.918 [2024-07-24 18:22:17.751653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.918 qpair failed and we were unable to recover it. 00:27:24.918 [2024-07-24 18:22:17.751781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.918 [2024-07-24 18:22:17.751811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.918 qpair failed and we were unable to recover it. 00:27:24.918 [2024-07-24 18:22:17.751936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.918 [2024-07-24 18:22:17.751945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.918 qpair failed and we were unable to recover it. 00:27:24.918 [2024-07-24 18:22:17.752029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.918 [2024-07-24 18:22:17.752039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.918 qpair failed and we were unable to recover it. 00:27:24.918 [2024-07-24 18:22:17.752114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.918 [2024-07-24 18:22:17.752123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.918 qpair failed and we were unable to recover it. 00:27:24.918 [2024-07-24 18:22:17.752214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.918 [2024-07-24 18:22:17.752224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.918 qpair failed and we were unable to recover it. 00:27:24.918 [2024-07-24 18:22:17.752297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.918 [2024-07-24 18:22:17.752306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.918 qpair failed and we were unable to recover it. 00:27:24.918 [2024-07-24 18:22:17.752472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.918 [2024-07-24 18:22:17.752513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.918 qpair failed and we were unable to recover it. 00:27:24.918 [2024-07-24 18:22:17.752709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.918 [2024-07-24 18:22:17.752745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.918 qpair failed and we were unable to recover it. 00:27:24.918 [2024-07-24 18:22:17.752892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.918 [2024-07-24 18:22:17.752922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.918 qpair failed and we were unable to recover it. 
00:27:24.918 [2024-07-24 18:22:17.753064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.918 [2024-07-24 18:22:17.753073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.918 qpair failed and we were unable to recover it. 00:27:24.918 [2024-07-24 18:22:17.753161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.918 [2024-07-24 18:22:17.753170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.918 qpair failed and we were unable to recover it. 00:27:24.918 [2024-07-24 18:22:17.753241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.918 [2024-07-24 18:22:17.753250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.918 qpair failed and we were unable to recover it. 00:27:24.918 [2024-07-24 18:22:17.753398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.919 [2024-07-24 18:22:17.753407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.919 qpair failed and we were unable to recover it. 00:27:24.919 [2024-07-24 18:22:17.753561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.919 [2024-07-24 18:22:17.753571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.919 qpair failed and we were unable to recover it. 00:27:24.919 [2024-07-24 18:22:17.753713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.919 [2024-07-24 18:22:17.753722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.919 qpair failed and we were unable to recover it. 00:27:24.919 [2024-07-24 18:22:17.753868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.919 [2024-07-24 18:22:17.753878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.919 qpair failed and we were unable to recover it. 00:27:24.919 [2024-07-24 18:22:17.753970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.919 [2024-07-24 18:22:17.753980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.919 qpair failed and we were unable to recover it. 00:27:24.919 [2024-07-24 18:22:17.754071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.919 [2024-07-24 18:22:17.754081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.919 qpair failed and we were unable to recover it. 00:27:24.919 [2024-07-24 18:22:17.754179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.919 [2024-07-24 18:22:17.754189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.919 qpair failed and we were unable to recover it. 
00:27:24.919 [2024-07-24 18:22:17.754290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.919 [2024-07-24 18:22:17.754300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.919 qpair failed and we were unable to recover it. 00:27:24.919 [2024-07-24 18:22:17.754391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.919 [2024-07-24 18:22:17.754400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.919 qpair failed and we were unable to recover it. 00:27:24.919 [2024-07-24 18:22:17.754551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.919 [2024-07-24 18:22:17.754562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.919 qpair failed and we were unable to recover it. 00:27:24.919 [2024-07-24 18:22:17.754774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.919 [2024-07-24 18:22:17.754805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.919 qpair failed and we were unable to recover it. 00:27:24.919 [2024-07-24 18:22:17.755001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.919 [2024-07-24 18:22:17.755032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.919 qpair failed and we were unable to recover it. 00:27:24.919 [2024-07-24 18:22:17.755173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.919 [2024-07-24 18:22:17.755204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.919 qpair failed and we were unable to recover it. 00:27:24.919 [2024-07-24 18:22:17.755456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.919 [2024-07-24 18:22:17.755486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.919 qpair failed and we were unable to recover it. 00:27:24.919 [2024-07-24 18:22:17.755684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.919 [2024-07-24 18:22:17.755715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.919 qpair failed and we were unable to recover it. 00:27:24.919 [2024-07-24 18:22:17.755852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.919 [2024-07-24 18:22:17.755883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.919 qpair failed and we were unable to recover it. 00:27:24.919 [2024-07-24 18:22:17.756084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.919 [2024-07-24 18:22:17.756114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.919 qpair failed and we were unable to recover it. 
00:27:24.919 [2024-07-24 18:22:17.756225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.919 [2024-07-24 18:22:17.756234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.919 qpair failed and we were unable to recover it. 00:27:24.919 [2024-07-24 18:22:17.756394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.919 [2024-07-24 18:22:17.756403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.919 qpair failed and we were unable to recover it. 00:27:24.919 [2024-07-24 18:22:17.756485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.919 [2024-07-24 18:22:17.756501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.919 qpair failed and we were unable to recover it. 00:27:24.919 [2024-07-24 18:22:17.756597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.919 [2024-07-24 18:22:17.756607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.919 qpair failed and we were unable to recover it. 00:27:24.919 [2024-07-24 18:22:17.756850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.919 [2024-07-24 18:22:17.756880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.919 qpair failed and we were unable to recover it. 00:27:24.919 [2024-07-24 18:22:17.757080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.919 [2024-07-24 18:22:17.757111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.919 qpair failed and we were unable to recover it. 00:27:24.919 [2024-07-24 18:22:17.757235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.919 [2024-07-24 18:22:17.757266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.919 qpair failed and we were unable to recover it. 00:27:24.919 [2024-07-24 18:22:17.757371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.919 [2024-07-24 18:22:17.757380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.919 qpair failed and we were unable to recover it. 00:27:24.919 [2024-07-24 18:22:17.757454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.919 [2024-07-24 18:22:17.757464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.919 qpair failed and we were unable to recover it. 00:27:24.919 [2024-07-24 18:22:17.757552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.919 [2024-07-24 18:22:17.757562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.919 qpair failed and we were unable to recover it. 
00:27:24.919 [2024-07-24 18:22:17.757768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.919 [2024-07-24 18:22:17.757778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.919 qpair failed and we were unable to recover it. 00:27:24.919 [2024-07-24 18:22:17.757933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.919 [2024-07-24 18:22:17.757963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.919 qpair failed and we were unable to recover it. 00:27:24.919 [2024-07-24 18:22:17.758084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.919 [2024-07-24 18:22:17.758114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.919 qpair failed and we were unable to recover it. 00:27:24.919 [2024-07-24 18:22:17.758237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.919 [2024-07-24 18:22:17.758268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.919 qpair failed and we were unable to recover it. 00:27:24.919 [2024-07-24 18:22:17.758546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.919 [2024-07-24 18:22:17.758578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.919 qpair failed and we were unable to recover it. 00:27:24.919 [2024-07-24 18:22:17.758729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.919 [2024-07-24 18:22:17.758759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.919 qpair failed and we were unable to recover it. 00:27:24.919 [2024-07-24 18:22:17.758952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.919 [2024-07-24 18:22:17.758983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.919 qpair failed and we were unable to recover it. 00:27:24.919 [2024-07-24 18:22:17.759180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.919 [2024-07-24 18:22:17.759189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.919 qpair failed and we were unable to recover it. 00:27:24.919 [2024-07-24 18:22:17.759289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.919 [2024-07-24 18:22:17.759300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.919 qpair failed and we were unable to recover it. 00:27:24.919 [2024-07-24 18:22:17.759388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.919 [2024-07-24 18:22:17.759398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.919 qpair failed and we were unable to recover it. 
00:27:24.919 [2024-07-24 18:22:17.759471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.919 [2024-07-24 18:22:17.759481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.920 qpair failed and we were unable to recover it. 00:27:24.920 [2024-07-24 18:22:17.759575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.920 [2024-07-24 18:22:17.759585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.920 qpair failed and we were unable to recover it. 00:27:24.920 [2024-07-24 18:22:17.759688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.920 [2024-07-24 18:22:17.759698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.920 qpair failed and we were unable to recover it. 00:27:24.920 [2024-07-24 18:22:17.759840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.920 [2024-07-24 18:22:17.759849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.920 qpair failed and we were unable to recover it. 00:27:24.920 [2024-07-24 18:22:17.759920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.920 [2024-07-24 18:22:17.759930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.920 qpair failed and we were unable to recover it. 00:27:24.920 [2024-07-24 18:22:17.760168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.920 [2024-07-24 18:22:17.760177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.920 qpair failed and we were unable to recover it. 00:27:24.920 [2024-07-24 18:22:17.760314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.920 [2024-07-24 18:22:17.760323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.920 qpair failed and we were unable to recover it. 00:27:24.920 [2024-07-24 18:22:17.760463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.920 [2024-07-24 18:22:17.760473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.920 qpair failed and we were unable to recover it. 00:27:24.920 [2024-07-24 18:22:17.760552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.920 [2024-07-24 18:22:17.760563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.920 qpair failed and we were unable to recover it. 00:27:24.920 [2024-07-24 18:22:17.760638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.920 [2024-07-24 18:22:17.760648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.920 qpair failed and we were unable to recover it. 
00:27:24.920 [2024-07-24 18:22:17.760735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.920 [2024-07-24 18:22:17.760745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.920 qpair failed and we were unable to recover it.
[elided: roughly 200 further repetitions of the same three-line error, wall-clock 18:22:17.760833 through 18:22:17.789373. Every attempt is a connect() to addr=10.0.0.2, port=4420 failing with errno = 111; the failures are logged first against tqpair=0x7f783c000b90, briefly against tqpair=0x15ddf30, then against tqpair=0x7f783c000b90 again, and each attempt ends with "qpair failed and we were unable to recover it."]
00:27:24.926 [2024-07-24 18:22:17.789456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.926 [2024-07-24 18:22:17.789466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.926 qpair failed and we were unable to recover it. 00:27:24.926 [2024-07-24 18:22:17.789646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.926 [2024-07-24 18:22:17.789657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.926 qpair failed and we were unable to recover it. 00:27:24.926 [2024-07-24 18:22:17.789738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.926 [2024-07-24 18:22:17.789748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.926 qpair failed and we were unable to recover it. 00:27:24.926 [2024-07-24 18:22:17.789837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.926 [2024-07-24 18:22:17.789847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.926 qpair failed and we were unable to recover it. 00:27:24.926 [2024-07-24 18:22:17.789930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.926 [2024-07-24 18:22:17.789941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.926 qpair failed and we were unable to recover it. 00:27:24.926 [2024-07-24 18:22:17.790038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.926 [2024-07-24 18:22:17.790048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.926 qpair failed and we were unable to recover it. 00:27:24.926 [2024-07-24 18:22:17.790212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.926 [2024-07-24 18:22:17.790222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.926 qpair failed and we were unable to recover it. 00:27:24.926 [2024-07-24 18:22:17.790451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.926 [2024-07-24 18:22:17.790461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.926 qpair failed and we were unable to recover it. 00:27:24.926 [2024-07-24 18:22:17.790537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.926 [2024-07-24 18:22:17.790547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.926 qpair failed and we were unable to recover it. 00:27:24.926 [2024-07-24 18:22:17.790633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.926 [2024-07-24 18:22:17.790644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.926 qpair failed and we were unable to recover it. 
00:27:24.926 [2024-07-24 18:22:17.790734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.926 [2024-07-24 18:22:17.790744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.926 qpair failed and we were unable to recover it. 00:27:24.926 [2024-07-24 18:22:17.790919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.926 [2024-07-24 18:22:17.790929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.926 qpair failed and we were unable to recover it. 00:27:24.926 [2024-07-24 18:22:17.791009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.926 [2024-07-24 18:22:17.791019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.926 qpair failed and we were unable to recover it. 00:27:24.926 [2024-07-24 18:22:17.791180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.926 [2024-07-24 18:22:17.791190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.926 qpair failed and we were unable to recover it. 00:27:24.926 [2024-07-24 18:22:17.791366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.926 [2024-07-24 18:22:17.791376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.926 qpair failed and we were unable to recover it. 00:27:24.926 [2024-07-24 18:22:17.791450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.926 [2024-07-24 18:22:17.791460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.926 qpair failed and we were unable to recover it. 00:27:24.926 [2024-07-24 18:22:17.791547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.926 [2024-07-24 18:22:17.791557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.926 qpair failed and we were unable to recover it. 00:27:24.926 [2024-07-24 18:22:17.791665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.926 [2024-07-24 18:22:17.791674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.926 qpair failed and we were unable to recover it. 00:27:24.926 [2024-07-24 18:22:17.791829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.926 [2024-07-24 18:22:17.791839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.926 qpair failed and we were unable to recover it. 00:27:24.926 [2024-07-24 18:22:17.791944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.926 [2024-07-24 18:22:17.791955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.926 qpair failed and we were unable to recover it. 
00:27:24.926 [2024-07-24 18:22:17.792058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.926 [2024-07-24 18:22:17.792068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.926 qpair failed and we were unable to recover it. 00:27:24.926 [2024-07-24 18:22:17.792153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.926 [2024-07-24 18:22:17.792163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.926 qpair failed and we were unable to recover it. 00:27:24.926 [2024-07-24 18:22:17.792238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.926 [2024-07-24 18:22:17.792248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.926 qpair failed and we were unable to recover it. 00:27:24.926 [2024-07-24 18:22:17.792392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.926 [2024-07-24 18:22:17.792402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.926 qpair failed and we were unable to recover it. 00:27:24.926 [2024-07-24 18:22:17.792554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.926 [2024-07-24 18:22:17.792565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.926 qpair failed and we were unable to recover it. 00:27:24.926 [2024-07-24 18:22:17.792636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.926 [2024-07-24 18:22:17.792646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.926 qpair failed and we were unable to recover it. 00:27:24.926 [2024-07-24 18:22:17.792781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.926 [2024-07-24 18:22:17.792791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.926 qpair failed and we were unable to recover it. 00:27:24.926 [2024-07-24 18:22:17.792954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.926 [2024-07-24 18:22:17.792964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.926 qpair failed and we were unable to recover it. 00:27:24.926 [2024-07-24 18:22:17.793048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.926 [2024-07-24 18:22:17.793059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.926 qpair failed and we were unable to recover it. 00:27:24.926 [2024-07-24 18:22:17.793206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.926 [2024-07-24 18:22:17.793217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.926 qpair failed and we were unable to recover it. 
00:27:24.926 [2024-07-24 18:22:17.793297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.926 [2024-07-24 18:22:17.793307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.926 qpair failed and we were unable to recover it. 00:27:24.926 [2024-07-24 18:22:17.793447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.926 [2024-07-24 18:22:17.793458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.927 qpair failed and we were unable to recover it. 00:27:24.927 [2024-07-24 18:22:17.793572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.927 [2024-07-24 18:22:17.793586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.927 qpair failed and we were unable to recover it. 00:27:24.927 [2024-07-24 18:22:17.793729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.927 [2024-07-24 18:22:17.793738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.927 qpair failed and we were unable to recover it. 00:27:24.927 [2024-07-24 18:22:17.793892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.927 [2024-07-24 18:22:17.793903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.927 qpair failed and we were unable to recover it. 00:27:24.927 [2024-07-24 18:22:17.794117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.927 [2024-07-24 18:22:17.794146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.927 qpair failed and we were unable to recover it. 00:27:24.927 [2024-07-24 18:22:17.794348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.927 [2024-07-24 18:22:17.794377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.927 qpair failed and we were unable to recover it. 00:27:24.927 [2024-07-24 18:22:17.794561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.927 [2024-07-24 18:22:17.794593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.927 qpair failed and we were unable to recover it. 00:27:24.927 [2024-07-24 18:22:17.794899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.927 [2024-07-24 18:22:17.794930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.927 qpair failed and we were unable to recover it. 00:27:24.927 [2024-07-24 18:22:17.795128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.927 [2024-07-24 18:22:17.795157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.927 qpair failed and we were unable to recover it. 
00:27:24.927 [2024-07-24 18:22:17.795422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.927 [2024-07-24 18:22:17.795433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.927 qpair failed and we were unable to recover it. 00:27:24.927 [2024-07-24 18:22:17.795533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.927 [2024-07-24 18:22:17.795544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.927 qpair failed and we were unable to recover it. 00:27:24.927 [2024-07-24 18:22:17.795634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.927 [2024-07-24 18:22:17.795645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.927 qpair failed and we were unable to recover it. 00:27:24.927 [2024-07-24 18:22:17.795884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.927 [2024-07-24 18:22:17.795915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.927 qpair failed and we were unable to recover it. 00:27:24.927 [2024-07-24 18:22:17.796028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.927 [2024-07-24 18:22:17.796058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.927 qpair failed and we were unable to recover it. 00:27:24.927 [2024-07-24 18:22:17.796266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.927 [2024-07-24 18:22:17.796296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.927 qpair failed and we were unable to recover it. 00:27:24.927 [2024-07-24 18:22:17.796534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.927 [2024-07-24 18:22:17.796567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.927 qpair failed and we were unable to recover it. 00:27:24.927 [2024-07-24 18:22:17.796875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.927 [2024-07-24 18:22:17.796905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.927 qpair failed and we were unable to recover it. 00:27:24.927 [2024-07-24 18:22:17.797043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.927 [2024-07-24 18:22:17.797073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.927 qpair failed and we were unable to recover it. 00:27:24.927 [2024-07-24 18:22:17.797277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.927 [2024-07-24 18:22:17.797287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.927 qpair failed and we were unable to recover it. 
00:27:24.927 [2024-07-24 18:22:17.797383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.927 [2024-07-24 18:22:17.797393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.927 qpair failed and we were unable to recover it. 00:27:24.927 [2024-07-24 18:22:17.797500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.927 [2024-07-24 18:22:17.797510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.927 qpair failed and we were unable to recover it. 00:27:24.927 [2024-07-24 18:22:17.797742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.927 [2024-07-24 18:22:17.797772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.927 qpair failed and we were unable to recover it. 00:27:24.927 [2024-07-24 18:22:17.797965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.927 [2024-07-24 18:22:17.797996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.927 qpair failed and we were unable to recover it. 00:27:24.927 [2024-07-24 18:22:17.798119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.927 [2024-07-24 18:22:17.798150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.927 qpair failed and we were unable to recover it. 00:27:24.927 [2024-07-24 18:22:17.798338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.927 [2024-07-24 18:22:17.798348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.927 qpair failed and we were unable to recover it. 00:27:24.927 [2024-07-24 18:22:17.798441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.927 [2024-07-24 18:22:17.798452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.927 qpair failed and we were unable to recover it. 00:27:24.927 [2024-07-24 18:22:17.798614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.927 [2024-07-24 18:22:17.798625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.927 qpair failed and we were unable to recover it. 00:27:24.927 [2024-07-24 18:22:17.798783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.927 [2024-07-24 18:22:17.798793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.927 qpair failed and we were unable to recover it. 00:27:24.927 [2024-07-24 18:22:17.798936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.927 [2024-07-24 18:22:17.798946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.927 qpair failed and we were unable to recover it. 
00:27:24.927 [2024-07-24 18:22:17.799067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.927 [2024-07-24 18:22:17.799097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.927 qpair failed and we were unable to recover it. 00:27:24.927 [2024-07-24 18:22:17.799298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.927 [2024-07-24 18:22:17.799328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.927 qpair failed and we were unable to recover it. 00:27:24.927 [2024-07-24 18:22:17.799445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.927 [2024-07-24 18:22:17.799475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.927 qpair failed and we were unable to recover it. 00:27:24.927 [2024-07-24 18:22:17.799701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.927 [2024-07-24 18:22:17.799732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.927 qpair failed and we were unable to recover it. 00:27:24.927 [2024-07-24 18:22:17.799888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.927 [2024-07-24 18:22:17.799920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.927 qpair failed and we were unable to recover it. 00:27:24.927 [2024-07-24 18:22:17.800047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.927 [2024-07-24 18:22:17.800077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.927 qpair failed and we were unable to recover it. 00:27:24.927 [2024-07-24 18:22:17.800312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.927 [2024-07-24 18:22:17.800322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.927 qpair failed and we were unable to recover it. 00:27:24.927 [2024-07-24 18:22:17.800477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.927 [2024-07-24 18:22:17.800487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.927 qpair failed and we were unable to recover it. 00:27:24.927 [2024-07-24 18:22:17.800589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.927 [2024-07-24 18:22:17.800599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.927 qpair failed and we were unable to recover it. 00:27:24.927 [2024-07-24 18:22:17.800749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.928 [2024-07-24 18:22:17.800759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.928 qpair failed and we were unable to recover it. 
00:27:24.928 [2024-07-24 18:22:17.800944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.928 [2024-07-24 18:22:17.800975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.928 qpair failed and we were unable to recover it. 00:27:24.928 [2024-07-24 18:22:17.801177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.928 [2024-07-24 18:22:17.801207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.928 qpair failed and we were unable to recover it. 00:27:24.928 [2024-07-24 18:22:17.801399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.928 [2024-07-24 18:22:17.801434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.928 qpair failed and we were unable to recover it. 00:27:24.928 [2024-07-24 18:22:17.801613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.928 [2024-07-24 18:22:17.801645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.928 qpair failed and we were unable to recover it. 00:27:24.928 [2024-07-24 18:22:17.801842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.928 [2024-07-24 18:22:17.801872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.928 qpair failed and we were unable to recover it. 00:27:24.928 [2024-07-24 18:22:17.802050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.928 [2024-07-24 18:22:17.802081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.928 qpair failed and we were unable to recover it. 00:27:24.928 [2024-07-24 18:22:17.802220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.928 [2024-07-24 18:22:17.802229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.928 qpair failed and we were unable to recover it. 00:27:24.928 [2024-07-24 18:22:17.802410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.928 [2024-07-24 18:22:17.802420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.928 qpair failed and we were unable to recover it. 00:27:24.928 [2024-07-24 18:22:17.802523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.928 [2024-07-24 18:22:17.802549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.928 qpair failed and we were unable to recover it. 00:27:24.928 [2024-07-24 18:22:17.802640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.928 [2024-07-24 18:22:17.802651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.928 qpair failed and we were unable to recover it. 
00:27:24.928 [2024-07-24 18:22:17.802802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.928 [2024-07-24 18:22:17.802833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.928 qpair failed and we were unable to recover it. 00:27:24.928 [2024-07-24 18:22:17.803014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.928 [2024-07-24 18:22:17.803043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.928 qpair failed and we were unable to recover it. 00:27:24.928 [2024-07-24 18:22:17.803234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.928 [2024-07-24 18:22:17.803264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.928 qpair failed and we were unable to recover it. 00:27:24.928 [2024-07-24 18:22:17.803451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.928 [2024-07-24 18:22:17.803461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.928 qpair failed and we were unable to recover it. 00:27:24.928 [2024-07-24 18:22:17.803554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.928 [2024-07-24 18:22:17.803566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.928 qpair failed and we were unable to recover it. 00:27:24.928 [2024-07-24 18:22:17.803640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.928 [2024-07-24 18:22:17.803651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.928 qpair failed and we were unable to recover it. 00:27:24.928 [2024-07-24 18:22:17.803752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.928 [2024-07-24 18:22:17.803762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.928 qpair failed and we were unable to recover it. 00:27:24.928 [2024-07-24 18:22:17.803857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.928 [2024-07-24 18:22:17.803869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.928 qpair failed and we were unable to recover it. 00:27:24.928 [2024-07-24 18:22:17.804054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.928 [2024-07-24 18:22:17.804064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.928 qpair failed and we were unable to recover it. 00:27:24.928 [2024-07-24 18:22:17.804143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.928 [2024-07-24 18:22:17.804152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.928 qpair failed and we were unable to recover it. 
00:27:24.928 [2024-07-24 18:22:17.804300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.928 [2024-07-24 18:22:17.804309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.928 qpair failed and we were unable to recover it. 00:27:24.928 [2024-07-24 18:22:17.804471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.928 [2024-07-24 18:22:17.804481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.928 qpair failed and we were unable to recover it. 00:27:24.928 [2024-07-24 18:22:17.804620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.928 [2024-07-24 18:22:17.804630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.928 qpair failed and we were unable to recover it. 00:27:24.928 [2024-07-24 18:22:17.804708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.928 [2024-07-24 18:22:17.804718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.928 qpair failed and we were unable to recover it. 00:27:24.928 [2024-07-24 18:22:17.804924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.928 [2024-07-24 18:22:17.804935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.928 qpair failed and we were unable to recover it. 00:27:24.928 [2024-07-24 18:22:17.805022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.928 [2024-07-24 18:22:17.805032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.928 qpair failed and we were unable to recover it. 00:27:24.928 [2024-07-24 18:22:17.805191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.928 [2024-07-24 18:22:17.805201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.928 qpair failed and we were unable to recover it. 00:27:24.928 [2024-07-24 18:22:17.805288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.928 [2024-07-24 18:22:17.805297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.928 qpair failed and we were unable to recover it. 00:27:24.928 [2024-07-24 18:22:17.805529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.928 [2024-07-24 18:22:17.805539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.928 qpair failed and we were unable to recover it. 00:27:24.928 [2024-07-24 18:22:17.805697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.928 [2024-07-24 18:22:17.805708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.928 qpair failed and we were unable to recover it. 
00:27:24.928 [2024-07-24 18:22:17.805803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.928 [2024-07-24 18:22:17.805813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.928 qpair failed and we were unable to recover it. 00:27:24.928 [2024-07-24 18:22:17.805901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.928 [2024-07-24 18:22:17.805911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.928 qpair failed and we were unable to recover it. 00:27:24.928 [2024-07-24 18:22:17.806069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.928 [2024-07-24 18:22:17.806079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.928 qpair failed and we were unable to recover it. 00:27:24.928 [2024-07-24 18:22:17.806150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.928 [2024-07-24 18:22:17.806159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.928 qpair failed and we were unable to recover it. 00:27:24.928 [2024-07-24 18:22:17.806245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.928 [2024-07-24 18:22:17.806254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.928 qpair failed and we were unable to recover it. 00:27:24.928 [2024-07-24 18:22:17.806323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.928 [2024-07-24 18:22:17.806332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.928 qpair failed and we were unable to recover it. 00:27:24.928 [2024-07-24 18:22:17.806414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.928 [2024-07-24 18:22:17.806423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.928 qpair failed and we were unable to recover it. 00:27:24.929 [2024-07-24 18:22:17.806501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.929 [2024-07-24 18:22:17.806511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.929 qpair failed and we were unable to recover it. 00:27:24.929 [2024-07-24 18:22:17.806590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.929 [2024-07-24 18:22:17.806599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.929 qpair failed and we were unable to recover it. 00:27:24.929 [2024-07-24 18:22:17.806681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.929 [2024-07-24 18:22:17.806690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.929 qpair failed and we were unable to recover it. 
00:27:24.929 [2024-07-24 18:22:17.806829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.929 [2024-07-24 18:22:17.806839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.929 qpair failed and we were unable to recover it. 00:27:24.929 [2024-07-24 18:22:17.806974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.929 [2024-07-24 18:22:17.806984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.929 qpair failed and we were unable to recover it. 00:27:24.929 [2024-07-24 18:22:17.807137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.929 [2024-07-24 18:22:17.807147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.929 qpair failed and we were unable to recover it. 00:27:24.929 [2024-07-24 18:22:17.807243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.929 [2024-07-24 18:22:17.807252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.929 qpair failed and we were unable to recover it. 00:27:24.929 [2024-07-24 18:22:17.807327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.929 [2024-07-24 18:22:17.807336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.929 qpair failed and we were unable to recover it. 00:27:24.929 [2024-07-24 18:22:17.807406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.929 [2024-07-24 18:22:17.807414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.929 qpair failed and we were unable to recover it. 00:27:24.929 [2024-07-24 18:22:17.807487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.929 [2024-07-24 18:22:17.807500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.929 qpair failed and we were unable to recover it. 00:27:24.929 [2024-07-24 18:22:17.807576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.929 [2024-07-24 18:22:17.807585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.929 qpair failed and we were unable to recover it. 00:27:24.929 [2024-07-24 18:22:17.807737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.929 [2024-07-24 18:22:17.807747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.929 qpair failed and we were unable to recover it. 00:27:24.929 [2024-07-24 18:22:17.807843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.929 [2024-07-24 18:22:17.807853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.929 qpair failed and we were unable to recover it. 
00:27:24.929 [2024-07-24 18:22:17.808088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.929 [2024-07-24 18:22:17.808110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.929 qpair failed and we were unable to recover it. 00:27:24.929 [2024-07-24 18:22:17.808194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.929 [2024-07-24 18:22:17.808203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.929 qpair failed and we were unable to recover it. 00:27:24.929 [2024-07-24 18:22:17.808298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.929 [2024-07-24 18:22:17.808307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.929 qpair failed and we were unable to recover it. 00:27:24.929 [2024-07-24 18:22:17.808391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.929 [2024-07-24 18:22:17.808400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.929 qpair failed and we were unable to recover it. 00:27:24.929 [2024-07-24 18:22:17.808506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.929 [2024-07-24 18:22:17.808516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.929 qpair failed and we were unable to recover it. 00:27:24.929 [2024-07-24 18:22:17.808748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.929 [2024-07-24 18:22:17.808778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.929 qpair failed and we were unable to recover it. 00:27:24.929 [2024-07-24 18:22:17.808919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.929 [2024-07-24 18:22:17.808949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.929 qpair failed and we were unable to recover it. 00:27:24.929 [2024-07-24 18:22:17.809079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.929 [2024-07-24 18:22:17.809108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.929 qpair failed and we were unable to recover it. 00:27:24.929 [2024-07-24 18:22:17.809318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.929 [2024-07-24 18:22:17.809347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.929 qpair failed and we were unable to recover it. 00:27:24.929 [2024-07-24 18:22:17.809537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.929 [2024-07-24 18:22:17.809568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.929 qpair failed and we were unable to recover it. 
00:27:24.929 [2024-07-24 18:22:17.809764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.929 [2024-07-24 18:22:17.809793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.929 qpair failed and we were unable to recover it. 00:27:24.929 [2024-07-24 18:22:17.809906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.929 [2024-07-24 18:22:17.809916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.929 qpair failed and we were unable to recover it. 00:27:24.929 [2024-07-24 18:22:17.809986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.929 [2024-07-24 18:22:17.809996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.929 qpair failed and we were unable to recover it. 00:27:24.929 [2024-07-24 18:22:17.810074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.929 [2024-07-24 18:22:17.810082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.929 qpair failed and we were unable to recover it. 00:27:24.929 [2024-07-24 18:22:17.810151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.929 [2024-07-24 18:22:17.810161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.929 qpair failed and we were unable to recover it. 00:27:24.929 [2024-07-24 18:22:17.810309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.929 [2024-07-24 18:22:17.810318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.929 qpair failed and we were unable to recover it. 00:27:24.929 [2024-07-24 18:22:17.810396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.929 [2024-07-24 18:22:17.810406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.929 qpair failed and we were unable to recover it. 00:27:24.929 [2024-07-24 18:22:17.810479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.929 [2024-07-24 18:22:17.810488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.929 qpair failed and we were unable to recover it. 00:27:24.929 [2024-07-24 18:22:17.810600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.930 [2024-07-24 18:22:17.810610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.930 qpair failed and we were unable to recover it. 00:27:24.930 [2024-07-24 18:22:17.810695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.930 [2024-07-24 18:22:17.810706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.930 qpair failed and we were unable to recover it. 
00:27:24.935 [2024-07-24 18:22:17.832020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.935 [2024-07-24 18:22:17.832030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.935 qpair failed and we were unable to recover it. 00:27:24.935 [2024-07-24 18:22:17.832104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.935 [2024-07-24 18:22:17.832113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.935 qpair failed and we were unable to recover it. 00:27:24.935 [2024-07-24 18:22:17.832186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.935 [2024-07-24 18:22:17.832197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.935 qpair failed and we were unable to recover it. 00:27:24.935 [2024-07-24 18:22:17.832278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.935 [2024-07-24 18:22:17.832289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.935 qpair failed and we were unable to recover it. 00:27:24.935 [2024-07-24 18:22:17.832370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.935 [2024-07-24 18:22:17.832380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.935 qpair failed and we were unable to recover it. 00:27:24.935 [2024-07-24 18:22:17.832438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.935 [2024-07-24 18:22:17.832446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.935 qpair failed and we were unable to recover it. 00:27:24.935 [2024-07-24 18:22:17.832537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.935 [2024-07-24 18:22:17.832548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.935 qpair failed and we were unable to recover it. 00:27:24.935 [2024-07-24 18:22:17.832625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.935 [2024-07-24 18:22:17.832635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.935 qpair failed and we were unable to recover it. 00:27:24.935 [2024-07-24 18:22:17.832711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.935 [2024-07-24 18:22:17.832720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.935 qpair failed and we were unable to recover it. 00:27:24.935 [2024-07-24 18:22:17.832810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.935 [2024-07-24 18:22:17.832821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.935 qpair failed and we were unable to recover it. 
00:27:24.935 [2024-07-24 18:22:17.832922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.935 [2024-07-24 18:22:17.832931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.935 qpair failed and we were unable to recover it. 00:27:24.935 [2024-07-24 18:22:17.833083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.935 [2024-07-24 18:22:17.833093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.935 qpair failed and we were unable to recover it. 00:27:24.935 [2024-07-24 18:22:17.833182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.935 [2024-07-24 18:22:17.833192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.935 qpair failed and we were unable to recover it. 00:27:24.935 [2024-07-24 18:22:17.833335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.936 [2024-07-24 18:22:17.833345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.936 qpair failed and we were unable to recover it. 00:27:24.936 [2024-07-24 18:22:17.833417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.936 [2024-07-24 18:22:17.833426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.936 qpair failed and we were unable to recover it. 00:27:24.936 [2024-07-24 18:22:17.833518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.936 [2024-07-24 18:22:17.833529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.936 qpair failed and we were unable to recover it. 00:27:24.936 [2024-07-24 18:22:17.833616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.936 [2024-07-24 18:22:17.833625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.936 qpair failed and we were unable to recover it. 00:27:24.936 [2024-07-24 18:22:17.833776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.936 [2024-07-24 18:22:17.833786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.936 qpair failed and we were unable to recover it. 00:27:24.936 [2024-07-24 18:22:17.833854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.936 [2024-07-24 18:22:17.833863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.936 qpair failed and we were unable to recover it. 00:27:24.936 [2024-07-24 18:22:17.833944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.936 [2024-07-24 18:22:17.833953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.936 qpair failed and we were unable to recover it. 
00:27:24.936 [2024-07-24 18:22:17.834029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.936 [2024-07-24 18:22:17.834039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.936 qpair failed and we were unable to recover it. 00:27:24.936 [2024-07-24 18:22:17.834105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.936 [2024-07-24 18:22:17.834114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.936 qpair failed and we were unable to recover it. 00:27:24.936 [2024-07-24 18:22:17.834193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.936 [2024-07-24 18:22:17.834204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.936 qpair failed and we were unable to recover it. 00:27:24.936 [2024-07-24 18:22:17.834285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.936 [2024-07-24 18:22:17.834294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.936 qpair failed and we were unable to recover it. 00:27:24.936 [2024-07-24 18:22:17.834360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.936 [2024-07-24 18:22:17.834369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.936 qpair failed and we were unable to recover it. 00:27:24.936 [2024-07-24 18:22:17.834442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.936 [2024-07-24 18:22:17.834451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.936 qpair failed and we were unable to recover it. 00:27:24.936 [2024-07-24 18:22:17.834527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.936 [2024-07-24 18:22:17.834536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.936 qpair failed and we were unable to recover it. 00:27:24.936 [2024-07-24 18:22:17.834670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.936 [2024-07-24 18:22:17.834680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.936 qpair failed and we were unable to recover it. 00:27:24.936 [2024-07-24 18:22:17.834763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.936 [2024-07-24 18:22:17.834772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.936 qpair failed and we were unable to recover it. 00:27:24.936 [2024-07-24 18:22:17.834844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.936 [2024-07-24 18:22:17.834854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.936 qpair failed and we were unable to recover it. 
00:27:24.936 [2024-07-24 18:22:17.834994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.936 [2024-07-24 18:22:17.835004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.936 qpair failed and we were unable to recover it. 00:27:24.936 [2024-07-24 18:22:17.835074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.936 [2024-07-24 18:22:17.835083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.936 qpair failed and we were unable to recover it. 00:27:24.936 [2024-07-24 18:22:17.835185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.936 [2024-07-24 18:22:17.835194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.936 qpair failed and we were unable to recover it. 00:27:24.936 [2024-07-24 18:22:17.835279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.936 [2024-07-24 18:22:17.835288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.936 qpair failed and we were unable to recover it. 00:27:24.936 [2024-07-24 18:22:17.835359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.936 [2024-07-24 18:22:17.835370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.936 qpair failed and we were unable to recover it. 00:27:24.936 [2024-07-24 18:22:17.835445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.936 [2024-07-24 18:22:17.835454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.936 qpair failed and we were unable to recover it. 00:27:24.936 [2024-07-24 18:22:17.835530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.936 [2024-07-24 18:22:17.835539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.936 qpair failed and we were unable to recover it. 00:27:24.936 [2024-07-24 18:22:17.835615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.936 [2024-07-24 18:22:17.835625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.936 qpair failed and we were unable to recover it. 00:27:24.936 [2024-07-24 18:22:17.835717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.936 [2024-07-24 18:22:17.835726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.936 qpair failed and we were unable to recover it. 00:27:24.936 [2024-07-24 18:22:17.835895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.936 [2024-07-24 18:22:17.835905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.936 qpair failed and we were unable to recover it. 
00:27:24.936 [2024-07-24 18:22:17.836003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.936 [2024-07-24 18:22:17.836012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.936 qpair failed and we were unable to recover it. 00:27:24.936 [2024-07-24 18:22:17.836161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.936 [2024-07-24 18:22:17.836171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.936 qpair failed and we were unable to recover it. 00:27:24.936 [2024-07-24 18:22:17.836307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.936 [2024-07-24 18:22:17.836317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.936 qpair failed and we were unable to recover it. 00:27:24.936 [2024-07-24 18:22:17.836384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.936 [2024-07-24 18:22:17.836394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.936 qpair failed and we were unable to recover it. 00:27:24.936 [2024-07-24 18:22:17.836566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.936 [2024-07-24 18:22:17.836575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.936 qpair failed and we were unable to recover it. 00:27:24.936 [2024-07-24 18:22:17.836662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.936 [2024-07-24 18:22:17.836672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.936 qpair failed and we were unable to recover it. 00:27:24.936 [2024-07-24 18:22:17.836759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.936 [2024-07-24 18:22:17.836768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.936 qpair failed and we were unable to recover it. 00:27:24.936 [2024-07-24 18:22:17.836912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.937 [2024-07-24 18:22:17.836922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.937 qpair failed and we were unable to recover it. 00:27:24.937 [2024-07-24 18:22:17.837079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.937 [2024-07-24 18:22:17.837088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.937 qpair failed and we were unable to recover it. 00:27:24.937 [2024-07-24 18:22:17.837161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.937 [2024-07-24 18:22:17.837172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.937 qpair failed and we were unable to recover it. 
00:27:24.937 [2024-07-24 18:22:17.837248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.937 [2024-07-24 18:22:17.837258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.937 qpair failed and we were unable to recover it. 00:27:24.937 [2024-07-24 18:22:17.837352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.937 [2024-07-24 18:22:17.837361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.937 qpair failed and we were unable to recover it. 00:27:24.937 [2024-07-24 18:22:17.837522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.937 [2024-07-24 18:22:17.837533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.937 qpair failed and we were unable to recover it. 00:27:24.937 [2024-07-24 18:22:17.837693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.937 [2024-07-24 18:22:17.837702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.937 qpair failed and we were unable to recover it. 00:27:24.937 [2024-07-24 18:22:17.837800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.937 [2024-07-24 18:22:17.837809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.937 qpair failed and we were unable to recover it. 00:27:24.937 [2024-07-24 18:22:17.837886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.937 [2024-07-24 18:22:17.837896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.937 qpair failed and we were unable to recover it. 00:27:24.937 [2024-07-24 18:22:17.837983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.937 [2024-07-24 18:22:17.837994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.937 qpair failed and we were unable to recover it. 00:27:24.937 [2024-07-24 18:22:17.838063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.937 [2024-07-24 18:22:17.838073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.937 qpair failed and we were unable to recover it. 00:27:24.937 [2024-07-24 18:22:17.838161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.937 [2024-07-24 18:22:17.838170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.937 qpair failed and we were unable to recover it. 00:27:24.937 [2024-07-24 18:22:17.838259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.937 [2024-07-24 18:22:17.838270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.937 qpair failed and we were unable to recover it. 
00:27:24.937 [2024-07-24 18:22:17.838353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.937 [2024-07-24 18:22:17.838363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.937 qpair failed and we were unable to recover it. 00:27:24.937 [2024-07-24 18:22:17.838454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.937 [2024-07-24 18:22:17.838465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.937 qpair failed and we were unable to recover it. 00:27:24.937 [2024-07-24 18:22:17.838547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.937 [2024-07-24 18:22:17.838556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.937 qpair failed and we were unable to recover it. 00:27:24.937 [2024-07-24 18:22:17.838651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.937 [2024-07-24 18:22:17.838661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.937 qpair failed and we were unable to recover it. 00:27:24.937 [2024-07-24 18:22:17.838733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.937 [2024-07-24 18:22:17.838743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.937 qpair failed and we were unable to recover it. 00:27:24.937 [2024-07-24 18:22:17.838879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.937 [2024-07-24 18:22:17.838888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.937 qpair failed and we were unable to recover it. 00:27:24.937 [2024-07-24 18:22:17.839034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.937 [2024-07-24 18:22:17.839044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.937 qpair failed and we were unable to recover it. 00:27:24.937 [2024-07-24 18:22:17.839154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.937 [2024-07-24 18:22:17.839163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.937 qpair failed and we were unable to recover it. 00:27:24.937 [2024-07-24 18:22:17.839242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.937 [2024-07-24 18:22:17.839252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.937 qpair failed and we were unable to recover it. 00:27:24.937 [2024-07-24 18:22:17.839349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.937 [2024-07-24 18:22:17.839359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.937 qpair failed and we were unable to recover it. 
00:27:24.937 [2024-07-24 18:22:17.839433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.937 [2024-07-24 18:22:17.839442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.937 qpair failed and we were unable to recover it. 00:27:24.937 [2024-07-24 18:22:17.839643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.937 [2024-07-24 18:22:17.839653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.937 qpair failed and we were unable to recover it. 00:27:24.937 [2024-07-24 18:22:17.839741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.937 [2024-07-24 18:22:17.839750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.937 qpair failed and we were unable to recover it. 00:27:24.937 [2024-07-24 18:22:17.839856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.937 [2024-07-24 18:22:17.839866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.937 qpair failed and we were unable to recover it. 00:27:24.937 [2024-07-24 18:22:17.839942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.937 [2024-07-24 18:22:17.839952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.937 qpair failed and we were unable to recover it. 00:27:24.937 [2024-07-24 18:22:17.840027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.937 [2024-07-24 18:22:17.840036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.937 qpair failed and we were unable to recover it. 00:27:24.937 [2024-07-24 18:22:17.840124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.937 [2024-07-24 18:22:17.840135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.937 qpair failed and we were unable to recover it. 00:27:24.937 [2024-07-24 18:22:17.840232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.937 [2024-07-24 18:22:17.840242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.937 qpair failed and we were unable to recover it. 00:27:24.937 [2024-07-24 18:22:17.840411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.937 [2024-07-24 18:22:17.840421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.937 qpair failed and we were unable to recover it. 00:27:24.937 [2024-07-24 18:22:17.840519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.937 [2024-07-24 18:22:17.840530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.937 qpair failed and we were unable to recover it. 
00:27:24.937 [2024-07-24 18:22:17.840606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.937 [2024-07-24 18:22:17.840616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.937 qpair failed and we were unable to recover it. 00:27:24.938 [2024-07-24 18:22:17.840760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.938 [2024-07-24 18:22:17.840770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.938 qpair failed and we were unable to recover it. 00:27:24.938 [2024-07-24 18:22:17.840860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.938 [2024-07-24 18:22:17.840870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.938 qpair failed and we were unable to recover it. 00:27:24.938 [2024-07-24 18:22:17.840962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.938 [2024-07-24 18:22:17.840972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.938 qpair failed and we were unable to recover it. 00:27:24.938 [2024-07-24 18:22:17.841052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.938 [2024-07-24 18:22:17.841062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.938 qpair failed and we were unable to recover it. 00:27:24.938 [2024-07-24 18:22:17.841137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.938 [2024-07-24 18:22:17.841146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.938 qpair failed and we were unable to recover it. 00:27:24.938 [2024-07-24 18:22:17.841213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.938 [2024-07-24 18:22:17.841222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.938 qpair failed and we were unable to recover it. 00:27:24.938 [2024-07-24 18:22:17.841371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.938 [2024-07-24 18:22:17.841381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.938 qpair failed and we were unable to recover it. 00:27:24.938 [2024-07-24 18:22:17.841524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.938 [2024-07-24 18:22:17.841534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.938 qpair failed and we were unable to recover it. 00:27:24.938 [2024-07-24 18:22:17.841674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.938 [2024-07-24 18:22:17.841687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.938 qpair failed and we were unable to recover it. 
00:27:24.938 [2024-07-24 18:22:17.841877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.938 [2024-07-24 18:22:17.841887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.938 qpair failed and we were unable to recover it. 00:27:24.938 [2024-07-24 18:22:17.841957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.938 [2024-07-24 18:22:17.841967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.938 qpair failed and we were unable to recover it. 00:27:24.938 [2024-07-24 18:22:17.842089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.938 [2024-07-24 18:22:17.842125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:24.938 qpair failed and we were unable to recover it. 00:27:24.938 [2024-07-24 18:22:17.842243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.938 [2024-07-24 18:22:17.842279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.938 qpair failed and we were unable to recover it. 00:27:24.938 [2024-07-24 18:22:17.842404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.938 [2024-07-24 18:22:17.842440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7834000b90 with addr=10.0.0.2, port=4420 00:27:24.938 qpair failed and we were unable to recover it. 00:27:24.938 [2024-07-24 18:22:17.842541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.938 [2024-07-24 18:22:17.842552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.938 qpair failed and we were unable to recover it. 00:27:24.938 [2024-07-24 18:22:17.842706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.938 [2024-07-24 18:22:17.842716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.938 qpair failed and we were unable to recover it. 00:27:24.938 [2024-07-24 18:22:17.842810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.938 [2024-07-24 18:22:17.842820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.938 qpair failed and we were unable to recover it. 00:27:24.938 [2024-07-24 18:22:17.842971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.938 [2024-07-24 18:22:17.842981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.938 qpair failed and we were unable to recover it. 00:27:24.938 [2024-07-24 18:22:17.843062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.938 [2024-07-24 18:22:17.843071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.938 qpair failed and we were unable to recover it. 
00:27:24.938 [2024-07-24 18:22:17.843170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.938 [2024-07-24 18:22:17.843180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.938 qpair failed and we were unable to recover it. 00:27:24.938 [2024-07-24 18:22:17.843250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.938 [2024-07-24 18:22:17.843259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.938 qpair failed and we were unable to recover it. 00:27:24.938 [2024-07-24 18:22:17.843338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.938 [2024-07-24 18:22:17.843347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.938 qpair failed and we were unable to recover it. 00:27:24.938 [2024-07-24 18:22:17.843440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.938 [2024-07-24 18:22:17.843450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.938 qpair failed and we were unable to recover it. 00:27:24.938 [2024-07-24 18:22:17.843527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.938 [2024-07-24 18:22:17.843538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.938 qpair failed and we were unable to recover it. 00:27:24.938 [2024-07-24 18:22:17.843630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.938 [2024-07-24 18:22:17.843639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.938 qpair failed and we were unable to recover it. 00:27:24.938 [2024-07-24 18:22:17.843797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.938 [2024-07-24 18:22:17.843828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.938 qpair failed and we were unable to recover it. 00:27:24.938 [2024-07-24 18:22:17.844017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.938 [2024-07-24 18:22:17.844047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.938 qpair failed and we were unable to recover it. 00:27:24.938 [2024-07-24 18:22:17.844258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.938 [2024-07-24 18:22:17.844289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.938 qpair failed and we were unable to recover it. 00:27:24.938 [2024-07-24 18:22:17.844420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.938 [2024-07-24 18:22:17.844430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.938 qpair failed and we were unable to recover it. 
00:27:24.938 [2024-07-24 18:22:17.844504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.938 [2024-07-24 18:22:17.844514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.938 qpair failed and we were unable to recover it. 00:27:24.938 [2024-07-24 18:22:17.844602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.938 [2024-07-24 18:22:17.844612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.938 qpair failed and we were unable to recover it. 00:27:24.938 [2024-07-24 18:22:17.844712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.938 [2024-07-24 18:22:17.844725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.938 qpair failed and we were unable to recover it. 00:27:24.938 [2024-07-24 18:22:17.844889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.938 [2024-07-24 18:22:17.844899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.938 qpair failed and we were unable to recover it. 00:27:24.938 [2024-07-24 18:22:17.844969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.938 [2024-07-24 18:22:17.844979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.938 qpair failed and we were unable to recover it. 00:27:24.938 [2024-07-24 18:22:17.845057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.938 [2024-07-24 18:22:17.845067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.938 qpair failed and we were unable to recover it. 00:27:24.938 [2024-07-24 18:22:17.845153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.938 [2024-07-24 18:22:17.845162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.938 qpair failed and we were unable to recover it. 00:27:24.938 [2024-07-24 18:22:17.845238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.938 [2024-07-24 18:22:17.845247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.939 qpair failed and we were unable to recover it. 00:27:24.939 [2024-07-24 18:22:17.845389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.939 [2024-07-24 18:22:17.845399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.939 qpair failed and we were unable to recover it. 00:27:24.939 [2024-07-24 18:22:17.845513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.939 [2024-07-24 18:22:17.845524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.939 qpair failed and we were unable to recover it. 
00:27:24.939 [2024-07-24 18:22:17.845625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.939 [2024-07-24 18:22:17.845635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.939 qpair failed and we were unable to recover it. 00:27:24.939 [2024-07-24 18:22:17.845706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.939 [2024-07-24 18:22:17.845716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.939 qpair failed and we were unable to recover it. 00:27:24.939 [2024-07-24 18:22:17.845866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.939 [2024-07-24 18:22:17.845897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.939 qpair failed and we were unable to recover it. 00:27:24.939 [2024-07-24 18:22:17.846110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.939 [2024-07-24 18:22:17.846140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.939 qpair failed and we were unable to recover it. 00:27:24.939 [2024-07-24 18:22:17.846250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.939 [2024-07-24 18:22:17.846280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.939 qpair failed and we were unable to recover it. 00:27:24.939 [2024-07-24 18:22:17.846396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.939 [2024-07-24 18:22:17.846406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.939 qpair failed and we were unable to recover it. 00:27:24.939 [2024-07-24 18:22:17.846565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.939 [2024-07-24 18:22:17.846575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.939 qpair failed and we were unable to recover it. 00:27:24.939 [2024-07-24 18:22:17.846750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.939 [2024-07-24 18:22:17.846759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.939 qpair failed and we were unable to recover it. 00:27:24.939 [2024-07-24 18:22:17.846908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.939 [2024-07-24 18:22:17.846917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.939 qpair failed and we were unable to recover it. 00:27:24.939 [2024-07-24 18:22:17.846995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.939 [2024-07-24 18:22:17.847006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.939 qpair failed and we were unable to recover it. 
00:27:24.939 [2024-07-24 18:22:17.847107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.939 [2024-07-24 18:22:17.847116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.939 qpair failed and we were unable to recover it. 00:27:24.939 [2024-07-24 18:22:17.847201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.939 [2024-07-24 18:22:17.847210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.939 qpair failed and we were unable to recover it. 00:27:24.939 [2024-07-24 18:22:17.847351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.939 [2024-07-24 18:22:17.847361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.939 qpair failed and we were unable to recover it. 00:27:24.939 [2024-07-24 18:22:17.847447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.939 [2024-07-24 18:22:17.847457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.939 qpair failed and we were unable to recover it. 00:27:24.939 [2024-07-24 18:22:17.847600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.939 [2024-07-24 18:22:17.847610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.939 qpair failed and we were unable to recover it. 00:27:24.939 [2024-07-24 18:22:17.847685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.939 [2024-07-24 18:22:17.847695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.939 qpair failed and we were unable to recover it. 00:27:24.939 [2024-07-24 18:22:17.847768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.939 [2024-07-24 18:22:17.847778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.939 qpair failed and we were unable to recover it. 00:27:24.939 [2024-07-24 18:22:17.847870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.939 [2024-07-24 18:22:17.847879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.939 qpair failed and we were unable to recover it. 00:27:24.939 [2024-07-24 18:22:17.847977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.939 [2024-07-24 18:22:17.847986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.939 qpair failed and we were unable to recover it. 00:27:24.939 [2024-07-24 18:22:17.848074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.939 [2024-07-24 18:22:17.848083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.939 qpair failed and we were unable to recover it. 
00:27:24.939 [2024-07-24 18:22:17.848170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.939 [2024-07-24 18:22:17.848180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.939 qpair failed and we were unable to recover it. 00:27:24.939 [2024-07-24 18:22:17.848259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.939 [2024-07-24 18:22:17.848268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.939 qpair failed and we were unable to recover it. 00:27:24.939 [2024-07-24 18:22:17.848375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.939 [2024-07-24 18:22:17.848385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.939 qpair failed and we were unable to recover it. 00:27:24.939 [2024-07-24 18:22:17.848470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.939 [2024-07-24 18:22:17.848480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.939 qpair failed and we were unable to recover it. 00:27:24.939 [2024-07-24 18:22:17.848639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.939 [2024-07-24 18:22:17.848650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.939 qpair failed and we were unable to recover it. 00:27:24.939 [2024-07-24 18:22:17.848747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.939 [2024-07-24 18:22:17.848758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.939 qpair failed and we were unable to recover it. 00:27:24.939 [2024-07-24 18:22:17.848834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.939 [2024-07-24 18:22:17.848844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.939 qpair failed and we were unable to recover it. 00:27:24.939 [2024-07-24 18:22:17.848930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.939 [2024-07-24 18:22:17.848940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.939 qpair failed and we were unable to recover it. 00:27:24.939 [2024-07-24 18:22:17.849023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.939 [2024-07-24 18:22:17.849033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.939 qpair failed and we were unable to recover it. 00:27:24.939 [2024-07-24 18:22:17.849106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.939 [2024-07-24 18:22:17.849115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.939 qpair failed and we were unable to recover it. 
00:27:24.939 [2024-07-24 18:22:17.849184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.939 [2024-07-24 18:22:17.849193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.939 qpair failed and we were unable to recover it. 00:27:24.940 [2024-07-24 18:22:17.849276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.940 [2024-07-24 18:22:17.849286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.940 qpair failed and we were unable to recover it. 00:27:24.940 [2024-07-24 18:22:17.849358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.940 [2024-07-24 18:22:17.849367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.940 qpair failed and we were unable to recover it. 00:27:24.940 [2024-07-24 18:22:17.849452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.940 [2024-07-24 18:22:17.849462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.940 qpair failed and we were unable to recover it. 00:27:24.940 [2024-07-24 18:22:17.849544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.940 [2024-07-24 18:22:17.849553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.940 qpair failed and we were unable to recover it. 00:27:24.940 [2024-07-24 18:22:17.849622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.940 [2024-07-24 18:22:17.849631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.940 qpair failed and we were unable to recover it. 00:27:24.940 [2024-07-24 18:22:17.849705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.940 [2024-07-24 18:22:17.849715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.940 qpair failed and we were unable to recover it. 00:27:24.940 [2024-07-24 18:22:17.849858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.940 [2024-07-24 18:22:17.849868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.940 qpair failed and we were unable to recover it. 00:27:24.940 [2024-07-24 18:22:17.849952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.940 [2024-07-24 18:22:17.849962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.940 qpair failed and we were unable to recover it. 00:27:24.940 [2024-07-24 18:22:17.850052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.940 [2024-07-24 18:22:17.850061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.940 qpair failed and we were unable to recover it. 
00:27:24.940 [2024-07-24 18:22:17.850131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.940 [2024-07-24 18:22:17.850140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.940 qpair failed and we were unable to recover it. 00:27:24.940 [2024-07-24 18:22:17.850208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.940 [2024-07-24 18:22:17.850217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.940 qpair failed and we were unable to recover it. 00:27:24.940 [2024-07-24 18:22:17.850308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.940 [2024-07-24 18:22:17.850317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.940 qpair failed and we were unable to recover it. 00:27:24.940 [2024-07-24 18:22:17.850391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.940 [2024-07-24 18:22:17.850402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.940 qpair failed and we were unable to recover it. 00:27:24.940 [2024-07-24 18:22:17.850487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.940 [2024-07-24 18:22:17.850501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.940 qpair failed and we were unable to recover it. 00:27:24.940 [2024-07-24 18:22:17.850569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.940 [2024-07-24 18:22:17.850578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.940 qpair failed and we were unable to recover it. 00:27:24.940 [2024-07-24 18:22:17.850666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.940 [2024-07-24 18:22:17.850676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.940 qpair failed and we were unable to recover it. 00:27:24.940 [2024-07-24 18:22:17.850760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.940 [2024-07-24 18:22:17.850769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.940 qpair failed and we were unable to recover it. 00:27:24.940 [2024-07-24 18:22:17.850866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.940 [2024-07-24 18:22:17.850877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.940 qpair failed and we were unable to recover it. 00:27:24.940 [2024-07-24 18:22:17.850953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.940 [2024-07-24 18:22:17.850965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.940 qpair failed and we were unable to recover it. 
00:27:24.940 [2024-07-24 18:22:17.851041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.940 [2024-07-24 18:22:17.851050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.940 qpair failed and we were unable to recover it. 00:27:24.940 [2024-07-24 18:22:17.851128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.940 [2024-07-24 18:22:17.851139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.940 qpair failed and we were unable to recover it. 00:27:24.940 [2024-07-24 18:22:17.851218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.940 [2024-07-24 18:22:17.851227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.940 qpair failed and we were unable to recover it. 00:27:24.940 [2024-07-24 18:22:17.851296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.940 [2024-07-24 18:22:17.851306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.940 qpair failed and we were unable to recover it. 00:27:24.940 [2024-07-24 18:22:17.851388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.940 [2024-07-24 18:22:17.851399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.940 qpair failed and we were unable to recover it. 00:27:24.940 [2024-07-24 18:22:17.851557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.940 [2024-07-24 18:22:17.851567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.940 qpair failed and we were unable to recover it. 00:27:24.940 [2024-07-24 18:22:17.851634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.940 [2024-07-24 18:22:17.851644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.940 qpair failed and we were unable to recover it. 00:27:24.940 [2024-07-24 18:22:17.851718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.940 [2024-07-24 18:22:17.851727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.940 qpair failed and we were unable to recover it. 00:27:24.940 [2024-07-24 18:22:17.851817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.940 [2024-07-24 18:22:17.851827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.940 qpair failed and we were unable to recover it. 00:27:24.940 [2024-07-24 18:22:17.851901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.940 [2024-07-24 18:22:17.851911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.940 qpair failed and we were unable to recover it. 
00:27:24.940 [2024-07-24 18:22:17.851982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.940 [2024-07-24 18:22:17.851992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.940 qpair failed and we were unable to recover it. 00:27:24.940 [2024-07-24 18:22:17.852132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.940 [2024-07-24 18:22:17.852141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.940 qpair failed and we were unable to recover it. 00:27:24.940 [2024-07-24 18:22:17.852217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.940 [2024-07-24 18:22:17.852227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.940 qpair failed and we were unable to recover it. 00:27:24.940 [2024-07-24 18:22:17.852300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.940 [2024-07-24 18:22:17.852316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.940 qpair failed and we were unable to recover it. 00:27:24.940 [2024-07-24 18:22:17.852384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.940 [2024-07-24 18:22:17.852396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.940 qpair failed and we were unable to recover it. 00:27:24.940 [2024-07-24 18:22:17.852483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.940 [2024-07-24 18:22:17.852497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.940 qpair failed and we were unable to recover it. 00:27:24.940 [2024-07-24 18:22:17.852632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.940 [2024-07-24 18:22:17.852642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.940 qpair failed and we were unable to recover it. 00:27:24.940 [2024-07-24 18:22:17.852731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.940 [2024-07-24 18:22:17.852741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.940 qpair failed and we were unable to recover it. 00:27:24.940 [2024-07-24 18:22:17.852815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.940 [2024-07-24 18:22:17.852824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.940 qpair failed and we were unable to recover it. 00:27:24.940 [2024-07-24 18:22:17.852895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.940 [2024-07-24 18:22:17.852904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.940 qpair failed and we were unable to recover it. 
00:27:24.940 [2024-07-24 18:22:17.852978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.940 [2024-07-24 18:22:17.852987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.940 qpair failed and we were unable to recover it. 00:27:24.940 [2024-07-24 18:22:17.853055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.941 [2024-07-24 18:22:17.853064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.941 qpair failed and we were unable to recover it. 00:27:24.941 [2024-07-24 18:22:17.853140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.941 [2024-07-24 18:22:17.853149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.941 qpair failed and we were unable to recover it. 00:27:24.941 [2024-07-24 18:22:17.853229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.941 [2024-07-24 18:22:17.853239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.941 qpair failed and we were unable to recover it. 00:27:24.941 [2024-07-24 18:22:17.853313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.941 [2024-07-24 18:22:17.853323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.941 qpair failed and we were unable to recover it. 00:27:24.941 [2024-07-24 18:22:17.853425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.941 [2024-07-24 18:22:17.853437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.941 qpair failed and we were unable to recover it. 00:27:24.941 [2024-07-24 18:22:17.853529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.941 [2024-07-24 18:22:17.853539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.941 qpair failed and we were unable to recover it. 00:27:24.941 [2024-07-24 18:22:17.853613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.941 [2024-07-24 18:22:17.853623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.941 qpair failed and we were unable to recover it. 00:27:24.941 [2024-07-24 18:22:17.853702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.941 [2024-07-24 18:22:17.853712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.941 qpair failed and we were unable to recover it. 00:27:24.941 [2024-07-24 18:22:17.853795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.941 [2024-07-24 18:22:17.853804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.941 qpair failed and we were unable to recover it. 
00:27:24.941 [2024-07-24 18:22:17.853876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.941 [2024-07-24 18:22:17.853885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.941 qpair failed and we were unable to recover it. 00:27:24.941 [2024-07-24 18:22:17.853962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.941 [2024-07-24 18:22:17.853972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.941 qpair failed and we were unable to recover it. 00:27:24.941 [2024-07-24 18:22:17.854110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.941 [2024-07-24 18:22:17.854119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.941 qpair failed and we were unable to recover it. 00:27:24.941 [2024-07-24 18:22:17.854188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.941 [2024-07-24 18:22:17.854198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.941 qpair failed and we were unable to recover it. 00:27:24.941 [2024-07-24 18:22:17.854294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.941 [2024-07-24 18:22:17.854303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.941 qpair failed and we were unable to recover it. 00:27:24.941 [2024-07-24 18:22:17.854383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.941 [2024-07-24 18:22:17.854393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.941 qpair failed and we were unable to recover it. 00:27:24.941 [2024-07-24 18:22:17.854464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.941 [2024-07-24 18:22:17.854474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.941 qpair failed and we were unable to recover it. 00:27:24.941 [2024-07-24 18:22:17.854549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.941 [2024-07-24 18:22:17.854558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.941 qpair failed and we were unable to recover it. 00:27:24.941 [2024-07-24 18:22:17.854627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.941 [2024-07-24 18:22:17.854636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.941 qpair failed and we were unable to recover it. 00:27:24.941 [2024-07-24 18:22:17.854718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.941 [2024-07-24 18:22:17.854731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.941 qpair failed and we were unable to recover it. 
00:27:24.941 [2024-07-24 18:22:17.854821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.941 [2024-07-24 18:22:17.854830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.941 qpair failed and we were unable to recover it. 00:27:24.941 [2024-07-24 18:22:17.854913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.941 [2024-07-24 18:22:17.854922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.941 qpair failed and we were unable to recover it. 00:27:24.941 [2024-07-24 18:22:17.855003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.941 [2024-07-24 18:22:17.855012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.941 qpair failed and we were unable to recover it. 00:27:24.941 [2024-07-24 18:22:17.855096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.941 [2024-07-24 18:22:17.855105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.941 qpair failed and we were unable to recover it. 00:27:24.941 [2024-07-24 18:22:17.855181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.941 [2024-07-24 18:22:17.855190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.941 qpair failed and we were unable to recover it. 00:27:24.941 [2024-07-24 18:22:17.855289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.941 [2024-07-24 18:22:17.855299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.941 qpair failed and we were unable to recover it. 00:27:24.941 [2024-07-24 18:22:17.855459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.941 [2024-07-24 18:22:17.855468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.941 qpair failed and we were unable to recover it. 00:27:24.941 [2024-07-24 18:22:17.855556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.941 [2024-07-24 18:22:17.855566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.941 qpair failed and we were unable to recover it. 00:27:24.941 [2024-07-24 18:22:17.855709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.941 [2024-07-24 18:22:17.855718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.941 qpair failed and we were unable to recover it. 00:27:24.941 [2024-07-24 18:22:17.855790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.941 [2024-07-24 18:22:17.855800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.941 qpair failed and we were unable to recover it. 
00:27:24.941 [2024-07-24 18:22:17.855874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.941 [2024-07-24 18:22:17.855883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.941 qpair failed and we were unable to recover it. 00:27:24.941 [2024-07-24 18:22:17.856054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.941 [2024-07-24 18:22:17.856064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.941 qpair failed and we were unable to recover it. 00:27:24.941 [2024-07-24 18:22:17.856169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.941 [2024-07-24 18:22:17.856178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.941 qpair failed and we were unable to recover it. 00:27:24.941 [2024-07-24 18:22:17.856264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.941 [2024-07-24 18:22:17.856275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.941 qpair failed and we were unable to recover it. 00:27:24.941 [2024-07-24 18:22:17.856368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.941 [2024-07-24 18:22:17.856376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.941 qpair failed and we were unable to recover it. 00:27:24.941 [2024-07-24 18:22:17.856456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.941 [2024-07-24 18:22:17.856465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.941 qpair failed and we were unable to recover it. 00:27:24.941 [2024-07-24 18:22:17.856552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.941 [2024-07-24 18:22:17.856562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.941 qpair failed and we were unable to recover it. 00:27:24.941 [2024-07-24 18:22:17.856641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.941 [2024-07-24 18:22:17.856650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.941 qpair failed and we were unable to recover it. 00:27:24.941 [2024-07-24 18:22:17.856724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.941 [2024-07-24 18:22:17.856734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.941 qpair failed and we were unable to recover it. 00:27:24.941 [2024-07-24 18:22:17.856873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.941 [2024-07-24 18:22:17.856882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.941 qpair failed and we were unable to recover it. 
00:27:24.941 [2024-07-24 18:22:17.856951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.941 [2024-07-24 18:22:17.856961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.941 qpair failed and we were unable to recover it. 00:27:24.941 [2024-07-24 18:22:17.857034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.941 [2024-07-24 18:22:17.857043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.941 qpair failed and we were unable to recover it. 00:27:24.941 [2024-07-24 18:22:17.857189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.941 [2024-07-24 18:22:17.857198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.941 qpair failed and we were unable to recover it. 00:27:24.941 [2024-07-24 18:22:17.857296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.942 [2024-07-24 18:22:17.857306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.942 qpair failed and we were unable to recover it. 00:27:24.942 [2024-07-24 18:22:17.857399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.942 [2024-07-24 18:22:17.857408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.942 qpair failed and we were unable to recover it. 00:27:24.942 [2024-07-24 18:22:17.857478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.942 [2024-07-24 18:22:17.857488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.942 qpair failed and we were unable to recover it. 00:27:24.942 [2024-07-24 18:22:17.857578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.942 [2024-07-24 18:22:17.857588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.942 qpair failed and we were unable to recover it. 00:27:24.942 [2024-07-24 18:22:17.857697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.942 [2024-07-24 18:22:17.857706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.942 qpair failed and we were unable to recover it. 00:27:24.942 [2024-07-24 18:22:17.857777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.942 [2024-07-24 18:22:17.857787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.942 qpair failed and we were unable to recover it. 00:27:24.942 [2024-07-24 18:22:17.857870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.942 [2024-07-24 18:22:17.857879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.942 qpair failed and we were unable to recover it. 
00:27:24.942 [2024-07-24 18:22:17.857967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.942 [2024-07-24 18:22:17.857976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.942 qpair failed and we were unable to recover it. 00:27:24.942 [2024-07-24 18:22:17.858065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.942 [2024-07-24 18:22:17.858075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.942 qpair failed and we were unable to recover it. 00:27:24.942 [2024-07-24 18:22:17.858144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.942 [2024-07-24 18:22:17.858153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.942 qpair failed and we were unable to recover it. 00:27:24.942 [2024-07-24 18:22:17.858229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.942 [2024-07-24 18:22:17.858239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.942 qpair failed and we were unable to recover it. 00:27:24.942 [2024-07-24 18:22:17.858314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.942 [2024-07-24 18:22:17.858324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.942 qpair failed and we were unable to recover it. 00:27:24.942 [2024-07-24 18:22:17.858394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.942 [2024-07-24 18:22:17.858403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.942 qpair failed and we were unable to recover it. 00:27:24.942 [2024-07-24 18:22:17.858484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.942 [2024-07-24 18:22:17.858500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.942 qpair failed and we were unable to recover it. 00:27:24.942 [2024-07-24 18:22:17.858579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.942 [2024-07-24 18:22:17.858588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.942 qpair failed and we were unable to recover it. 00:27:24.942 [2024-07-24 18:22:17.858657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.942 [2024-07-24 18:22:17.858666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.942 qpair failed and we were unable to recover it. 00:27:24.942 [2024-07-24 18:22:17.858813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.942 [2024-07-24 18:22:17.858825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.942 qpair failed and we were unable to recover it. 
00:27:24.942 [2024-07-24 18:22:17.858970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.942 [2024-07-24 18:22:17.858979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.942 qpair failed and we were unable to recover it. 00:27:24.942 [2024-07-24 18:22:17.859049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.942 [2024-07-24 18:22:17.859059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.942 qpair failed and we were unable to recover it. 00:27:24.942 [2024-07-24 18:22:17.859131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.942 [2024-07-24 18:22:17.859141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.942 qpair failed and we were unable to recover it. 00:27:24.942 [2024-07-24 18:22:17.859218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.942 [2024-07-24 18:22:17.859227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.942 qpair failed and we were unable to recover it. 00:27:24.942 [2024-07-24 18:22:17.859300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.942 [2024-07-24 18:22:17.859310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.942 qpair failed and we were unable to recover it. 00:27:24.942 [2024-07-24 18:22:17.859402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.942 [2024-07-24 18:22:17.859412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.942 qpair failed and we were unable to recover it. 00:27:24.942 [2024-07-24 18:22:17.859482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.942 [2024-07-24 18:22:17.859498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.942 qpair failed and we were unable to recover it. 00:27:24.942 [2024-07-24 18:22:17.859571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.942 [2024-07-24 18:22:17.859581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.942 qpair failed and we were unable to recover it. 00:27:24.942 [2024-07-24 18:22:17.859651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.942 [2024-07-24 18:22:17.859661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.942 qpair failed and we were unable to recover it. 00:27:24.942 [2024-07-24 18:22:17.859747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.942 [2024-07-24 18:22:17.859756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.942 qpair failed and we were unable to recover it. 
00:27:24.942 [2024-07-24 18:22:17.859833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.942 [2024-07-24 18:22:17.859844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.942 qpair failed and we were unable to recover it. 00:27:24.942 [2024-07-24 18:22:17.859914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.942 [2024-07-24 18:22:17.859924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.942 qpair failed and we were unable to recover it. 00:27:24.942 [2024-07-24 18:22:17.860003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.942 [2024-07-24 18:22:17.860013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.942 qpair failed and we were unable to recover it. 00:27:24.942 [2024-07-24 18:22:17.860095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.942 [2024-07-24 18:22:17.860104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.942 qpair failed and we were unable to recover it. 00:27:24.942 [2024-07-24 18:22:17.860197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.942 [2024-07-24 18:22:17.860208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.942 qpair failed and we were unable to recover it. 00:27:24.942 [2024-07-24 18:22:17.860294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.942 [2024-07-24 18:22:17.860304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.942 qpair failed and we were unable to recover it. 00:27:24.942 [2024-07-24 18:22:17.860375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.942 [2024-07-24 18:22:17.860385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.942 qpair failed and we were unable to recover it. 00:27:24.942 [2024-07-24 18:22:17.860460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.942 [2024-07-24 18:22:17.860470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.942 qpair failed and we were unable to recover it. 00:27:24.942 [2024-07-24 18:22:17.860547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.942 [2024-07-24 18:22:17.860557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.942 qpair failed and we were unable to recover it. 00:27:24.942 [2024-07-24 18:22:17.860644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.942 [2024-07-24 18:22:17.860654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.942 qpair failed and we were unable to recover it. 
00:27:24.942 [2024-07-24 18:22:17.860793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.942 [2024-07-24 18:22:17.860802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.942 qpair failed and we were unable to recover it. 00:27:24.942 [2024-07-24 18:22:17.860876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.942 [2024-07-24 18:22:17.860886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.942 qpair failed and we were unable to recover it. 00:27:24.942 [2024-07-24 18:22:17.861036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.942 [2024-07-24 18:22:17.861046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.942 qpair failed and we were unable to recover it. 00:27:24.942 [2024-07-24 18:22:17.861123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.942 [2024-07-24 18:22:17.861132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.942 qpair failed and we were unable to recover it. 00:27:24.942 [2024-07-24 18:22:17.861276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.942 [2024-07-24 18:22:17.861286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.942 qpair failed and we were unable to recover it. 00:27:24.942 [2024-07-24 18:22:17.861438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.942 [2024-07-24 18:22:17.861448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.942 qpair failed and we were unable to recover it. 00:27:24.942 [2024-07-24 18:22:17.861528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.942 [2024-07-24 18:22:17.861539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.942 qpair failed and we were unable to recover it. 00:27:24.943 [2024-07-24 18:22:17.861633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.943 [2024-07-24 18:22:17.861643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.943 qpair failed and we were unable to recover it. 00:27:24.943 [2024-07-24 18:22:17.861723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.943 [2024-07-24 18:22:17.861745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.943 qpair failed and we were unable to recover it. 00:27:24.943 [2024-07-24 18:22:17.861828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.943 [2024-07-24 18:22:17.861837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.943 qpair failed and we were unable to recover it. 
00:27:24.943 [2024-07-24 18:22:17.861915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.943 [2024-07-24 18:22:17.861926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.943 qpair failed and we were unable to recover it. 00:27:24.943 [2024-07-24 18:22:17.862009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.943 [2024-07-24 18:22:17.862020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.943 qpair failed and we were unable to recover it. 00:27:24.943 [2024-07-24 18:22:17.862094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.943 [2024-07-24 18:22:17.862104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.943 qpair failed and we were unable to recover it. 00:27:24.943 [2024-07-24 18:22:17.862197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.943 [2024-07-24 18:22:17.862207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.943 qpair failed and we were unable to recover it. 00:27:24.943 [2024-07-24 18:22:17.862351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.943 [2024-07-24 18:22:17.862361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.943 qpair failed and we were unable to recover it. 00:27:24.943 [2024-07-24 18:22:17.862435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.943 [2024-07-24 18:22:17.862445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.943 qpair failed and we were unable to recover it. 00:27:24.943 [2024-07-24 18:22:17.862522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.943 [2024-07-24 18:22:17.862533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.943 qpair failed and we were unable to recover it. 00:27:24.943 [2024-07-24 18:22:17.862607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.943 [2024-07-24 18:22:17.862617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.943 qpair failed and we were unable to recover it. 00:27:24.943 [2024-07-24 18:22:17.862717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.943 [2024-07-24 18:22:17.862726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.943 qpair failed and we were unable to recover it. 00:27:24.943 [2024-07-24 18:22:17.862799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.943 [2024-07-24 18:22:17.862811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.943 qpair failed and we were unable to recover it. 
00:27:24.943 [2024-07-24 18:22:17.862891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.943 [2024-07-24 18:22:17.862901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.943 qpair failed and we were unable to recover it. 00:27:24.943 [2024-07-24 18:22:17.862990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.943 [2024-07-24 18:22:17.863001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.943 qpair failed and we were unable to recover it. 00:27:24.943 [2024-07-24 18:22:17.863077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.943 [2024-07-24 18:22:17.863088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.943 qpair failed and we were unable to recover it. 00:27:24.943 [2024-07-24 18:22:17.863175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.943 [2024-07-24 18:22:17.863185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.943 qpair failed and we were unable to recover it. 00:27:24.943 [2024-07-24 18:22:17.863269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.943 [2024-07-24 18:22:17.863280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.943 qpair failed and we were unable to recover it. 00:27:24.943 [2024-07-24 18:22:17.863374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.943 [2024-07-24 18:22:17.863384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.943 qpair failed and we were unable to recover it. 00:27:24.943 [2024-07-24 18:22:17.863460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.943 [2024-07-24 18:22:17.863470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.943 qpair failed and we were unable to recover it. 00:27:24.943 [2024-07-24 18:22:17.863569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.943 [2024-07-24 18:22:17.863581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.943 qpair failed and we were unable to recover it. 00:27:24.943 [2024-07-24 18:22:17.863663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.943 [2024-07-24 18:22:17.863672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.943 qpair failed and we were unable to recover it. 00:27:24.943 [2024-07-24 18:22:17.863745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.943 [2024-07-24 18:22:17.863754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.943 qpair failed and we were unable to recover it. 
00:27:24.943 [2024-07-24 18:22:17.863828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.943 [2024-07-24 18:22:17.863838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.943 qpair failed and we were unable to recover it. 00:27:24.943 [2024-07-24 18:22:17.863921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.943 [2024-07-24 18:22:17.863931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.943 qpair failed and we were unable to recover it. 00:27:24.943 [2024-07-24 18:22:17.864005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.943 [2024-07-24 18:22:17.864015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.943 qpair failed and we were unable to recover it. 00:27:24.943 [2024-07-24 18:22:17.864135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.943 [2024-07-24 18:22:17.864145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.943 qpair failed and we were unable to recover it. 00:27:24.943 [2024-07-24 18:22:17.864242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.943 [2024-07-24 18:22:17.864252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.943 qpair failed and we were unable to recover it. 00:27:24.943 [2024-07-24 18:22:17.864342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.943 [2024-07-24 18:22:17.864353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.943 qpair failed and we were unable to recover it. 00:27:24.943 [2024-07-24 18:22:17.864428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.943 [2024-07-24 18:22:17.864438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.943 qpair failed and we were unable to recover it. 00:27:24.943 [2024-07-24 18:22:17.864586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.943 [2024-07-24 18:22:17.864597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.943 qpair failed and we were unable to recover it. 00:27:24.943 [2024-07-24 18:22:17.864812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.943 [2024-07-24 18:22:17.864822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.943 qpair failed and we were unable to recover it. 00:27:24.943 [2024-07-24 18:22:17.864896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.943 [2024-07-24 18:22:17.864906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.943 qpair failed and we were unable to recover it. 
00:27:24.943 [2024-07-24 18:22:17.865111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.943 [2024-07-24 18:22:17.865122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.943 qpair failed and we were unable to recover it. 00:27:24.943 [2024-07-24 18:22:17.865192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.943 [2024-07-24 18:22:17.865202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.943 qpair failed and we were unable to recover it. 00:27:24.943 [2024-07-24 18:22:17.865280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.943 [2024-07-24 18:22:17.865289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.944 qpair failed and we were unable to recover it. 00:27:24.944 [2024-07-24 18:22:17.865362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.944 [2024-07-24 18:22:17.865373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.944 qpair failed and we were unable to recover it. 00:27:24.944 [2024-07-24 18:22:17.865540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.944 [2024-07-24 18:22:17.865551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.944 qpair failed and we were unable to recover it. 00:27:24.944 [2024-07-24 18:22:17.865657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.944 [2024-07-24 18:22:17.865667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.944 qpair failed and we were unable to recover it. 00:27:24.944 [2024-07-24 18:22:17.865744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.944 [2024-07-24 18:22:17.865754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.944 qpair failed and we were unable to recover it. 00:27:24.944 [2024-07-24 18:22:17.865842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.944 [2024-07-24 18:22:17.865853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.944 qpair failed and we were unable to recover it. 00:27:24.944 [2024-07-24 18:22:17.865953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.944 [2024-07-24 18:22:17.865963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.944 qpair failed and we were unable to recover it. 00:27:24.944 [2024-07-24 18:22:17.866049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.944 [2024-07-24 18:22:17.866059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.944 qpair failed and we were unable to recover it. 
00:27:24.944 [2024-07-24 18:22:17.866216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.944 [2024-07-24 18:22:17.866227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.944 qpair failed and we were unable to recover it. 00:27:24.944 [2024-07-24 18:22:17.866320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.944 [2024-07-24 18:22:17.866330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.944 qpair failed and we were unable to recover it. 00:27:24.944 [2024-07-24 18:22:17.866417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.944 [2024-07-24 18:22:17.866427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.944 qpair failed and we were unable to recover it. 00:27:24.944 [2024-07-24 18:22:17.866510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.944 [2024-07-24 18:22:17.866521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.944 qpair failed and we were unable to recover it. 00:27:24.944 [2024-07-24 18:22:17.866596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.944 [2024-07-24 18:22:17.866608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.944 qpair failed and we were unable to recover it. 00:27:24.944 [2024-07-24 18:22:17.866745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.944 [2024-07-24 18:22:17.866755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.944 qpair failed and we were unable to recover it. 00:27:24.944 [2024-07-24 18:22:17.866830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.944 [2024-07-24 18:22:17.866840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.944 qpair failed and we were unable to recover it. 00:27:24.944 [2024-07-24 18:22:17.866941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.944 [2024-07-24 18:22:17.866951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.944 qpair failed and we were unable to recover it. 00:27:24.944 [2024-07-24 18:22:17.867086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.944 [2024-07-24 18:22:17.867096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.944 qpair failed and we were unable to recover it. 00:27:24.944 [2024-07-24 18:22:17.867168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.944 [2024-07-24 18:22:17.867180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.944 qpair failed and we were unable to recover it. 
00:27:24.944 [2024-07-24 18:22:17.867266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.944 [2024-07-24 18:22:17.867276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.944 qpair failed and we were unable to recover it.
00:27:24.944 [2024-07-24 18:22:17.867347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.944 [2024-07-24 18:22:17.867357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.944 qpair failed and we were unable to recover it.
00:27:24.944 [2024-07-24 18:22:17.867425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.944 [2024-07-24 18:22:17.867434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.944 qpair failed and we were unable to recover it.
00:27:24.944 [2024-07-24 18:22:17.867575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.944 [2024-07-24 18:22:17.867585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.944 qpair failed and we were unable to recover it.
00:27:24.944 [2024-07-24 18:22:17.867680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.944 [2024-07-24 18:22:17.867691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.944 qpair failed and we were unable to recover it.
00:27:24.944 [2024-07-24 18:22:17.867855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.944 [2024-07-24 18:22:17.867865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.944 qpair failed and we were unable to recover it.
00:27:24.944 [2024-07-24 18:22:17.867939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.944 [2024-07-24 18:22:17.867949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.944 qpair failed and we were unable to recover it.
00:27:24.944 [2024-07-24 18:22:17.868110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.944 [2024-07-24 18:22:17.868120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.944 qpair failed and we were unable to recover it.
00:27:24.944 [2024-07-24 18:22:17.868195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.944 [2024-07-24 18:22:17.868205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.944 qpair failed and we were unable to recover it.
00:27:24.944 [2024-07-24 18:22:17.868345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.944 [2024-07-24 18:22:17.868355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.944 qpair failed and we were unable to recover it.
00:27:24.944 [2024-07-24 18:22:17.868436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.944 [2024-07-24 18:22:17.868447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.944 qpair failed and we were unable to recover it.
00:27:24.944 [2024-07-24 18:22:17.868529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.944 [2024-07-24 18:22:17.868540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.944 qpair failed and we were unable to recover it.
00:27:24.944 [2024-07-24 18:22:17.868620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.944 [2024-07-24 18:22:17.868629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.944 qpair failed and we were unable to recover it.
00:27:24.944 [2024-07-24 18:22:17.868864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.944 [2024-07-24 18:22:17.868875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.944 qpair failed and we were unable to recover it.
00:27:24.944 [2024-07-24 18:22:17.869080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.944 [2024-07-24 18:22:17.869090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.944 qpair failed and we were unable to recover it.
00:27:24.944 [2024-07-24 18:22:17.869299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.944 [2024-07-24 18:22:17.869310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.944 qpair failed and we were unable to recover it.
00:27:24.944 [2024-07-24 18:22:17.869387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.944 [2024-07-24 18:22:17.869397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.944 qpair failed and we were unable to recover it.
00:27:24.944 [2024-07-24 18:22:17.869542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.944 [2024-07-24 18:22:17.869553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.944 qpair failed and we were unable to recover it.
00:27:24.944 [2024-07-24 18:22:17.869704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.944 [2024-07-24 18:22:17.869714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.944 qpair failed and we were unable to recover it.
00:27:24.944 [2024-07-24 18:22:17.869797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.944 [2024-07-24 18:22:17.869807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.944 qpair failed and we were unable to recover it.
00:27:24.944 [2024-07-24 18:22:17.869966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.944 [2024-07-24 18:22:17.869976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.944 qpair failed and we were unable to recover it.
00:27:24.944 [2024-07-24 18:22:17.870075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.944 [2024-07-24 18:22:17.870086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.944 qpair failed and we were unable to recover it.
00:27:24.944 [2024-07-24 18:22:17.870170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.944 [2024-07-24 18:22:17.870180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.944 qpair failed and we were unable to recover it.
00:27:24.945 [2024-07-24 18:22:17.870272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.945 [2024-07-24 18:22:17.870283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.945 qpair failed and we were unable to recover it.
00:27:24.945 [2024-07-24 18:22:17.870423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.945 [2024-07-24 18:22:17.870434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.945 qpair failed and we were unable to recover it.
00:27:24.945 [2024-07-24 18:22:17.870577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.945 [2024-07-24 18:22:17.870588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.945 qpair failed and we were unable to recover it.
00:27:24.945 [2024-07-24 18:22:17.870689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.945 [2024-07-24 18:22:17.870710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7834000b90 with addr=10.0.0.2, port=4420
00:27:24.945 qpair failed and we were unable to recover it.
00:27:24.945 [2024-07-24 18:22:17.870798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.945 [2024-07-24 18:22:17.870814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7834000b90 with addr=10.0.0.2, port=4420
00:27:24.945 qpair failed and we were unable to recover it.
00:27:24.945 [2024-07-24 18:22:17.870896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.945 [2024-07-24 18:22:17.870911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7834000b90 with addr=10.0.0.2, port=4420
00:27:24.945 qpair failed and we were unable to recover it.
00:27:24.945 [2024-07-24 18:22:17.871008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.945 [2024-07-24 18:22:17.871023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7834000b90 with addr=10.0.0.2, port=4420
00:27:24.945 qpair failed and we were unable to recover it.
00:27:24.945 [2024-07-24 18:22:17.871193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.945 [2024-07-24 18:22:17.871208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7834000b90 with addr=10.0.0.2, port=4420
00:27:24.945 qpair failed and we were unable to recover it.
00:27:24.945 [2024-07-24 18:22:17.871301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.945 [2024-07-24 18:22:17.871315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7834000b90 with addr=10.0.0.2, port=4420
00:27:24.945 qpair failed and we were unable to recover it.
00:27:24.945 [2024-07-24 18:22:17.871397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.945 [2024-07-24 18:22:17.871410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.945 qpair failed and we were unable to recover it.
00:27:24.945 [2024-07-24 18:22:17.871487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.945 [2024-07-24 18:22:17.871527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.945 qpair failed and we were unable to recover it.
00:27:24.945 [2024-07-24 18:22:17.871614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.945 [2024-07-24 18:22:17.871626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.945 qpair failed and we were unable to recover it.
00:27:24.945 [2024-07-24 18:22:17.871711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.945 [2024-07-24 18:22:17.871721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.945 qpair failed and we were unable to recover it.
00:27:24.945 [2024-07-24 18:22:17.871796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.945 [2024-07-24 18:22:17.871807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.945 qpair failed and we were unable to recover it.
00:27:24.945 [2024-07-24 18:22:17.871949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.945 [2024-07-24 18:22:17.871960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.945 qpair failed and we were unable to recover it.
00:27:24.945 [2024-07-24 18:22:17.872048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.945 [2024-07-24 18:22:17.872059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.945 qpair failed and we were unable to recover it.
00:27:24.945 [2024-07-24 18:22:17.872131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.945 [2024-07-24 18:22:17.872144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.945 qpair failed and we were unable to recover it.
00:27:24.945 [2024-07-24 18:22:17.872228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.945 [2024-07-24 18:22:17.872239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.945 qpair failed and we were unable to recover it.
00:27:24.945 [2024-07-24 18:22:17.872312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.945 [2024-07-24 18:22:17.872322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.945 qpair failed and we were unable to recover it.
00:27:24.945 [2024-07-24 18:22:17.872395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.945 [2024-07-24 18:22:17.872405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.945 qpair failed and we were unable to recover it.
00:27:24.945 [2024-07-24 18:22:17.872560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.945 [2024-07-24 18:22:17.872572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.945 qpair failed and we were unable to recover it.
00:27:24.945 [2024-07-24 18:22:17.872659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.945 [2024-07-24 18:22:17.872669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.945 qpair failed and we were unable to recover it.
00:27:24.945 [2024-07-24 18:22:17.872758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.945 [2024-07-24 18:22:17.872769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.945 qpair failed and we were unable to recover it.
00:27:24.945 [2024-07-24 18:22:17.872945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.945 [2024-07-24 18:22:17.872955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.945 qpair failed and we were unable to recover it.
00:27:24.945 [2024-07-24 18:22:17.873031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.945 [2024-07-24 18:22:17.873042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.945 qpair failed and we were unable to recover it.
00:27:24.945 [2024-07-24 18:22:17.873130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.945 [2024-07-24 18:22:17.873140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.945 qpair failed and we were unable to recover it.
00:27:24.945 [2024-07-24 18:22:17.873308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.945 [2024-07-24 18:22:17.873319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.945 qpair failed and we were unable to recover it.
00:27:24.945 [2024-07-24 18:22:17.873391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.945 [2024-07-24 18:22:17.873401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.945 qpair failed and we were unable to recover it.
00:27:24.945 [2024-07-24 18:22:17.873550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.945 [2024-07-24 18:22:17.873561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.945 qpair failed and we were unable to recover it.
00:27:24.945 [2024-07-24 18:22:17.873671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.945 [2024-07-24 18:22:17.873681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.945 qpair failed and we were unable to recover it.
00:27:24.945 [2024-07-24 18:22:17.873765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.945 [2024-07-24 18:22:17.873775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.945 qpair failed and we were unable to recover it.
00:27:24.945 [2024-07-24 18:22:17.873857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.945 [2024-07-24 18:22:17.873868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.945 qpair failed and we were unable to recover it.
00:27:24.945 [2024-07-24 18:22:17.873948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.945 [2024-07-24 18:22:17.873958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.945 qpair failed and we were unable to recover it.
00:27:24.945 [2024-07-24 18:22:17.874104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.945 [2024-07-24 18:22:17.874115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.945 qpair failed and we were unable to recover it.
00:27:24.945 [2024-07-24 18:22:17.874193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.945 [2024-07-24 18:22:17.874203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.945 qpair failed and we were unable to recover it.
00:27:24.945 [2024-07-24 18:22:17.874352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.945 [2024-07-24 18:22:17.874363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.945 qpair failed and we were unable to recover it.
00:27:24.945 [2024-07-24 18:22:17.874505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.945 [2024-07-24 18:22:17.874516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.945 qpair failed and we were unable to recover it.
00:27:24.945 [2024-07-24 18:22:17.874588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.945 [2024-07-24 18:22:17.874598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.945 qpair failed and we were unable to recover it.
00:27:24.945 [2024-07-24 18:22:17.874757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.945 [2024-07-24 18:22:17.874768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.945 qpair failed and we were unable to recover it.
00:27:24.945 [2024-07-24 18:22:17.874928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.945 [2024-07-24 18:22:17.874939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.945 qpair failed and we were unable to recover it.
00:27:24.945 [2024-07-24 18:22:17.875024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.945 [2024-07-24 18:22:17.875034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.946 qpair failed and we were unable to recover it.
00:27:24.946 [2024-07-24 18:22:17.875109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.946 [2024-07-24 18:22:17.875120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.946 qpair failed and we were unable to recover it.
00:27:24.946 [2024-07-24 18:22:17.875207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.946 [2024-07-24 18:22:17.875217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.946 qpair failed and we were unable to recover it.
00:27:24.946 [2024-07-24 18:22:17.875293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.946 [2024-07-24 18:22:17.875303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.946 qpair failed and we were unable to recover it.
00:27:24.946 [2024-07-24 18:22:17.875391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.946 [2024-07-24 18:22:17.875402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.946 qpair failed and we were unable to recover it.
00:27:24.946 [2024-07-24 18:22:17.875468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.946 [2024-07-24 18:22:17.875478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.946 qpair failed and we were unable to recover it.
00:27:24.946 [2024-07-24 18:22:17.875555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.946 [2024-07-24 18:22:17.875565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.946 qpair failed and we were unable to recover it.
00:27:24.946 [2024-07-24 18:22:17.875657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.946 [2024-07-24 18:22:17.875668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.946 qpair failed and we were unable to recover it.
00:27:24.946 [2024-07-24 18:22:17.875820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.946 [2024-07-24 18:22:17.875831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.946 qpair failed and we were unable to recover it.
00:27:24.946 [2024-07-24 18:22:17.875904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.946 [2024-07-24 18:22:17.875915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.946 qpair failed and we were unable to recover it.
00:27:24.946 [2024-07-24 18:22:17.876125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.946 [2024-07-24 18:22:17.876135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.946 qpair failed and we were unable to recover it.
00:27:24.946 [2024-07-24 18:22:17.876234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.946 [2024-07-24 18:22:17.876245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.946 qpair failed and we were unable to recover it.
00:27:24.946 [2024-07-24 18:22:17.876340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.946 [2024-07-24 18:22:17.876350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.946 qpair failed and we were unable to recover it.
00:27:24.946 [2024-07-24 18:22:17.876459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.946 [2024-07-24 18:22:17.876470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.946 qpair failed and we were unable to recover it.
00:27:24.946 [2024-07-24 18:22:17.876564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.946 [2024-07-24 18:22:17.876575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.946 qpair failed and we were unable to recover it.
00:27:24.946 [2024-07-24 18:22:17.876652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.946 [2024-07-24 18:22:17.876662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.946 qpair failed and we were unable to recover it.
00:27:24.946 [2024-07-24 18:22:17.876753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.946 [2024-07-24 18:22:17.876766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.946 qpair failed and we were unable to recover it.
00:27:24.946 [2024-07-24 18:22:17.876921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.946 [2024-07-24 18:22:17.876932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.946 qpair failed and we were unable to recover it.
00:27:24.946 [2024-07-24 18:22:17.877022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.946 [2024-07-24 18:22:17.877032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.946 qpair failed and we were unable to recover it.
00:27:24.946 [2024-07-24 18:22:17.877182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.946 [2024-07-24 18:22:17.877193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.946 qpair failed and we were unable to recover it.
00:27:24.946 [2024-07-24 18:22:17.877369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.946 [2024-07-24 18:22:17.877380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.946 qpair failed and we were unable to recover it.
00:27:24.946 [2024-07-24 18:22:17.877464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.946 [2024-07-24 18:22:17.877475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.946 qpair failed and we were unable to recover it.
00:27:24.946 [2024-07-24 18:22:17.877572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.946 [2024-07-24 18:22:17.877582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.946 qpair failed and we were unable to recover it.
00:27:24.946 [2024-07-24 18:22:17.877656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.946 [2024-07-24 18:22:17.877667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.946 qpair failed and we were unable to recover it.
00:27:24.946 [2024-07-24 18:22:17.877740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.946 [2024-07-24 18:22:17.877751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.946 qpair failed and we were unable to recover it.
00:27:24.946 [2024-07-24 18:22:17.877919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.946 [2024-07-24 18:22:17.877929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.946 qpair failed and we were unable to recover it.
00:27:24.946 [2024-07-24 18:22:17.877995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.946 [2024-07-24 18:22:17.878005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.946 qpair failed and we were unable to recover it.
00:27:24.946 [2024-07-24 18:22:17.878243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.946 [2024-07-24 18:22:17.878254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.946 qpair failed and we were unable to recover it.
00:27:24.946 [2024-07-24 18:22:17.878406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.946 [2024-07-24 18:22:17.878417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.946 qpair failed and we were unable to recover it.
00:27:24.946 [2024-07-24 18:22:17.878506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.946 [2024-07-24 18:22:17.878517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.946 qpair failed and we were unable to recover it.
00:27:24.946 [2024-07-24 18:22:17.878601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.946 [2024-07-24 18:22:17.878611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.946 qpair failed and we were unable to recover it.
00:27:24.946 [2024-07-24 18:22:17.878770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.946 [2024-07-24 18:22:17.878806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.946 qpair failed and we were unable to recover it.
00:27:24.946 [2024-07-24 18:22:17.879066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.946 [2024-07-24 18:22:17.879096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.946 qpair failed and we were unable to recover it.
00:27:24.946 [2024-07-24 18:22:17.879361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.946 [2024-07-24 18:22:17.879371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.946 qpair failed and we were unable to recover it.
00:27:24.946 [2024-07-24 18:22:17.879523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.946 [2024-07-24 18:22:17.879534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.946 qpair failed and we were unable to recover it.
00:27:24.946 [2024-07-24 18:22:17.879743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.946 [2024-07-24 18:22:17.879754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.946 qpair failed and we were unable to recover it.
00:27:24.946 [2024-07-24 18:22:17.879910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.946 [2024-07-24 18:22:17.879920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.946 qpair failed and we were unable to recover it.
00:27:24.946 [2024-07-24 18:22:17.880015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.946 [2024-07-24 18:22:17.880026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.946 qpair failed and we were unable to recover it.
00:27:24.946 [2024-07-24 18:22:17.880111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.946 [2024-07-24 18:22:17.880122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.946 qpair failed and we were unable to recover it.
00:27:24.946 [2024-07-24 18:22:17.880215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.946 [2024-07-24 18:22:17.880225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.946 qpair failed and we were unable to recover it.
00:27:24.946 [2024-07-24 18:22:17.880309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.946 [2024-07-24 18:22:17.880320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.946 qpair failed and we were unable to recover it.
00:27:24.946 [2024-07-24 18:22:17.880421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.946 [2024-07-24 18:22:17.880432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.946 qpair failed and we were unable to recover it.
00:27:24.946 [2024-07-24 18:22:17.880602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.946 [2024-07-24 18:22:17.880635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.947 qpair failed and we were unable to recover it.
00:27:24.947 [2024-07-24 18:22:17.880793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.947 [2024-07-24 18:22:17.880839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:24.947 qpair failed and we were unable to recover it.
00:27:24.947 [2024-07-24 18:22:17.881061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.947 [2024-07-24 18:22:17.881094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:24.947 qpair failed and we were unable to recover it.
00:27:24.947 [2024-07-24 18:22:17.881358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.947 [2024-07-24 18:22:17.881374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:24.947 qpair failed and we were unable to recover it.
00:27:24.947 [2024-07-24 18:22:17.881563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.947 [2024-07-24 18:22:17.881579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:24.947 qpair failed and we were unable to recover it.
00:27:24.947 [2024-07-24 18:22:17.881746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.947 [2024-07-24 18:22:17.881761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:24.947 qpair failed and we were unable to recover it.
00:27:24.947 [2024-07-24 18:22:17.882007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.947 [2024-07-24 18:22:17.882022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:24.947 qpair failed and we were unable to recover it.
00:27:24.947 [2024-07-24 18:22:17.882173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.947 [2024-07-24 18:22:17.882187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:24.947 qpair failed and we were unable to recover it.
00:27:24.947 [2024-07-24 18:22:17.882305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.947 [2024-07-24 18:22:17.882319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:24.947 qpair failed and we were unable to recover it.
00:27:24.947 [2024-07-24 18:22:17.882484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.947 [2024-07-24 18:22:17.882506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:24.947 qpair failed and we were unable to recover it.
00:27:24.947 [2024-07-24 18:22:17.882677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.947 [2024-07-24 18:22:17.882691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:24.947 qpair failed and we were unable to recover it.
00:27:24.947 [2024-07-24 18:22:17.882845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.947 [2024-07-24 18:22:17.882859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:24.947 qpair failed and we were unable to recover it.
00:27:24.947 [2024-07-24 18:22:17.883015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.947 [2024-07-24 18:22:17.883027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.947 qpair failed and we were unable to recover it.
00:27:24.947 [2024-07-24 18:22:17.883189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.947 [2024-07-24 18:22:17.883200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.947 qpair failed and we were unable to recover it.
00:27:24.947 [2024-07-24 18:22:17.883359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.947 [2024-07-24 18:22:17.883369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.947 qpair failed and we were unable to recover it.
00:27:24.947 [2024-07-24 18:22:17.883455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.947 [2024-07-24 18:22:17.883466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.947 qpair failed and we were unable to recover it.
00:27:24.947 [2024-07-24 18:22:17.883619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.947 [2024-07-24 18:22:17.883630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.947 qpair failed and we were unable to recover it.
00:27:24.947 [2024-07-24 18:22:17.883773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.947 [2024-07-24 18:22:17.883783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.947 qpair failed and we were unable to recover it.
00:27:24.947 [2024-07-24 18:22:17.883951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.947 [2024-07-24 18:22:17.883962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.947 qpair failed and we were unable to recover it.
00:27:24.947 [2024-07-24 18:22:17.884098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.947 [2024-07-24 18:22:17.884108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.947 qpair failed and we were unable to recover it.
00:27:24.947 [2024-07-24 18:22:17.884209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.947 [2024-07-24 18:22:17.884218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.947 qpair failed and we were unable to recover it.
00:27:24.947 [2024-07-24 18:22:17.884313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.947 [2024-07-24 18:22:17.884323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.947 qpair failed and we were unable to recover it.
00:27:24.947 [2024-07-24 18:22:17.884489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.947 [2024-07-24 18:22:17.884505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.947 qpair failed and we were unable to recover it.
00:27:24.947 [2024-07-24 18:22:17.884697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.947 [2024-07-24 18:22:17.884706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.947 qpair failed and we were unable to recover it.
00:27:24.947 [2024-07-24 18:22:17.884892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.947 [2024-07-24 18:22:17.884902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.947 qpair failed and we were unable to recover it.
00:27:24.947 [2024-07-24 18:22:17.884989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.947 [2024-07-24 18:22:17.885000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.947 qpair failed and we were unable to recover it.
00:27:24.947 [2024-07-24 18:22:17.885081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.947 [2024-07-24 18:22:17.885091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.947 qpair failed and we were unable to recover it.
00:27:24.947 [2024-07-24 18:22:17.885277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.947 [2024-07-24 18:22:17.885288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.947 qpair failed and we were unable to recover it.
00:27:24.947 [2024-07-24 18:22:17.885462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.947 [2024-07-24 18:22:17.885472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.947 qpair failed and we were unable to recover it.
00:27:24.947 [2024-07-24 18:22:17.885568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.947 [2024-07-24 18:22:17.885578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.947 qpair failed and we were unable to recover it.
00:27:24.947 [2024-07-24 18:22:17.885681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.947 [2024-07-24 18:22:17.885692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.947 qpair failed and we were unable to recover it.
00:27:24.947 [2024-07-24 18:22:17.885766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.947 [2024-07-24 18:22:17.885776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.947 qpair failed and we were unable to recover it.
00:27:24.947 [2024-07-24 18:22:17.885871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.947 [2024-07-24 18:22:17.885882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.947 qpair failed and we were unable to recover it.
00:27:24.947 [2024-07-24 18:22:17.886089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.947 [2024-07-24 18:22:17.886100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.947 qpair failed and we were unable to recover it.
00:27:24.947 [2024-07-24 18:22:17.886175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.947 [2024-07-24 18:22:17.886185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.947 qpair failed and we were unable to recover it.
00:27:24.947 [2024-07-24 18:22:17.886330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.947 [2024-07-24 18:22:17.886341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.947 qpair failed and we were unable to recover it.
00:27:24.947 [2024-07-24 18:22:17.886523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.947 [2024-07-24 18:22:17.886534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.947 qpair failed and we were unable to recover it.
00:27:24.947 [2024-07-24 18:22:17.886625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.947 [2024-07-24 18:22:17.886635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.947 qpair failed and we were unable to recover it.
00:27:24.947 [2024-07-24 18:22:17.886701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.947 [2024-07-24 18:22:17.886711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.947 qpair failed and we were unable to recover it.
00:27:24.947 [2024-07-24 18:22:17.886886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.947 [2024-07-24 18:22:17.886896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.947 qpair failed and we were unable to recover it.
00:27:24.947 [2024-07-24 18:22:17.887063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.947 [2024-07-24 18:22:17.887074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.947 qpair failed and we were unable to recover it.
00:27:24.947 [2024-07-24 18:22:17.887154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.947 [2024-07-24 18:22:17.887166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.947 qpair failed and we were unable to recover it.
00:27:24.947 [2024-07-24 18:22:17.887255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.947 [2024-07-24 18:22:17.887266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.948 qpair failed and we were unable to recover it.
00:27:24.948 [2024-07-24 18:22:17.887338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.948 [2024-07-24 18:22:17.887348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.948 qpair failed and we were unable to recover it.
00:27:24.948 [2024-07-24 18:22:17.887436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.948 [2024-07-24 18:22:17.887446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.948 qpair failed and we were unable to recover it.
00:27:24.948 [2024-07-24 18:22:17.887553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.948 [2024-07-24 18:22:17.887564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.948 qpair failed and we were unable to recover it.
00:27:24.948 [2024-07-24 18:22:17.887659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.948 [2024-07-24 18:22:17.887670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.948 qpair failed and we were unable to recover it.
00:27:24.948 [2024-07-24 18:22:17.887764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.948 [2024-07-24 18:22:17.887775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.948 qpair failed and we were unable to recover it.
00:27:24.948 [2024-07-24 18:22:17.887979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.948 [2024-07-24 18:22:17.887990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.948 qpair failed and we were unable to recover it.
00:27:24.948 [2024-07-24 18:22:17.888139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.948 [2024-07-24 18:22:17.888150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.948 qpair failed and we were unable to recover it.
00:27:24.948 [2024-07-24 18:22:17.888244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.948 [2024-07-24 18:22:17.888255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.948 qpair failed and we were unable to recover it.
00:27:24.948 [2024-07-24 18:22:17.888398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.948 [2024-07-24 18:22:17.888409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.948 qpair failed and we were unable to recover it.
00:27:24.948 [2024-07-24 18:22:17.888511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.948 [2024-07-24 18:22:17.888522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.948 qpair failed and we were unable to recover it.
00:27:24.948 [2024-07-24 18:22:17.888672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.948 [2024-07-24 18:22:17.888683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.948 qpair failed and we were unable to recover it.
00:27:24.948 [2024-07-24 18:22:17.888758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.948 [2024-07-24 18:22:17.888769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.948 qpair failed and we were unable to recover it.
00:27:24.948 [2024-07-24 18:22:17.888847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.948 [2024-07-24 18:22:17.888858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.948 qpair failed and we were unable to recover it.
00:27:24.948 [2024-07-24 18:22:17.889064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.948 [2024-07-24 18:22:17.889075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.948 qpair failed and we were unable to recover it.
00:27:24.948 [2024-07-24 18:22:17.889155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.948 [2024-07-24 18:22:17.889166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.948 qpair failed and we were unable to recover it.
00:27:24.948 [2024-07-24 18:22:17.889397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.948 [2024-07-24 18:22:17.889408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.948 qpair failed and we were unable to recover it.
00:27:24.948 [2024-07-24 18:22:17.889483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.948 [2024-07-24 18:22:17.889499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.948 qpair failed and we were unable to recover it.
00:27:24.948 [2024-07-24 18:22:17.889646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.948 [2024-07-24 18:22:17.889657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.948 qpair failed and we were unable to recover it.
00:27:24.948 [2024-07-24 18:22:17.889727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.948 [2024-07-24 18:22:17.889737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.948 qpair failed and we were unable to recover it.
00:27:24.948 [2024-07-24 18:22:17.889835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.948 [2024-07-24 18:22:17.889846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.948 qpair failed and we were unable to recover it.
00:27:24.948 [2024-07-24 18:22:17.889947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.948 [2024-07-24 18:22:17.889958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.948 qpair failed and we were unable to recover it.
00:27:24.948 [2024-07-24 18:22:17.890120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.948 [2024-07-24 18:22:17.890131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:24.948 qpair failed and we were unable to recover it.
00:27:24.948 [2024-07-24 18:22:17.890230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.948 [2024-07-24 18:22:17.890241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.949 qpair failed and we were unable to recover it. 00:27:24.949 [2024-07-24 18:22:17.890325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.949 [2024-07-24 18:22:17.890335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.949 qpair failed and we were unable to recover it. 00:27:24.949 [2024-07-24 18:22:17.890499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.949 [2024-07-24 18:22:17.890510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.949 qpair failed and we were unable to recover it. 00:27:24.949 [2024-07-24 18:22:17.890678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.949 [2024-07-24 18:22:17.890689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.949 qpair failed and we were unable to recover it. 00:27:24.949 [2024-07-24 18:22:17.890863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.949 [2024-07-24 18:22:17.890873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.949 qpair failed and we were unable to recover it. 00:27:24.949 [2024-07-24 18:22:17.890948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.949 [2024-07-24 18:22:17.890958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.949 qpair failed and we were unable to recover it. 00:27:24.949 [2024-07-24 18:22:17.891051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.949 [2024-07-24 18:22:17.891062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.949 qpair failed and we were unable to recover it. 00:27:24.949 [2024-07-24 18:22:17.891136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.949 [2024-07-24 18:22:17.891147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.949 qpair failed and we were unable to recover it. 00:27:24.949 [2024-07-24 18:22:17.891241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.949 [2024-07-24 18:22:17.891252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.949 qpair failed and we were unable to recover it. 00:27:24.949 [2024-07-24 18:22:17.891344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.949 [2024-07-24 18:22:17.891355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.949 qpair failed and we were unable to recover it. 
00:27:24.949 [2024-07-24 18:22:17.891501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.949 [2024-07-24 18:22:17.891512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.949 qpair failed and we were unable to recover it. 00:27:24.949 [2024-07-24 18:22:17.891597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.949 [2024-07-24 18:22:17.891608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.949 qpair failed and we were unable to recover it. 00:27:24.949 [2024-07-24 18:22:17.891699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.949 [2024-07-24 18:22:17.891709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.949 qpair failed and we were unable to recover it. 00:27:24.949 [2024-07-24 18:22:17.891795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.949 [2024-07-24 18:22:17.891806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.949 qpair failed and we were unable to recover it. 00:27:24.949 [2024-07-24 18:22:17.891884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.949 [2024-07-24 18:22:17.891895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.949 qpair failed and we were unable to recover it. 00:27:24.949 [2024-07-24 18:22:17.891968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.949 [2024-07-24 18:22:17.891978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.949 qpair failed and we were unable to recover it. 00:27:24.949 [2024-07-24 18:22:17.892163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.949 [2024-07-24 18:22:17.892176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.949 qpair failed and we were unable to recover it. 00:27:24.949 [2024-07-24 18:22:17.892341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.949 [2024-07-24 18:22:17.892352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.949 qpair failed and we were unable to recover it. 00:27:24.949 [2024-07-24 18:22:17.892440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.949 [2024-07-24 18:22:17.892451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.949 qpair failed and we were unable to recover it. 00:27:24.949 [2024-07-24 18:22:17.892530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.949 [2024-07-24 18:22:17.892541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.949 qpair failed and we were unable to recover it. 
00:27:24.949 [2024-07-24 18:22:17.892686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.949 [2024-07-24 18:22:17.892697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.949 qpair failed and we were unable to recover it. 00:27:24.949 [2024-07-24 18:22:17.892903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.949 [2024-07-24 18:22:17.892914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.949 qpair failed and we were unable to recover it. 00:27:24.949 [2024-07-24 18:22:17.893077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.949 [2024-07-24 18:22:17.893087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.949 qpair failed and we were unable to recover it. 00:27:24.949 [2024-07-24 18:22:17.893167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.949 [2024-07-24 18:22:17.893177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.949 qpair failed and we were unable to recover it. 00:27:24.949 [2024-07-24 18:22:17.893286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.949 [2024-07-24 18:22:17.893297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.949 qpair failed and we were unable to recover it. 00:27:24.949 [2024-07-24 18:22:17.893403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.949 [2024-07-24 18:22:17.893414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.949 qpair failed and we were unable to recover it. 00:27:24.949 [2024-07-24 18:22:17.893515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.949 [2024-07-24 18:22:17.893526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.949 qpair failed and we were unable to recover it. 00:27:24.949 [2024-07-24 18:22:17.893617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.949 [2024-07-24 18:22:17.893627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.949 qpair failed and we were unable to recover it. 00:27:24.949 [2024-07-24 18:22:17.893719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.949 [2024-07-24 18:22:17.893730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.949 qpair failed and we were unable to recover it. 00:27:24.949 [2024-07-24 18:22:17.893814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.949 [2024-07-24 18:22:17.893825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.949 qpair failed and we were unable to recover it. 
00:27:24.949 [2024-07-24 18:22:17.893913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.949 [2024-07-24 18:22:17.893924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.949 qpair failed and we were unable to recover it. 00:27:24.949 [2024-07-24 18:22:17.893995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.949 [2024-07-24 18:22:17.894005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.949 qpair failed and we were unable to recover it. 00:27:24.949 [2024-07-24 18:22:17.894157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.949 [2024-07-24 18:22:17.894168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.949 qpair failed and we were unable to recover it. 00:27:24.949 [2024-07-24 18:22:17.894257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.949 [2024-07-24 18:22:17.894268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.949 qpair failed and we were unable to recover it. 00:27:24.949 [2024-07-24 18:22:17.894343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.949 [2024-07-24 18:22:17.894353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.949 qpair failed and we were unable to recover it. 00:27:24.949 [2024-07-24 18:22:17.894500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.949 [2024-07-24 18:22:17.894510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.949 qpair failed and we were unable to recover it. 00:27:24.949 [2024-07-24 18:22:17.894613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.949 [2024-07-24 18:22:17.894623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.949 qpair failed and we were unable to recover it. 00:27:24.949 [2024-07-24 18:22:17.894725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.949 [2024-07-24 18:22:17.894736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.950 qpair failed and we were unable to recover it. 00:27:24.950 [2024-07-24 18:22:17.894886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.950 [2024-07-24 18:22:17.894896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.950 qpair failed and we were unable to recover it. 00:27:24.950 [2024-07-24 18:22:17.894978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.950 [2024-07-24 18:22:17.894988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.950 qpair failed and we were unable to recover it. 
00:27:24.950 [2024-07-24 18:22:17.895063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.950 [2024-07-24 18:22:17.895073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.950 qpair failed and we were unable to recover it. 00:27:24.950 [2024-07-24 18:22:17.895161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.950 [2024-07-24 18:22:17.895171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.950 qpair failed and we were unable to recover it. 00:27:24.950 [2024-07-24 18:22:17.895260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.950 [2024-07-24 18:22:17.895270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.950 qpair failed and we were unable to recover it. 00:27:24.950 [2024-07-24 18:22:17.895357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.950 [2024-07-24 18:22:17.895374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.950 qpair failed and we were unable to recover it. 00:27:24.950 [2024-07-24 18:22:17.895481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.950 [2024-07-24 18:22:17.895503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.950 qpair failed and we were unable to recover it. 00:27:24.950 [2024-07-24 18:22:17.895661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.950 [2024-07-24 18:22:17.895676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.950 qpair failed and we were unable to recover it. 00:27:24.950 [2024-07-24 18:22:17.895822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.950 [2024-07-24 18:22:17.895837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.950 qpair failed and we were unable to recover it. 00:27:24.950 [2024-07-24 18:22:17.896008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.950 [2024-07-24 18:22:17.896022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.950 qpair failed and we were unable to recover it. 00:27:24.950 [2024-07-24 18:22:17.896118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.950 [2024-07-24 18:22:17.896133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.950 qpair failed and we were unable to recover it. 00:27:24.950 [2024-07-24 18:22:17.896230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.950 [2024-07-24 18:22:17.896245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.950 qpair failed and we were unable to recover it. 
00:27:24.950 [2024-07-24 18:22:17.896422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.950 [2024-07-24 18:22:17.896437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.950 qpair failed and we were unable to recover it. 00:27:24.950 [2024-07-24 18:22:17.896606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.950 [2024-07-24 18:22:17.896622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.950 qpair failed and we were unable to recover it. 00:27:24.950 [2024-07-24 18:22:17.896737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.950 [2024-07-24 18:22:17.896751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.950 qpair failed and we were unable to recover it. 00:27:24.950 [2024-07-24 18:22:17.896931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.950 [2024-07-24 18:22:17.896946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.950 qpair failed and we were unable to recover it. 00:27:24.950 [2024-07-24 18:22:17.897117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.950 [2024-07-24 18:22:17.897131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.950 qpair failed and we were unable to recover it. 00:27:24.950 [2024-07-24 18:22:17.897243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.950 [2024-07-24 18:22:17.897255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.950 qpair failed and we were unable to recover it. 00:27:24.950 [2024-07-24 18:22:17.897463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.950 [2024-07-24 18:22:17.897473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.950 qpair failed and we were unable to recover it. 00:27:24.950 [2024-07-24 18:22:17.897568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.950 [2024-07-24 18:22:17.897579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.950 qpair failed and we were unable to recover it. 00:27:24.950 [2024-07-24 18:22:17.897652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.950 [2024-07-24 18:22:17.897664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.950 qpair failed and we were unable to recover it. 00:27:24.950 [2024-07-24 18:22:17.897825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.950 [2024-07-24 18:22:17.897835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.950 qpair failed and we were unable to recover it. 
00:27:24.950 [2024-07-24 18:22:17.897996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.950 [2024-07-24 18:22:17.898006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.950 qpair failed and we were unable to recover it. 00:27:24.950 [2024-07-24 18:22:17.898090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.950 [2024-07-24 18:22:17.898100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.950 qpair failed and we were unable to recover it. 00:27:24.950 [2024-07-24 18:22:17.898193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.950 [2024-07-24 18:22:17.898203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.950 qpair failed and we were unable to recover it. 00:27:24.950 [2024-07-24 18:22:17.898347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.950 [2024-07-24 18:22:17.898357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.950 qpair failed and we were unable to recover it. 00:27:24.950 [2024-07-24 18:22:17.898501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.950 [2024-07-24 18:22:17.898512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.950 qpair failed and we were unable to recover it. 00:27:24.950 [2024-07-24 18:22:17.898590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.950 [2024-07-24 18:22:17.898601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.950 qpair failed and we were unable to recover it. 00:27:24.950 [2024-07-24 18:22:17.898749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.950 [2024-07-24 18:22:17.898759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.950 qpair failed and we were unable to recover it. 00:27:24.950 [2024-07-24 18:22:17.898847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.950 [2024-07-24 18:22:17.898857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.950 qpair failed and we were unable to recover it. 00:27:24.950 [2024-07-24 18:22:17.898932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.950 [2024-07-24 18:22:17.898942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.950 qpair failed and we were unable to recover it. 00:27:24.950 [2024-07-24 18:22:17.899036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.950 [2024-07-24 18:22:17.899046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.950 qpair failed and we were unable to recover it. 
00:27:24.950 [2024-07-24 18:22:17.899200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.950 [2024-07-24 18:22:17.899210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.950 qpair failed and we were unable to recover it. 00:27:24.950 [2024-07-24 18:22:17.899361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.950 [2024-07-24 18:22:17.899371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.950 qpair failed and we were unable to recover it. 00:27:24.950 [2024-07-24 18:22:17.899516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.950 [2024-07-24 18:22:17.899527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.950 qpair failed and we were unable to recover it. 00:27:24.950 [2024-07-24 18:22:17.899628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.950 [2024-07-24 18:22:17.899638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.950 qpair failed and we were unable to recover it. 00:27:24.950 [2024-07-24 18:22:17.899727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.950 [2024-07-24 18:22:17.899736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.950 qpair failed and we were unable to recover it. 00:27:24.951 [2024-07-24 18:22:17.899885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.951 [2024-07-24 18:22:17.899895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.951 qpair failed and we were unable to recover it. 00:27:24.951 [2024-07-24 18:22:17.900035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.951 [2024-07-24 18:22:17.900045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.951 qpair failed and we were unable to recover it. 00:27:24.951 [2024-07-24 18:22:17.900126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.951 [2024-07-24 18:22:17.900137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.951 qpair failed and we were unable to recover it. 00:27:24.951 [2024-07-24 18:22:17.900219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.951 [2024-07-24 18:22:17.900229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.951 qpair failed and we were unable to recover it. 00:27:24.951 [2024-07-24 18:22:17.900380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.951 [2024-07-24 18:22:17.900390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.951 qpair failed and we were unable to recover it. 
00:27:24.951 [2024-07-24 18:22:17.900663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.951 [2024-07-24 18:22:17.900694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.951 qpair failed and we were unable to recover it. 00:27:24.951 [2024-07-24 18:22:17.900822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.951 [2024-07-24 18:22:17.900852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.951 qpair failed and we were unable to recover it. 00:27:24.951 [2024-07-24 18:22:17.901063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.951 [2024-07-24 18:22:17.901094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.951 qpair failed and we were unable to recover it. 00:27:24.951 [2024-07-24 18:22:17.901281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.951 [2024-07-24 18:22:17.901293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.951 qpair failed and we were unable to recover it. 00:27:24.951 [2024-07-24 18:22:17.901383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.951 [2024-07-24 18:22:17.901393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.951 qpair failed and we were unable to recover it. 00:27:24.951 [2024-07-24 18:22:17.901473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.951 [2024-07-24 18:22:17.901483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.951 qpair failed and we were unable to recover it. 00:27:24.951 [2024-07-24 18:22:17.901644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.951 [2024-07-24 18:22:17.901654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.951 qpair failed and we were unable to recover it. 00:27:24.951 [2024-07-24 18:22:17.901861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.951 [2024-07-24 18:22:17.901888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.951 qpair failed and we were unable to recover it. 00:27:24.951 [2024-07-24 18:22:17.901996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.951 [2024-07-24 18:22:17.902026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.951 qpair failed and we were unable to recover it. 00:27:24.951 [2024-07-24 18:22:17.902139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.951 [2024-07-24 18:22:17.902169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.951 qpair failed and we were unable to recover it. 
00:27:24.951 [2024-07-24 18:22:17.902352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.951 [2024-07-24 18:22:17.902382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.951 qpair failed and we were unable to recover it. 00:27:24.951 [2024-07-24 18:22:17.902571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.951 [2024-07-24 18:22:17.902581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.951 qpair failed and we were unable to recover it. 00:27:24.951 [2024-07-24 18:22:17.902721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.951 [2024-07-24 18:22:17.902731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.951 qpair failed and we were unable to recover it. 00:27:24.951 [2024-07-24 18:22:17.902933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.951 [2024-07-24 18:22:17.902943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.951 qpair failed and we were unable to recover it. 00:27:24.951 [2024-07-24 18:22:17.903032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.951 [2024-07-24 18:22:17.903042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.951 qpair failed and we were unable to recover it. 00:27:24.951 [2024-07-24 18:22:17.903141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.951 [2024-07-24 18:22:17.903151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.951 qpair failed and we were unable to recover it. 00:27:24.951 [2024-07-24 18:22:17.903305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.951 [2024-07-24 18:22:17.903315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.951 qpair failed and we were unable to recover it. 00:27:24.951 [2024-07-24 18:22:17.903462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.951 [2024-07-24 18:22:17.903472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.951 qpair failed and we were unable to recover it. 00:27:24.951 [2024-07-24 18:22:17.903548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.951 [2024-07-24 18:22:17.903558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.951 qpair failed and we were unable to recover it. 00:27:24.951 [2024-07-24 18:22:17.903789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.951 [2024-07-24 18:22:17.903799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.951 qpair failed and we were unable to recover it. 
00:27:24.951 [2024-07-24 18:22:17.903953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.951 [2024-07-24 18:22:17.903963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.951 qpair failed and we were unable to recover it. 00:27:24.951 [2024-07-24 18:22:17.904103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.951 [2024-07-24 18:22:17.904113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.951 qpair failed and we were unable to recover it. 00:27:24.951 [2024-07-24 18:22:17.904273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.951 [2024-07-24 18:22:17.904284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.951 qpair failed and we were unable to recover it. 00:27:24.951 [2024-07-24 18:22:17.904435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.951 [2024-07-24 18:22:17.904445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.951 qpair failed and we were unable to recover it. 00:27:24.951 [2024-07-24 18:22:17.904655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.951 [2024-07-24 18:22:17.904666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.951 qpair failed and we were unable to recover it. 00:27:24.951 [2024-07-24 18:22:17.904903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.951 [2024-07-24 18:22:17.904913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.951 qpair failed and we were unable to recover it. 00:27:24.951 [2024-07-24 18:22:17.905067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.951 [2024-07-24 18:22:17.905078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.951 qpair failed and we were unable to recover it. 00:27:24.951 [2024-07-24 18:22:17.905150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.951 [2024-07-24 18:22:17.905160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.951 qpair failed and we were unable to recover it. 00:27:24.951 [2024-07-24 18:22:17.905305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.951 [2024-07-24 18:22:17.905315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.951 qpair failed and we were unable to recover it. 00:27:24.951 [2024-07-24 18:22:17.905391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.951 [2024-07-24 18:22:17.905401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.951 qpair failed and we were unable to recover it. 
00:27:24.951 [2024-07-24 18:22:17.905573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.951 [2024-07-24 18:22:17.905584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.951 qpair failed and we were unable to recover it. 00:27:24.951 [2024-07-24 18:22:17.905659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.951 [2024-07-24 18:22:17.905669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.951 qpair failed and we were unable to recover it. 00:27:24.951 [2024-07-24 18:22:17.905762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.952 [2024-07-24 18:22:17.905772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.952 qpair failed and we were unable to recover it. 00:27:24.952 [2024-07-24 18:22:17.905869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.952 [2024-07-24 18:22:17.905880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.952 qpair failed and we were unable to recover it. 00:27:24.952 [2024-07-24 18:22:17.905966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.952 [2024-07-24 18:22:17.905976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.952 qpair failed and we were unable to recover it. 00:27:24.952 [2024-07-24 18:22:17.906049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.952 [2024-07-24 18:22:17.906058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.952 qpair failed and we were unable to recover it. 00:27:24.952 [2024-07-24 18:22:17.906314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.952 [2024-07-24 18:22:17.906344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.952 qpair failed and we were unable to recover it. 00:27:24.952 [2024-07-24 18:22:17.906467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.952 [2024-07-24 18:22:17.906505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.952 qpair failed and we were unable to recover it. 00:27:24.952 [2024-07-24 18:22:17.906704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.952 [2024-07-24 18:22:17.906733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.952 qpair failed and we were unable to recover it. 00:27:24.952 [2024-07-24 18:22:17.906865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.952 [2024-07-24 18:22:17.906892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.952 qpair failed and we were unable to recover it. 
00:27:24.952 [2024-07-24 18:22:17.907024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.952 [2024-07-24 18:22:17.907052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.952 qpair failed and we were unable to recover it. 00:27:24.952 [2024-07-24 18:22:17.907185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.952 [2024-07-24 18:22:17.907214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.952 qpair failed and we were unable to recover it. 00:27:24.952 [2024-07-24 18:22:17.907488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.952 [2024-07-24 18:22:17.907530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.952 qpair failed and we were unable to recover it. 00:27:24.952 [2024-07-24 18:22:17.907696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.952 [2024-07-24 18:22:17.907707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.952 qpair failed and we were unable to recover it. 00:27:24.952 [2024-07-24 18:22:17.907915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.952 [2024-07-24 18:22:17.907925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.952 qpair failed and we were unable to recover it. 00:27:24.952 [2024-07-24 18:22:17.908132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.952 [2024-07-24 18:22:17.908141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.952 qpair failed and we were unable to recover it. 00:27:24.952 [2024-07-24 18:22:17.908384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.952 [2024-07-24 18:22:17.908393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.952 qpair failed and we were unable to recover it. 00:27:24.952 [2024-07-24 18:22:17.908534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.952 [2024-07-24 18:22:17.908543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.952 qpair failed and we were unable to recover it. 00:27:24.952 [2024-07-24 18:22:17.908634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.952 [2024-07-24 18:22:17.908644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.952 qpair failed and we were unable to recover it. 00:27:24.952 [2024-07-24 18:22:17.908716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.952 [2024-07-24 18:22:17.908725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.952 qpair failed and we were unable to recover it. 
00:27:24.952 [2024-07-24 18:22:17.908798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.952 [2024-07-24 18:22:17.908807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.952 qpair failed and we were unable to recover it. 00:27:24.952 [2024-07-24 18:22:17.908887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.952 [2024-07-24 18:22:17.908896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.952 qpair failed and we were unable to recover it. 00:27:24.952 [2024-07-24 18:22:17.908965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.952 [2024-07-24 18:22:17.908974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.952 qpair failed and we were unable to recover it. 00:27:24.952 [2024-07-24 18:22:17.909111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.952 [2024-07-24 18:22:17.909122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.952 qpair failed and we were unable to recover it. 00:27:24.952 [2024-07-24 18:22:17.909269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.952 [2024-07-24 18:22:17.909279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.952 qpair failed and we were unable to recover it. 00:27:24.952 [2024-07-24 18:22:17.909509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.952 [2024-07-24 18:22:17.909519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.952 qpair failed and we were unable to recover it. 00:27:24.952 [2024-07-24 18:22:17.909601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.952 [2024-07-24 18:22:17.909610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.952 qpair failed and we were unable to recover it. 00:27:24.952 [2024-07-24 18:22:17.909703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.952 [2024-07-24 18:22:17.909712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.952 qpair failed and we were unable to recover it. 00:27:24.952 [2024-07-24 18:22:17.909870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.952 [2024-07-24 18:22:17.909879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.952 qpair failed and we were unable to recover it. 00:27:24.952 [2024-07-24 18:22:17.909975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.952 [2024-07-24 18:22:17.909985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.952 qpair failed and we were unable to recover it. 
00:27:24.952 [2024-07-24 18:22:17.910086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.952 [2024-07-24 18:22:17.910095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.952 qpair failed and we were unable to recover it. 00:27:24.952 [2024-07-24 18:22:17.910166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.952 [2024-07-24 18:22:17.910175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.952 qpair failed and we were unable to recover it. 00:27:24.952 [2024-07-24 18:22:17.910385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.952 [2024-07-24 18:22:17.910394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.952 qpair failed and we were unable to recover it. 00:27:24.952 [2024-07-24 18:22:17.910470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.952 [2024-07-24 18:22:17.910479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.952 qpair failed and we were unable to recover it. 00:27:24.952 [2024-07-24 18:22:17.910583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.952 [2024-07-24 18:22:17.910593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.952 qpair failed and we were unable to recover it. 00:27:24.952 [2024-07-24 18:22:17.910745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.952 [2024-07-24 18:22:17.910754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.952 qpair failed and we were unable to recover it. 00:27:24.952 [2024-07-24 18:22:17.910846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.952 [2024-07-24 18:22:17.910856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.952 qpair failed and we were unable to recover it. 00:27:24.953 [2024-07-24 18:22:17.910946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.953 [2024-07-24 18:22:17.910954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.953 qpair failed and we were unable to recover it. 00:27:24.953 [2024-07-24 18:22:17.911053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.953 [2024-07-24 18:22:17.911062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.953 qpair failed and we were unable to recover it. 00:27:24.953 [2024-07-24 18:22:17.911209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.953 [2024-07-24 18:22:17.911219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.953 qpair failed and we were unable to recover it. 
00:27:24.953 [2024-07-24 18:22:17.911299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.953 [2024-07-24 18:22:17.911309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.953 qpair failed and we were unable to recover it. 00:27:24.953 [2024-07-24 18:22:17.911384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.953 [2024-07-24 18:22:17.911393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.953 qpair failed and we were unable to recover it. 00:27:24.953 [2024-07-24 18:22:17.911539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.953 [2024-07-24 18:22:17.911549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.953 qpair failed and we were unable to recover it. 00:27:24.953 [2024-07-24 18:22:17.911622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.953 [2024-07-24 18:22:17.911631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.953 qpair failed and we were unable to recover it. 00:27:24.953 [2024-07-24 18:22:17.911789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.953 [2024-07-24 18:22:17.911799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.953 qpair failed and we were unable to recover it. 00:27:24.953 [2024-07-24 18:22:17.911884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.953 [2024-07-24 18:22:17.911893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.953 qpair failed and we were unable to recover it. 00:27:24.953 [2024-07-24 18:22:17.912057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.953 [2024-07-24 18:22:17.912066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.953 qpair failed and we were unable to recover it. 00:27:24.953 [2024-07-24 18:22:17.912224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.953 [2024-07-24 18:22:17.912233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.953 qpair failed and we were unable to recover it. 00:27:24.953 [2024-07-24 18:22:17.912329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.953 [2024-07-24 18:22:17.912338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.953 qpair failed and we were unable to recover it. 00:27:24.953 [2024-07-24 18:22:17.912514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.953 [2024-07-24 18:22:17.912525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.953 qpair failed and we were unable to recover it. 
[... the same three-line sequence — posix.c:1023:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats roughly 200 more times, timestamps 18:22:17.912708 through 18:22:17.942109 (elapsed 00:27:24.953-00:27:24.958) ...]
00:27:24.958 [2024-07-24 18:22:17.942327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.958 [2024-07-24 18:22:17.942356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.958 qpair failed and we were unable to recover it. 00:27:24.958 [2024-07-24 18:22:17.942535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.958 [2024-07-24 18:22:17.942545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.959 qpair failed and we were unable to recover it. 00:27:24.959 [2024-07-24 18:22:17.942723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.959 [2024-07-24 18:22:17.942733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.959 qpair failed and we were unable to recover it. 00:27:24.959 [2024-07-24 18:22:17.942824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.959 [2024-07-24 18:22:17.942836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.959 qpair failed and we were unable to recover it. 00:27:24.959 [2024-07-24 18:22:17.943005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.959 [2024-07-24 18:22:17.943015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.959 qpair failed and we were unable to recover it. 00:27:24.959 [2024-07-24 18:22:17.943251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.959 [2024-07-24 18:22:17.943281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.959 qpair failed and we were unable to recover it. 00:27:24.959 [2024-07-24 18:22:17.943528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.959 [2024-07-24 18:22:17.943558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.959 qpair failed and we were unable to recover it. 00:27:24.959 [2024-07-24 18:22:17.943818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.959 [2024-07-24 18:22:17.943848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.959 qpair failed and we were unable to recover it. 00:27:24.959 [2024-07-24 18:22:17.944038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.959 [2024-07-24 18:22:17.944068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.959 qpair failed and we were unable to recover it. 00:27:24.959 [2024-07-24 18:22:17.944264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.959 [2024-07-24 18:22:17.944293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.959 qpair failed and we were unable to recover it. 
00:27:24.959 [2024-07-24 18:22:17.944489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.959 [2024-07-24 18:22:17.944530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.959 qpair failed and we were unable to recover it. 00:27:24.959 [2024-07-24 18:22:17.944732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.959 [2024-07-24 18:22:17.944762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.959 qpair failed and we were unable to recover it. 00:27:24.959 [2024-07-24 18:22:17.944972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.959 [2024-07-24 18:22:17.945004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.959 qpair failed and we were unable to recover it. 00:27:24.959 [2024-07-24 18:22:17.945210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.959 [2024-07-24 18:22:17.945240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.959 qpair failed and we were unable to recover it. 00:27:24.959 [2024-07-24 18:22:17.945520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.959 [2024-07-24 18:22:17.945551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.959 qpair failed and we were unable to recover it. 00:27:24.959 [2024-07-24 18:22:17.945706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.959 [2024-07-24 18:22:17.945716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.959 qpair failed and we were unable to recover it. 00:27:24.959 [2024-07-24 18:22:17.945795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.959 [2024-07-24 18:22:17.945805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.959 qpair failed and we were unable to recover it. 00:27:24.959 [2024-07-24 18:22:17.945961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.959 [2024-07-24 18:22:17.945970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.959 qpair failed and we were unable to recover it. 00:27:24.959 [2024-07-24 18:22:17.946131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.959 [2024-07-24 18:22:17.946140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.959 qpair failed and we were unable to recover it. 00:27:24.959 [2024-07-24 18:22:17.946278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.959 [2024-07-24 18:22:17.946288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.959 qpair failed and we were unable to recover it. 
00:27:24.959 [2024-07-24 18:22:17.946451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.959 [2024-07-24 18:22:17.946481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.959 qpair failed and we were unable to recover it. 00:27:24.959 [2024-07-24 18:22:17.946631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.959 [2024-07-24 18:22:17.946662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.959 qpair failed and we were unable to recover it. 00:27:24.959 [2024-07-24 18:22:17.946780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.959 [2024-07-24 18:22:17.946810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.959 qpair failed and we were unable to recover it. 00:27:24.959 [2024-07-24 18:22:17.947031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.959 [2024-07-24 18:22:17.947061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.959 qpair failed and we were unable to recover it. 00:27:24.959 [2024-07-24 18:22:17.947330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.959 [2024-07-24 18:22:17.947360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.959 qpair failed and we were unable to recover it. 00:27:24.959 [2024-07-24 18:22:17.947621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.959 [2024-07-24 18:22:17.947631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.959 qpair failed and we were unable to recover it. 00:27:24.959 [2024-07-24 18:22:17.947814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.959 [2024-07-24 18:22:17.947823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.959 qpair failed and we were unable to recover it. 00:27:24.959 [2024-07-24 18:22:17.947978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.959 [2024-07-24 18:22:17.948017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.959 qpair failed and we were unable to recover it. 00:27:24.959 [2024-07-24 18:22:17.948150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.959 [2024-07-24 18:22:17.948180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.959 qpair failed and we were unable to recover it. 00:27:24.959 [2024-07-24 18:22:17.948321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.959 [2024-07-24 18:22:17.948351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.959 qpair failed and we were unable to recover it. 
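For context on the failure above: on Linux, errno 111 is ECONNREFUSED, i.e. the target at 10.0.0.2:4420 actively refused the TCP connection (nothing was listening on that port, so the peer answered the SYN with RST). The following stand-alone C sketch is illustrative only, not part of the captured log or of SPDK; it assumes, as the log implies, that the host 10.0.0.2 is reachable but has no NVMe/TCP listener on port 4420 (an unreachable host would instead time out).

/*
 * Minimal sketch reproducing "connect() failed, errno = 111".
 * The address/port mirror the log above and are assumptions, not
 * SPDK code; any local port with no listener shows the same error.
 */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port   = htons(4420),   /* NVMe/TCP well-known port */
    };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* On Linux, errno 111 is ECONNREFUSED: the peer sent RST
         * because no socket is listening on that address/port. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}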
00:27:24.959 [2024-07-24 18:22:17.948600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.959 [2024-07-24 18:22:17.948631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:24.959 qpair failed and we were unable to recover it.
[the same triplet repeats for tqpair=0x7f7844000b90 from 18:22:17.948 through 18:22:17.954, then again for tqpair=0x7f783c000b90 from 18:22:17.954 through 18:22:17.956; last occurrence:]
00:27:24.960 [2024-07-24 18:22:17.956361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.960 [2024-07-24 18:22:17.956371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:24.960 qpair failed and we were unable to recover it.
00:27:24.960 [2024-07-24 18:22:17.956508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.960 [2024-07-24 18:22:17.956546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:24.960 qpair failed and we were unable to recover it.
[two more identical triplets for tqpair=0x15ddf30 at 18:22:17.956, then the same triplet repeats for tqpair=0x7f783c000b90 from 18:22:17.957 through 18:22:17.970; last occurrence:]
00:27:25.249 [2024-07-24 18:22:17.970254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.249 [2024-07-24 18:22:17.970263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.249 qpair failed and we were unable to recover it.
00:27:25.249 [2024-07-24 18:22:17.970402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.249 [2024-07-24 18:22:17.970411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.249 qpair failed and we were unable to recover it. 00:27:25.249 [2024-07-24 18:22:17.970470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.249 [2024-07-24 18:22:17.970479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.249 qpair failed and we were unable to recover it. 00:27:25.249 [2024-07-24 18:22:17.970563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.249 [2024-07-24 18:22:17.970573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.249 qpair failed and we were unable to recover it. 00:27:25.249 [2024-07-24 18:22:17.970659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.249 [2024-07-24 18:22:17.970668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.249 qpair failed and we were unable to recover it. 00:27:25.249 [2024-07-24 18:22:17.970871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.249 [2024-07-24 18:22:17.970880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.249 qpair failed and we were unable to recover it. 00:27:25.249 [2024-07-24 18:22:17.970950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.249 [2024-07-24 18:22:17.970959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.249 qpair failed and we were unable to recover it. 00:27:25.249 [2024-07-24 18:22:17.971107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.249 [2024-07-24 18:22:17.971117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.249 qpair failed and we were unable to recover it. 00:27:25.249 [2024-07-24 18:22:17.971296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.249 [2024-07-24 18:22:17.971306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.249 qpair failed and we were unable to recover it. 00:27:25.249 [2024-07-24 18:22:17.971380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.249 [2024-07-24 18:22:17.971390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.249 qpair failed and we were unable to recover it. 00:27:25.249 [2024-07-24 18:22:17.971543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.249 [2024-07-24 18:22:17.971553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.249 qpair failed and we were unable to recover it. 
00:27:25.249 [2024-07-24 18:22:17.971699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.249 [2024-07-24 18:22:17.971709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.249 qpair failed and we were unable to recover it. 00:27:25.249 [2024-07-24 18:22:17.971795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.249 [2024-07-24 18:22:17.971804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.249 qpair failed and we were unable to recover it. 00:27:25.249 [2024-07-24 18:22:17.971888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.249 [2024-07-24 18:22:17.971898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.249 qpair failed and we were unable to recover it. 00:27:25.249 [2024-07-24 18:22:17.972047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.249 [2024-07-24 18:22:17.972056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.249 qpair failed and we were unable to recover it. 00:27:25.249 [2024-07-24 18:22:17.972196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.249 [2024-07-24 18:22:17.972206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.249 qpair failed and we were unable to recover it. 00:27:25.249 [2024-07-24 18:22:17.972293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.249 [2024-07-24 18:22:17.972303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.249 qpair failed and we were unable to recover it. 00:27:25.249 [2024-07-24 18:22:17.972398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.249 [2024-07-24 18:22:17.972408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.249 qpair failed and we were unable to recover it. 00:27:25.249 [2024-07-24 18:22:17.972487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.249 [2024-07-24 18:22:17.972507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.249 qpair failed and we were unable to recover it. 00:27:25.249 [2024-07-24 18:22:17.972670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.249 [2024-07-24 18:22:17.972680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.249 qpair failed and we were unable to recover it. 00:27:25.249 [2024-07-24 18:22:17.972762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.249 [2024-07-24 18:22:17.972772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.249 qpair failed and we were unable to recover it. 
00:27:25.249 [2024-07-24 18:22:17.972854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.249 [2024-07-24 18:22:17.972866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.249 qpair failed and we were unable to recover it. 00:27:25.249 [2024-07-24 18:22:17.972933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.249 [2024-07-24 18:22:17.972942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.249 qpair failed and we were unable to recover it. 00:27:25.249 [2024-07-24 18:22:17.973083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.249 [2024-07-24 18:22:17.973092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.249 qpair failed and we were unable to recover it. 00:27:25.249 [2024-07-24 18:22:17.973172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.249 [2024-07-24 18:22:17.973181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.249 qpair failed and we were unable to recover it. 00:27:25.249 [2024-07-24 18:22:17.973319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.249 [2024-07-24 18:22:17.973329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.249 qpair failed and we were unable to recover it. 00:27:25.249 [2024-07-24 18:22:17.973417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.249 [2024-07-24 18:22:17.973427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.250 qpair failed and we were unable to recover it. 00:27:25.250 [2024-07-24 18:22:17.973518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.250 [2024-07-24 18:22:17.973527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.250 qpair failed and we were unable to recover it. 00:27:25.250 [2024-07-24 18:22:17.973702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.250 [2024-07-24 18:22:17.973712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.250 qpair failed and we were unable to recover it. 00:27:25.250 [2024-07-24 18:22:17.973797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.250 [2024-07-24 18:22:17.973806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.250 qpair failed and we were unable to recover it. 00:27:25.250 [2024-07-24 18:22:17.973953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.250 [2024-07-24 18:22:17.973962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.250 qpair failed and we were unable to recover it. 
00:27:25.250 [2024-07-24 18:22:17.974183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.250 [2024-07-24 18:22:17.974193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.250 qpair failed and we were unable to recover it. 00:27:25.250 [2024-07-24 18:22:17.974272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.250 [2024-07-24 18:22:17.974281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.250 qpair failed and we were unable to recover it. 00:27:25.250 [2024-07-24 18:22:17.974353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.250 [2024-07-24 18:22:17.974361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.250 qpair failed and we were unable to recover it. 00:27:25.250 [2024-07-24 18:22:17.974571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.250 [2024-07-24 18:22:17.974581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.250 qpair failed and we were unable to recover it. 00:27:25.250 [2024-07-24 18:22:17.974745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.250 [2024-07-24 18:22:17.974775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.250 qpair failed and we were unable to recover it. 00:27:25.250 [2024-07-24 18:22:17.974993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.250 [2024-07-24 18:22:17.975022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.250 qpair failed and we were unable to recover it. 00:27:25.250 [2024-07-24 18:22:17.975317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.250 [2024-07-24 18:22:17.975347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.250 qpair failed and we were unable to recover it. 00:27:25.250 [2024-07-24 18:22:17.975606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.250 [2024-07-24 18:22:17.975616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.250 qpair failed and we were unable to recover it. 00:27:25.250 [2024-07-24 18:22:17.975778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.250 [2024-07-24 18:22:17.975787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.250 qpair failed and we were unable to recover it. 00:27:25.250 [2024-07-24 18:22:17.976001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.250 [2024-07-24 18:22:17.976030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.250 qpair failed and we were unable to recover it. 
00:27:25.250 [2024-07-24 18:22:17.976211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.250 [2024-07-24 18:22:17.976241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.250 qpair failed and we were unable to recover it. 00:27:25.250 [2024-07-24 18:22:17.976425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.250 [2024-07-24 18:22:17.976455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.250 qpair failed and we were unable to recover it. 00:27:25.250 [2024-07-24 18:22:17.976741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.250 [2024-07-24 18:22:17.976772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.250 qpair failed and we were unable to recover it. 00:27:25.250 [2024-07-24 18:22:17.976903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.250 [2024-07-24 18:22:17.976932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.250 qpair failed and we were unable to recover it. 00:27:25.250 [2024-07-24 18:22:17.977051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.250 [2024-07-24 18:22:17.977080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.250 qpair failed and we were unable to recover it. 00:27:25.250 [2024-07-24 18:22:17.977181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.250 [2024-07-24 18:22:17.977211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.250 qpair failed and we were unable to recover it. 00:27:25.250 [2024-07-24 18:22:17.977346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.250 [2024-07-24 18:22:17.977376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.250 qpair failed and we were unable to recover it. 00:27:25.250 [2024-07-24 18:22:17.977565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.250 [2024-07-24 18:22:17.977597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.250 qpair failed and we were unable to recover it. 00:27:25.250 [2024-07-24 18:22:17.977799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.250 [2024-07-24 18:22:17.977828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.250 qpair failed and we were unable to recover it. 00:27:25.250 [2024-07-24 18:22:17.978020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.250 [2024-07-24 18:22:17.978049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.250 qpair failed and we were unable to recover it. 
00:27:25.250 [2024-07-24 18:22:17.978231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.250 [2024-07-24 18:22:17.978261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.250 qpair failed and we were unable to recover it. 00:27:25.250 [2024-07-24 18:22:17.978480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.250 [2024-07-24 18:22:17.978520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.250 qpair failed and we were unable to recover it. 00:27:25.250 [2024-07-24 18:22:17.978716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.250 [2024-07-24 18:22:17.978746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.250 qpair failed and we were unable to recover it. 00:27:25.250 [2024-07-24 18:22:17.978961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.250 [2024-07-24 18:22:17.978991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.250 qpair failed and we were unable to recover it. 00:27:25.250 [2024-07-24 18:22:17.979137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.250 [2024-07-24 18:22:17.979167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.250 qpair failed and we were unable to recover it. 00:27:25.250 [2024-07-24 18:22:17.979468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.250 [2024-07-24 18:22:17.979507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.250 qpair failed and we were unable to recover it. 00:27:25.250 [2024-07-24 18:22:17.979649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.250 [2024-07-24 18:22:17.979679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.250 qpair failed and we were unable to recover it. 00:27:25.250 [2024-07-24 18:22:17.979827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.250 [2024-07-24 18:22:17.979857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.250 qpair failed and we were unable to recover it. 00:27:25.250 [2024-07-24 18:22:17.979999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.250 [2024-07-24 18:22:17.980029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.251 qpair failed and we were unable to recover it. 00:27:25.251 [2024-07-24 18:22:17.980213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.251 [2024-07-24 18:22:17.980243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.251 qpair failed and we were unable to recover it. 
00:27:25.251 [2024-07-24 18:22:17.980512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.251 [2024-07-24 18:22:17.980556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.251 qpair failed and we were unable to recover it. 00:27:25.251 [2024-07-24 18:22:17.980712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.251 [2024-07-24 18:22:17.980722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.251 qpair failed and we were unable to recover it. 00:27:25.251 [2024-07-24 18:22:17.980892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.251 [2024-07-24 18:22:17.980901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.251 qpair failed and we were unable to recover it. 00:27:25.251 [2024-07-24 18:22:17.980996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.251 [2024-07-24 18:22:17.981005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.251 qpair failed and we were unable to recover it. 00:27:25.251 [2024-07-24 18:22:17.981114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.251 [2024-07-24 18:22:17.981123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.251 qpair failed and we were unable to recover it. 00:27:25.251 [2024-07-24 18:22:17.981270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.251 [2024-07-24 18:22:17.981279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.251 qpair failed and we were unable to recover it. 00:27:25.251 [2024-07-24 18:22:17.981365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.251 [2024-07-24 18:22:17.981374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.251 qpair failed and we were unable to recover it. 00:27:25.251 [2024-07-24 18:22:17.981514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.251 [2024-07-24 18:22:17.981523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.251 qpair failed and we were unable to recover it. 00:27:25.251 [2024-07-24 18:22:17.981619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.251 [2024-07-24 18:22:17.981629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.251 qpair failed and we were unable to recover it. 00:27:25.251 [2024-07-24 18:22:17.981782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.251 [2024-07-24 18:22:17.981792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.251 qpair failed and we were unable to recover it. 
00:27:25.251 [2024-07-24 18:22:17.981876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.251 [2024-07-24 18:22:17.981886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.251 qpair failed and we were unable to recover it. 00:27:25.251 [2024-07-24 18:22:17.981973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.251 [2024-07-24 18:22:17.981982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.251 qpair failed and we were unable to recover it. 00:27:25.251 [2024-07-24 18:22:17.982057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.251 [2024-07-24 18:22:17.982066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.251 qpair failed and we were unable to recover it. 00:27:25.251 [2024-07-24 18:22:17.982226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.251 [2024-07-24 18:22:17.982235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.251 qpair failed and we were unable to recover it. 00:27:25.251 [2024-07-24 18:22:17.982410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.251 [2024-07-24 18:22:17.982419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.251 qpair failed and we were unable to recover it. 00:27:25.251 [2024-07-24 18:22:17.982586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.251 [2024-07-24 18:22:17.982596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.251 qpair failed and we were unable to recover it. 00:27:25.251 [2024-07-24 18:22:17.982773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.251 [2024-07-24 18:22:17.982783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.251 qpair failed and we were unable to recover it. 00:27:25.251 [2024-07-24 18:22:17.982933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.251 [2024-07-24 18:22:17.982942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.251 qpair failed and we were unable to recover it. 00:27:25.251 [2024-07-24 18:22:17.983166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.251 [2024-07-24 18:22:17.983176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.251 qpair failed and we were unable to recover it. 00:27:25.251 [2024-07-24 18:22:17.983274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.251 [2024-07-24 18:22:17.983284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.251 qpair failed and we were unable to recover it. 
00:27:25.251 [2024-07-24 18:22:17.983361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.251 [2024-07-24 18:22:17.983371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.251 qpair failed and we were unable to recover it. 00:27:25.251 [2024-07-24 18:22:17.983506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.251 [2024-07-24 18:22:17.983516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.251 qpair failed and we were unable to recover it. 00:27:25.251 [2024-07-24 18:22:17.983719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.251 [2024-07-24 18:22:17.983748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.251 qpair failed and we were unable to recover it. 00:27:25.251 [2024-07-24 18:22:17.983999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.251 [2024-07-24 18:22:17.984029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.251 qpair failed and we were unable to recover it. 00:27:25.251 [2024-07-24 18:22:17.984207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.251 [2024-07-24 18:22:17.984237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.251 qpair failed and we were unable to recover it. 00:27:25.251 [2024-07-24 18:22:17.984385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.251 [2024-07-24 18:22:17.984394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.251 qpair failed and we were unable to recover it. 00:27:25.251 [2024-07-24 18:22:17.984538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.251 [2024-07-24 18:22:17.984549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.251 qpair failed and we were unable to recover it. 00:27:25.251 [2024-07-24 18:22:17.984734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.251 [2024-07-24 18:22:17.984763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.251 qpair failed and we were unable to recover it. 00:27:25.251 [2024-07-24 18:22:17.984896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.251 [2024-07-24 18:22:17.984925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.251 qpair failed and we were unable to recover it. 00:27:25.251 [2024-07-24 18:22:17.985108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.251 [2024-07-24 18:22:17.985137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.251 qpair failed and we were unable to recover it. 
00:27:25.251 [2024-07-24 18:22:17.985328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.251 [2024-07-24 18:22:17.985357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.251 qpair failed and we were unable to recover it. 00:27:25.251 [2024-07-24 18:22:17.985559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.251 [2024-07-24 18:22:17.985591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.251 qpair failed and we were unable to recover it. 00:27:25.251 [2024-07-24 18:22:17.985803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.251 [2024-07-24 18:22:17.985813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.251 qpair failed and we were unable to recover it. 00:27:25.251 [2024-07-24 18:22:17.986047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.251 [2024-07-24 18:22:17.986076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.251 qpair failed and we were unable to recover it. 00:27:25.251 [2024-07-24 18:22:17.986268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.252 [2024-07-24 18:22:17.986297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.252 qpair failed and we were unable to recover it. 00:27:25.252 [2024-07-24 18:22:17.986476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.252 [2024-07-24 18:22:17.986515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.252 qpair failed and we were unable to recover it. 00:27:25.252 [2024-07-24 18:22:17.986707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.252 [2024-07-24 18:22:17.986716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.252 qpair failed and we were unable to recover it. 00:27:25.252 [2024-07-24 18:22:17.986942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.252 [2024-07-24 18:22:17.986952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.252 qpair failed and we were unable to recover it. 00:27:25.252 [2024-07-24 18:22:17.987034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.252 [2024-07-24 18:22:17.987044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.252 qpair failed and we were unable to recover it. 00:27:25.252 [2024-07-24 18:22:17.987180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.252 [2024-07-24 18:22:17.987189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.252 qpair failed and we were unable to recover it. 
00:27:25.252 [2024-07-24 18:22:17.987344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.252 [2024-07-24 18:22:17.987356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.252 qpair failed and we were unable to recover it. 00:27:25.252 [2024-07-24 18:22:17.987447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.252 [2024-07-24 18:22:17.987456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.252 qpair failed and we were unable to recover it. 00:27:25.252 [2024-07-24 18:22:17.987677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.252 [2024-07-24 18:22:17.987708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.252 qpair failed and we were unable to recover it. 00:27:25.252 [2024-07-24 18:22:17.987829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.252 [2024-07-24 18:22:17.987859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.252 qpair failed and we were unable to recover it. 00:27:25.252 [2024-07-24 18:22:17.988039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.252 [2024-07-24 18:22:17.988069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.252 qpair failed and we were unable to recover it. 00:27:25.252 [2024-07-24 18:22:17.988326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.252 [2024-07-24 18:22:17.988355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.252 qpair failed and we were unable to recover it. 00:27:25.252 [2024-07-24 18:22:17.988557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.252 [2024-07-24 18:22:17.988588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.252 qpair failed and we were unable to recover it. 00:27:25.252 [2024-07-24 18:22:17.988734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.252 [2024-07-24 18:22:17.988763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.252 qpair failed and we were unable to recover it. 00:27:25.252 [2024-07-24 18:22:17.988902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.252 [2024-07-24 18:22:17.988931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.252 qpair failed and we were unable to recover it. 00:27:25.252 [2024-07-24 18:22:17.989123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.252 [2024-07-24 18:22:17.989152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.252 qpair failed and we were unable to recover it. 
00:27:25.252 [2024-07-24 18:22:17.989363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.252 [2024-07-24 18:22:17.989393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.252 qpair failed and we were unable to recover it. 00:27:25.252 [2024-07-24 18:22:17.989601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.252 [2024-07-24 18:22:17.989611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.252 qpair failed and we were unable to recover it. 00:27:25.252 [2024-07-24 18:22:17.989823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.252 [2024-07-24 18:22:17.989853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.252 qpair failed and we were unable to recover it. 00:27:25.252 [2024-07-24 18:22:17.990126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.252 [2024-07-24 18:22:17.990155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.252 qpair failed and we were unable to recover it. 00:27:25.252 [2024-07-24 18:22:17.990381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.252 [2024-07-24 18:22:17.990410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.252 qpair failed and we were unable to recover it. 00:27:25.252 [2024-07-24 18:22:17.990670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.252 [2024-07-24 18:22:17.990680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.252 qpair failed and we were unable to recover it. 00:27:25.252 [2024-07-24 18:22:17.990839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.252 [2024-07-24 18:22:17.990849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.252 qpair failed and we were unable to recover it. 00:27:25.252 [2024-07-24 18:22:17.991004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.252 [2024-07-24 18:22:17.991013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.252 qpair failed and we were unable to recover it. 00:27:25.252 [2024-07-24 18:22:17.991163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.252 [2024-07-24 18:22:17.991173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.252 qpair failed and we were unable to recover it. 00:27:25.252 [2024-07-24 18:22:17.991259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.252 [2024-07-24 18:22:17.991268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.252 qpair failed and we were unable to recover it. 
00:27:25.252 [2024-07-24 18:22:17.991406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.252 [2024-07-24 18:22:17.991416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.252 qpair failed and we were unable to recover it. 00:27:25.252 [2024-07-24 18:22:17.991499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.252 [2024-07-24 18:22:17.991508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.252 qpair failed and we were unable to recover it. 00:27:25.252 [2024-07-24 18:22:17.991648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.252 [2024-07-24 18:22:17.991658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.252 qpair failed and we were unable to recover it. 00:27:25.252 [2024-07-24 18:22:17.991733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.252 [2024-07-24 18:22:17.991742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.252 qpair failed and we were unable to recover it. 00:27:25.252 [2024-07-24 18:22:17.991907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.252 [2024-07-24 18:22:17.991916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.252 qpair failed and we were unable to recover it. 00:27:25.253 [2024-07-24 18:22:17.992099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.253 [2024-07-24 18:22:17.992128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.253 qpair failed and we were unable to recover it. 00:27:25.253 [2024-07-24 18:22:17.992403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.253 [2024-07-24 18:22:17.992433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.253 qpair failed and we were unable to recover it. 00:27:25.253 [2024-07-24 18:22:17.992723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.253 [2024-07-24 18:22:17.992733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.253 qpair failed and we were unable to recover it. 00:27:25.253 [2024-07-24 18:22:17.992883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.253 [2024-07-24 18:22:17.992892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.253 qpair failed and we were unable to recover it. 00:27:25.253 [2024-07-24 18:22:17.993032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.253 [2024-07-24 18:22:17.993042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.253 qpair failed and we were unable to recover it. 
00:27:25.253 [2024-07-24 18:22:17.993186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.253 [2024-07-24 18:22:17.993216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.253 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triple repeats for tqpair=0x7f783c000b90 from 18:22:17.993359 through 18:22:18.025179 ...]
00:27:25.259 [2024-07-24 18:22:18.025355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.259 [2024-07-24 18:22:18.025425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7834000b90 with addr=10.0.0.2, port=4420
00:27:25.259 qpair failed and we were unable to recover it.
[... the triple repeats twice more for tqpair=0x7f7834000b90 (18:22:18.025613 and 18:22:18.025788) ...]
00:27:25.259 [2024-07-24 18:22:18.025907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.259 [2024-07-24 18:22:18.025918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.259 qpair failed and we were unable to recover it.
[... the triple continues repeating for tqpair=0x7f783c000b90 through 18:22:18.027609 ...]
00:27:25.259 [2024-07-24 18:22:18.027760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.259 [2024-07-24 18:22:18.027772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.259 qpair failed and we were unable to recover it. 00:27:25.259 [2024-07-24 18:22:18.027920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.259 [2024-07-24 18:22:18.027930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.259 qpair failed and we were unable to recover it. 00:27:25.259 [2024-07-24 18:22:18.028000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.259 [2024-07-24 18:22:18.028009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.259 qpair failed and we were unable to recover it. 00:27:25.259 [2024-07-24 18:22:18.028181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.259 [2024-07-24 18:22:18.028191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.259 qpair failed and we were unable to recover it. 00:27:25.259 [2024-07-24 18:22:18.028344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.259 [2024-07-24 18:22:18.028354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.259 qpair failed and we were unable to recover it. 00:27:25.259 [2024-07-24 18:22:18.028498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.259 [2024-07-24 18:22:18.028508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.259 qpair failed and we were unable to recover it. 00:27:25.259 [2024-07-24 18:22:18.028647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.259 [2024-07-24 18:22:18.028656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.259 qpair failed and we were unable to recover it. 00:27:25.259 [2024-07-24 18:22:18.028754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.259 [2024-07-24 18:22:18.028764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.259 qpair failed and we were unable to recover it. 00:27:25.259 [2024-07-24 18:22:18.028925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.259 [2024-07-24 18:22:18.028955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.259 qpair failed and we were unable to recover it. 00:27:25.259 [2024-07-24 18:22:18.029139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.259 [2024-07-24 18:22:18.029168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.259 qpair failed and we were unable to recover it. 
00:27:25.259 [2024-07-24 18:22:18.029294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.259 [2024-07-24 18:22:18.029323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.259 qpair failed and we were unable to recover it. 00:27:25.259 [2024-07-24 18:22:18.029466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.259 [2024-07-24 18:22:18.029529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.259 qpair failed and we were unable to recover it. 00:27:25.259 [2024-07-24 18:22:18.029780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.260 [2024-07-24 18:22:18.029810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.260 qpair failed and we were unable to recover it. 00:27:25.260 [2024-07-24 18:22:18.030005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.260 [2024-07-24 18:22:18.030015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.260 qpair failed and we were unable to recover it. 00:27:25.260 [2024-07-24 18:22:18.030111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.260 [2024-07-24 18:22:18.030121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.260 qpair failed and we were unable to recover it. 00:27:25.260 [2024-07-24 18:22:18.030262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.260 [2024-07-24 18:22:18.030271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.260 qpair failed and we were unable to recover it. 00:27:25.260 [2024-07-24 18:22:18.030339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.260 [2024-07-24 18:22:18.030348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.260 qpair failed and we were unable to recover it. 00:27:25.260 [2024-07-24 18:22:18.030432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.260 [2024-07-24 18:22:18.030441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.260 qpair failed and we were unable to recover it. 00:27:25.260 [2024-07-24 18:22:18.030664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.260 [2024-07-24 18:22:18.030674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.260 qpair failed and we were unable to recover it. 00:27:25.260 [2024-07-24 18:22:18.030777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.260 [2024-07-24 18:22:18.030807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.260 qpair failed and we were unable to recover it. 
00:27:25.260 [2024-07-24 18:22:18.030956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.260 [2024-07-24 18:22:18.030986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.260 qpair failed and we were unable to recover it. 00:27:25.260 [2024-07-24 18:22:18.031129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.260 [2024-07-24 18:22:18.031159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.260 qpair failed and we were unable to recover it. 00:27:25.260 [2024-07-24 18:22:18.031384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.260 [2024-07-24 18:22:18.031414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.260 qpair failed and we were unable to recover it. 00:27:25.260 [2024-07-24 18:22:18.031602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.260 [2024-07-24 18:22:18.031633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.260 qpair failed and we were unable to recover it. 00:27:25.260 [2024-07-24 18:22:18.031747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.260 [2024-07-24 18:22:18.031777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.260 qpair failed and we were unable to recover it. 00:27:25.260 [2024-07-24 18:22:18.031901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.260 [2024-07-24 18:22:18.031910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.260 qpair failed and we were unable to recover it. 00:27:25.260 [2024-07-24 18:22:18.032048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.260 [2024-07-24 18:22:18.032057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.260 qpair failed and we were unable to recover it. 00:27:25.260 [2024-07-24 18:22:18.032234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.260 [2024-07-24 18:22:18.032244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.260 qpair failed and we were unable to recover it. 00:27:25.260 [2024-07-24 18:22:18.032390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.260 [2024-07-24 18:22:18.032400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.260 qpair failed and we were unable to recover it. 00:27:25.260 [2024-07-24 18:22:18.032503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.260 [2024-07-24 18:22:18.032513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.260 qpair failed and we were unable to recover it. 
00:27:25.260 [2024-07-24 18:22:18.032581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.260 [2024-07-24 18:22:18.032591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.260 qpair failed and we were unable to recover it. 00:27:25.260 [2024-07-24 18:22:18.032691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.260 [2024-07-24 18:22:18.032701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.260 qpair failed and we were unable to recover it. 00:27:25.260 [2024-07-24 18:22:18.032868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.260 [2024-07-24 18:22:18.032898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.260 qpair failed and we were unable to recover it. 00:27:25.260 [2024-07-24 18:22:18.033021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.260 [2024-07-24 18:22:18.033051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.260 qpair failed and we were unable to recover it. 00:27:25.260 [2024-07-24 18:22:18.033184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.260 [2024-07-24 18:22:18.033214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.260 qpair failed and we were unable to recover it. 00:27:25.260 [2024-07-24 18:22:18.033344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.260 [2024-07-24 18:22:18.033374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.260 qpair failed and we were unable to recover it. 00:27:25.260 [2024-07-24 18:22:18.033639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.260 [2024-07-24 18:22:18.033669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.260 qpair failed and we were unable to recover it. 00:27:25.260 [2024-07-24 18:22:18.033796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.260 [2024-07-24 18:22:18.033806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.260 qpair failed and we were unable to recover it. 00:27:25.260 [2024-07-24 18:22:18.033992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.260 [2024-07-24 18:22:18.034001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.260 qpair failed and we were unable to recover it. 00:27:25.260 [2024-07-24 18:22:18.034226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.260 [2024-07-24 18:22:18.034236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.260 qpair failed and we were unable to recover it. 
00:27:25.260 [2024-07-24 18:22:18.034310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.260 [2024-07-24 18:22:18.034319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.260 qpair failed and we were unable to recover it. 00:27:25.260 [2024-07-24 18:22:18.034400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.260 [2024-07-24 18:22:18.034410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.260 qpair failed and we were unable to recover it. 00:27:25.260 [2024-07-24 18:22:18.034485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.260 [2024-07-24 18:22:18.034537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.260 qpair failed and we were unable to recover it. 00:27:25.260 [2024-07-24 18:22:18.034667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.260 [2024-07-24 18:22:18.034697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.260 qpair failed and we were unable to recover it. 00:27:25.260 [2024-07-24 18:22:18.034823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.260 [2024-07-24 18:22:18.034853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.260 qpair failed and we were unable to recover it. 00:27:25.260 [2024-07-24 18:22:18.035040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.260 [2024-07-24 18:22:18.035069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.260 qpair failed and we were unable to recover it. 00:27:25.260 [2024-07-24 18:22:18.035275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.260 [2024-07-24 18:22:18.035306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.260 qpair failed and we were unable to recover it. 00:27:25.260 [2024-07-24 18:22:18.035511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.260 [2024-07-24 18:22:18.035542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.260 qpair failed and we were unable to recover it. 00:27:25.260 [2024-07-24 18:22:18.035744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.260 [2024-07-24 18:22:18.035774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.260 qpair failed and we were unable to recover it. 00:27:25.260 [2024-07-24 18:22:18.035978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.260 [2024-07-24 18:22:18.035988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.260 qpair failed and we were unable to recover it. 
00:27:25.260 [2024-07-24 18:22:18.036141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.261 [2024-07-24 18:22:18.036171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.261 qpair failed and we were unable to recover it. 00:27:25.261 [2024-07-24 18:22:18.036439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.261 [2024-07-24 18:22:18.036468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.261 qpair failed and we were unable to recover it. 00:27:25.261 [2024-07-24 18:22:18.036655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.261 [2024-07-24 18:22:18.036665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.261 qpair failed and we were unable to recover it. 00:27:25.261 [2024-07-24 18:22:18.036803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.261 [2024-07-24 18:22:18.036813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.261 qpair failed and we were unable to recover it. 00:27:25.261 [2024-07-24 18:22:18.036900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.261 [2024-07-24 18:22:18.036909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.261 qpair failed and we were unable to recover it. 00:27:25.261 [2024-07-24 18:22:18.037049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.261 [2024-07-24 18:22:18.037059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.261 qpair failed and we were unable to recover it. 00:27:25.261 [2024-07-24 18:22:18.037234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.261 [2024-07-24 18:22:18.037244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.261 qpair failed and we were unable to recover it. 00:27:25.261 [2024-07-24 18:22:18.037399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.261 [2024-07-24 18:22:18.037408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.261 qpair failed and we were unable to recover it. 00:27:25.261 [2024-07-24 18:22:18.037608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.261 [2024-07-24 18:22:18.037618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.261 qpair failed and we were unable to recover it. 00:27:25.261 [2024-07-24 18:22:18.037691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.261 [2024-07-24 18:22:18.037700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.261 qpair failed and we were unable to recover it. 
00:27:25.261 [2024-07-24 18:22:18.037836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.261 [2024-07-24 18:22:18.037846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.261 qpair failed and we were unable to recover it. 00:27:25.261 [2024-07-24 18:22:18.037943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.261 [2024-07-24 18:22:18.037953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.261 qpair failed and we were unable to recover it. 00:27:25.261 [2024-07-24 18:22:18.038106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.261 [2024-07-24 18:22:18.038135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.261 qpair failed and we were unable to recover it. 00:27:25.261 [2024-07-24 18:22:18.038332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.261 [2024-07-24 18:22:18.038361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.261 qpair failed and we were unable to recover it. 00:27:25.261 [2024-07-24 18:22:18.038485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.261 [2024-07-24 18:22:18.038500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.261 qpair failed and we were unable to recover it. 00:27:25.261 [2024-07-24 18:22:18.038632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.261 [2024-07-24 18:22:18.038642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.261 qpair failed and we were unable to recover it. 00:27:25.261 [2024-07-24 18:22:18.038738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.261 [2024-07-24 18:22:18.038748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.261 qpair failed and we were unable to recover it. 00:27:25.261 [2024-07-24 18:22:18.038904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.261 [2024-07-24 18:22:18.038916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.261 qpair failed and we were unable to recover it. 00:27:25.261 [2024-07-24 18:22:18.038989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.261 [2024-07-24 18:22:18.038999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.261 qpair failed and we were unable to recover it. 00:27:25.261 [2024-07-24 18:22:18.039144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.261 [2024-07-24 18:22:18.039153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.261 qpair failed and we were unable to recover it. 
00:27:25.261 [2024-07-24 18:22:18.039358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.261 [2024-07-24 18:22:18.039367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.261 qpair failed and we were unable to recover it. 00:27:25.261 [2024-07-24 18:22:18.039515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.261 [2024-07-24 18:22:18.039525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.261 qpair failed and we were unable to recover it. 00:27:25.261 [2024-07-24 18:22:18.039688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.261 [2024-07-24 18:22:18.039698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.261 qpair failed and we were unable to recover it. 00:27:25.261 [2024-07-24 18:22:18.039856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.261 [2024-07-24 18:22:18.039866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.261 qpair failed and we were unable to recover it. 00:27:25.261 [2024-07-24 18:22:18.039963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.261 [2024-07-24 18:22:18.039973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.261 qpair failed and we were unable to recover it. 00:27:25.261 [2024-07-24 18:22:18.040052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.261 [2024-07-24 18:22:18.040087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.261 qpair failed and we were unable to recover it. 00:27:25.261 [2024-07-24 18:22:18.040308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.261 [2024-07-24 18:22:18.040337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.261 qpair failed and we were unable to recover it. 00:27:25.261 [2024-07-24 18:22:18.040459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.261 [2024-07-24 18:22:18.040489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.261 qpair failed and we were unable to recover it. 00:27:25.261 [2024-07-24 18:22:18.040676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.261 [2024-07-24 18:22:18.040686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.261 qpair failed and we were unable to recover it. 00:27:25.261 [2024-07-24 18:22:18.040785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.261 [2024-07-24 18:22:18.040794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.261 qpair failed and we were unable to recover it. 
00:27:25.261 [2024-07-24 18:22:18.040890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.261 [2024-07-24 18:22:18.040899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.261 qpair failed and we were unable to recover it. 00:27:25.261 [2024-07-24 18:22:18.040988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.261 [2024-07-24 18:22:18.040997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.261 qpair failed and we were unable to recover it. 00:27:25.261 [2024-07-24 18:22:18.041234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.261 [2024-07-24 18:22:18.041244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.261 qpair failed and we were unable to recover it. 00:27:25.261 [2024-07-24 18:22:18.041336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.261 [2024-07-24 18:22:18.041346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.261 qpair failed and we were unable to recover it. 00:27:25.261 [2024-07-24 18:22:18.041559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.261 [2024-07-24 18:22:18.041591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.261 qpair failed and we were unable to recover it. 00:27:25.261 [2024-07-24 18:22:18.041726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.261 [2024-07-24 18:22:18.041755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.261 qpair failed and we were unable to recover it. 00:27:25.261 [2024-07-24 18:22:18.041936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.261 [2024-07-24 18:22:18.041966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.261 qpair failed and we were unable to recover it. 00:27:25.261 [2024-07-24 18:22:18.042105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.261 [2024-07-24 18:22:18.042136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.261 qpair failed and we were unable to recover it. 00:27:25.262 [2024-07-24 18:22:18.042385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.262 [2024-07-24 18:22:18.042414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.262 qpair failed and we were unable to recover it. 00:27:25.262 [2024-07-24 18:22:18.042534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.262 [2024-07-24 18:22:18.042565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.262 qpair failed and we were unable to recover it. 
00:27:25.262 [2024-07-24 18:22:18.042742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.262 [2024-07-24 18:22:18.042752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.262 qpair failed and we were unable to recover it. 00:27:25.262 [2024-07-24 18:22:18.042893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.262 [2024-07-24 18:22:18.042902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.262 qpair failed and we were unable to recover it. 00:27:25.262 [2024-07-24 18:22:18.042984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.262 [2024-07-24 18:22:18.042993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.262 qpair failed and we were unable to recover it. 00:27:25.262 [2024-07-24 18:22:18.043155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.262 [2024-07-24 18:22:18.043185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.262 qpair failed and we were unable to recover it. 00:27:25.262 [2024-07-24 18:22:18.043346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.262 [2024-07-24 18:22:18.043377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.262 qpair failed and we were unable to recover it. 00:27:25.262 [2024-07-24 18:22:18.043631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.262 [2024-07-24 18:22:18.043662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.262 qpair failed and we were unable to recover it. 00:27:25.262 [2024-07-24 18:22:18.043828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.262 [2024-07-24 18:22:18.043838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.262 qpair failed and we were unable to recover it. 00:27:25.262 [2024-07-24 18:22:18.043920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.262 [2024-07-24 18:22:18.043930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.262 qpair failed and we were unable to recover it. 00:27:25.262 [2024-07-24 18:22:18.044014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.262 [2024-07-24 18:22:18.044024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.262 qpair failed and we were unable to recover it. 00:27:25.262 [2024-07-24 18:22:18.044106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.262 [2024-07-24 18:22:18.044116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.262 qpair failed and we were unable to recover it. 
00:27:25.262 [2024-07-24 18:22:18.044206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.262 [2024-07-24 18:22:18.044215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.262 qpair failed and we were unable to recover it. 00:27:25.262 [2024-07-24 18:22:18.044379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.262 [2024-07-24 18:22:18.044389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.262 qpair failed and we were unable to recover it. 00:27:25.262 [2024-07-24 18:22:18.044540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.262 [2024-07-24 18:22:18.044550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.262 qpair failed and we were unable to recover it. 00:27:25.262 [2024-07-24 18:22:18.044709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.262 [2024-07-24 18:22:18.044719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.262 qpair failed and we were unable to recover it. 00:27:25.262 [2024-07-24 18:22:18.044789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.262 [2024-07-24 18:22:18.044799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.262 qpair failed and we were unable to recover it. 00:27:25.262 [2024-07-24 18:22:18.044871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.262 [2024-07-24 18:22:18.044883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.262 qpair failed and we were unable to recover it. 00:27:25.262 [2024-07-24 18:22:18.045019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.262 [2024-07-24 18:22:18.045028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.262 qpair failed and we were unable to recover it. 00:27:25.262 [2024-07-24 18:22:18.045180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.262 [2024-07-24 18:22:18.045215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.262 qpair failed and we were unable to recover it. 00:27:25.262 [2024-07-24 18:22:18.045422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.262 [2024-07-24 18:22:18.045451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.262 qpair failed and we were unable to recover it. 00:27:25.262 [2024-07-24 18:22:18.045609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.262 [2024-07-24 18:22:18.045640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.262 qpair failed and we were unable to recover it. 
00:27:25.262 [2024-07-24 18:22:18.045784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.262 [2024-07-24 18:22:18.045814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.262 qpair failed and we were unable to recover it. 00:27:25.262 [2024-07-24 18:22:18.045997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.262 [2024-07-24 18:22:18.046027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.262 qpair failed and we were unable to recover it. 00:27:25.262 [2024-07-24 18:22:18.046204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.262 [2024-07-24 18:22:18.046234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.262 qpair failed and we were unable to recover it. 00:27:25.262 [2024-07-24 18:22:18.046462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.262 [2024-07-24 18:22:18.046504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.262 qpair failed and we were unable to recover it. 00:27:25.262 [2024-07-24 18:22:18.046646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.262 [2024-07-24 18:22:18.046656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.262 qpair failed and we were unable to recover it. 00:27:25.262 [2024-07-24 18:22:18.046803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.262 [2024-07-24 18:22:18.046813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.262 qpair failed and we were unable to recover it. 00:27:25.262 [2024-07-24 18:22:18.046898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.262 [2024-07-24 18:22:18.046908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.262 qpair failed and we were unable to recover it. 00:27:25.262 [2024-07-24 18:22:18.046995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.262 [2024-07-24 18:22:18.047006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.262 qpair failed and we were unable to recover it. 00:27:25.262 [2024-07-24 18:22:18.047149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.262 [2024-07-24 18:22:18.047159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.262 qpair failed and we were unable to recover it. 00:27:25.262 [2024-07-24 18:22:18.047317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.262 [2024-07-24 18:22:18.047327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.262 qpair failed and we were unable to recover it. 
00:27:25.262 [2024-07-24 18:22:18.047420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.262 [2024-07-24 18:22:18.047430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.262 qpair failed and we were unable to recover it. 00:27:25.262 [2024-07-24 18:22:18.047591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.262 [2024-07-24 18:22:18.047602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.262 qpair failed and we were unable to recover it. 00:27:25.262 [2024-07-24 18:22:18.047691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.262 [2024-07-24 18:22:18.047701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.262 qpair failed and we were unable to recover it. 00:27:25.262 [2024-07-24 18:22:18.047792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.262 [2024-07-24 18:22:18.047802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.262 qpair failed and we were unable to recover it. 00:27:25.262 [2024-07-24 18:22:18.047886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.262 [2024-07-24 18:22:18.047896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.262 qpair failed and we were unable to recover it. 00:27:25.262 [2024-07-24 18:22:18.048034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.262 [2024-07-24 18:22:18.048044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.263 qpair failed and we were unable to recover it. 00:27:25.263 [2024-07-24 18:22:18.048137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.263 [2024-07-24 18:22:18.048147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.263 qpair failed and we were unable to recover it. 00:27:25.263 [2024-07-24 18:22:18.048218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.263 [2024-07-24 18:22:18.048228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.263 qpair failed and we were unable to recover it. 00:27:25.263 [2024-07-24 18:22:18.048380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.263 [2024-07-24 18:22:18.048390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.263 qpair failed and we were unable to recover it. 00:27:25.263 [2024-07-24 18:22:18.048486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.263 [2024-07-24 18:22:18.048501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.263 qpair failed and we were unable to recover it. 
00:27:25.263 [2024-07-24 18:22:18.048605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.263 [2024-07-24 18:22:18.048615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.263 qpair failed and we were unable to recover it. 00:27:25.263 [2024-07-24 18:22:18.048703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.263 [2024-07-24 18:22:18.048713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.263 qpair failed and we were unable to recover it. 00:27:25.263 [2024-07-24 18:22:18.048798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.263 [2024-07-24 18:22:18.048808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.263 qpair failed and we were unable to recover it. 00:27:25.263 [2024-07-24 18:22:18.048963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.263 [2024-07-24 18:22:18.048993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.263 qpair failed and we were unable to recover it. 00:27:25.263 [2024-07-24 18:22:18.049199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.263 [2024-07-24 18:22:18.049229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.263 qpair failed and we were unable to recover it. 00:27:25.263 [2024-07-24 18:22:18.049415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.263 [2024-07-24 18:22:18.049445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.263 qpair failed and we were unable to recover it. 00:27:25.263 [2024-07-24 18:22:18.049712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.263 [2024-07-24 18:22:18.049721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.263 qpair failed and we were unable to recover it. 00:27:25.263 [2024-07-24 18:22:18.049811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.263 [2024-07-24 18:22:18.049820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.263 qpair failed and we were unable to recover it. 00:27:25.263 [2024-07-24 18:22:18.049907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.263 [2024-07-24 18:22:18.049917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.263 qpair failed and we were unable to recover it. 00:27:25.263 [2024-07-24 18:22:18.049986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.263 [2024-07-24 18:22:18.049996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.263 qpair failed and we were unable to recover it. 
00:27:25.263 [2024-07-24 18:22:18.050134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.263 [2024-07-24 18:22:18.050144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.263 qpair failed and we were unable to recover it. 00:27:25.263 [2024-07-24 18:22:18.050395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.263 [2024-07-24 18:22:18.050405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.263 qpair failed and we were unable to recover it. 00:27:25.263 [2024-07-24 18:22:18.050504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.263 [2024-07-24 18:22:18.050513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.263 qpair failed and we were unable to recover it. 00:27:25.263 [2024-07-24 18:22:18.050668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.263 [2024-07-24 18:22:18.050677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.263 qpair failed and we were unable to recover it. 00:27:25.263 [2024-07-24 18:22:18.050774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.263 [2024-07-24 18:22:18.050784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.263 qpair failed and we were unable to recover it. 00:27:25.263 [2024-07-24 18:22:18.050935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.263 [2024-07-24 18:22:18.050945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.263 qpair failed and we were unable to recover it. 00:27:25.263 [2024-07-24 18:22:18.051054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.263 [2024-07-24 18:22:18.051084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.263 qpair failed and we were unable to recover it. 00:27:25.263 [2024-07-24 18:22:18.051301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.263 [2024-07-24 18:22:18.051336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.263 qpair failed and we were unable to recover it. 00:27:25.263 [2024-07-24 18:22:18.051531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.263 [2024-07-24 18:22:18.051562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.263 qpair failed and we were unable to recover it. 00:27:25.263 [2024-07-24 18:22:18.051689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.263 [2024-07-24 18:22:18.051699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.263 qpair failed and we were unable to recover it. 
00:27:25.263 [2024-07-24 18:22:18.051931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.263 [2024-07-24 18:22:18.051941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.263 qpair failed and we were unable to recover it. 00:27:25.263 [2024-07-24 18:22:18.052184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.263 [2024-07-24 18:22:18.052193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.263 qpair failed and we were unable to recover it. 00:27:25.263 [2024-07-24 18:22:18.052272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.263 [2024-07-24 18:22:18.052281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.263 qpair failed and we were unable to recover it. 00:27:25.263 [2024-07-24 18:22:18.052479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.263 [2024-07-24 18:22:18.052518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.263 qpair failed and we were unable to recover it. 00:27:25.263 [2024-07-24 18:22:18.052749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.263 [2024-07-24 18:22:18.052779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.263 qpair failed and we were unable to recover it. 00:27:25.263 [2024-07-24 18:22:18.052977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.263 [2024-07-24 18:22:18.053007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.263 qpair failed and we were unable to recover it. 00:27:25.263 [2024-07-24 18:22:18.053148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.263 [2024-07-24 18:22:18.053177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.263 qpair failed and we were unable to recover it. 00:27:25.264 [2024-07-24 18:22:18.053305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.264 [2024-07-24 18:22:18.053335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.264 qpair failed and we were unable to recover it. 00:27:25.264 [2024-07-24 18:22:18.053535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.264 [2024-07-24 18:22:18.053566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.264 qpair failed and we were unable to recover it. 00:27:25.264 [2024-07-24 18:22:18.053732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.264 [2024-07-24 18:22:18.053742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.264 qpair failed and we were unable to recover it. 
00:27:25.264 [2024-07-24 18:22:18.053858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.264 [2024-07-24 18:22:18.053867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.264 qpair failed and we were unable to recover it. 00:27:25.264 [2024-07-24 18:22:18.053951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.264 [2024-07-24 18:22:18.053961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.264 qpair failed and we were unable to recover it. 00:27:25.264 [2024-07-24 18:22:18.054043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.264 [2024-07-24 18:22:18.054053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.264 qpair failed and we were unable to recover it. 00:27:25.264 [2024-07-24 18:22:18.054139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.264 [2024-07-24 18:22:18.054148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.264 qpair failed and we were unable to recover it. 00:27:25.264 [2024-07-24 18:22:18.054233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.264 [2024-07-24 18:22:18.054243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.264 qpair failed and we were unable to recover it. 00:27:25.264 [2024-07-24 18:22:18.054385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.264 [2024-07-24 18:22:18.054395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.264 qpair failed and we were unable to recover it. 00:27:25.264 [2024-07-24 18:22:18.054496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.264 [2024-07-24 18:22:18.054541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.264 qpair failed and we were unable to recover it. 00:27:25.264 [2024-07-24 18:22:18.054671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.264 [2024-07-24 18:22:18.054700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.264 qpair failed and we were unable to recover it. 00:27:25.264 [2024-07-24 18:22:18.054908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.264 [2024-07-24 18:22:18.054938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.264 qpair failed and we were unable to recover it. 00:27:25.264 [2024-07-24 18:22:18.055233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.264 [2024-07-24 18:22:18.055263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.264 qpair failed and we were unable to recover it. 
00:27:25.264 [2024-07-24 18:22:18.055484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.264 [2024-07-24 18:22:18.055525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.264 qpair failed and we were unable to recover it. 00:27:25.264 [2024-07-24 18:22:18.055779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.264 [2024-07-24 18:22:18.055809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.264 qpair failed and we were unable to recover it. 00:27:25.264 [2024-07-24 18:22:18.055937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.264 [2024-07-24 18:22:18.055947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.264 qpair failed and we were unable to recover it. 00:27:25.264 [2024-07-24 18:22:18.056202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.264 [2024-07-24 18:22:18.056232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.264 qpair failed and we were unable to recover it. 00:27:25.264 [2024-07-24 18:22:18.056489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.264 [2024-07-24 18:22:18.056530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.264 qpair failed and we were unable to recover it. 00:27:25.264 [2024-07-24 18:22:18.056741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.264 [2024-07-24 18:22:18.056770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.264 qpair failed and we were unable to recover it. 00:27:25.264 [2024-07-24 18:22:18.056967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.264 [2024-07-24 18:22:18.056996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.264 qpair failed and we were unable to recover it. 00:27:25.264 [2024-07-24 18:22:18.057191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.264 [2024-07-24 18:22:18.057221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.264 qpair failed and we were unable to recover it. 00:27:25.264 [2024-07-24 18:22:18.057400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.264 [2024-07-24 18:22:18.057429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.264 qpair failed and we were unable to recover it. 00:27:25.264 [2024-07-24 18:22:18.057630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.264 [2024-07-24 18:22:18.057661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.264 qpair failed and we were unable to recover it. 
00:27:25.264 [2024-07-24 18:22:18.057798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.264 [2024-07-24 18:22:18.057828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.264 qpair failed and we were unable to recover it. 00:27:25.264 [2024-07-24 18:22:18.058012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.264 [2024-07-24 18:22:18.058042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.264 qpair failed and we were unable to recover it. 00:27:25.264 [2024-07-24 18:22:18.058187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.264 [2024-07-24 18:22:18.058217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.264 qpair failed and we were unable to recover it. 00:27:25.264 [2024-07-24 18:22:18.058432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.264 [2024-07-24 18:22:18.058462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.264 qpair failed and we were unable to recover it. 00:27:25.264 [2024-07-24 18:22:18.058680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.264 [2024-07-24 18:22:18.058711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.264 qpair failed and we were unable to recover it. 00:27:25.264 [2024-07-24 18:22:18.058832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.264 [2024-07-24 18:22:18.058862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.264 qpair failed and we were unable to recover it. 00:27:25.264 [2024-07-24 18:22:18.059077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.264 [2024-07-24 18:22:18.059107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.264 qpair failed and we were unable to recover it. 00:27:25.264 [2024-07-24 18:22:18.059309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.264 [2024-07-24 18:22:18.059344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.264 qpair failed and we were unable to recover it. 00:27:25.264 [2024-07-24 18:22:18.059550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.264 [2024-07-24 18:22:18.059581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.264 qpair failed and we were unable to recover it. 00:27:25.264 [2024-07-24 18:22:18.059755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.264 [2024-07-24 18:22:18.059785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.264 qpair failed and we were unable to recover it. 
00:27:25.264 [2024-07-24 18:22:18.059979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.264 [2024-07-24 18:22:18.060008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.264 qpair failed and we were unable to recover it. 00:27:25.264 [2024-07-24 18:22:18.060200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.264 [2024-07-24 18:22:18.060230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.264 qpair failed and we were unable to recover it. 00:27:25.264 [2024-07-24 18:22:18.060342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.264 [2024-07-24 18:22:18.060372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.264 qpair failed and we were unable to recover it. 00:27:25.264 [2024-07-24 18:22:18.060565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.264 [2024-07-24 18:22:18.060596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.264 qpair failed and we were unable to recover it. 00:27:25.264 [2024-07-24 18:22:18.060775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.264 [2024-07-24 18:22:18.060804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.264 qpair failed and we were unable to recover it. 00:27:25.265 [2024-07-24 18:22:18.061062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.265 [2024-07-24 18:22:18.061072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.265 qpair failed and we were unable to recover it. 00:27:25.265 [2024-07-24 18:22:18.061228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.265 [2024-07-24 18:22:18.061258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.265 qpair failed and we were unable to recover it. 00:27:25.265 [2024-07-24 18:22:18.061456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.265 [2024-07-24 18:22:18.061485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.265 qpair failed and we were unable to recover it. 00:27:25.265 [2024-07-24 18:22:18.061723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.265 [2024-07-24 18:22:18.061753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.265 qpair failed and we were unable to recover it. 00:27:25.265 [2024-07-24 18:22:18.061963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.265 [2024-07-24 18:22:18.061993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.265 qpair failed and we were unable to recover it. 
00:27:25.265 [2024-07-24 18:22:18.062228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.265 [2024-07-24 18:22:18.062258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.265 qpair failed and we were unable to recover it. 00:27:25.265 [2024-07-24 18:22:18.062452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.265 [2024-07-24 18:22:18.062482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.265 qpair failed and we were unable to recover it. 00:27:25.265 [2024-07-24 18:22:18.062675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.265 [2024-07-24 18:22:18.062705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.265 qpair failed and we were unable to recover it. 00:27:25.265 [2024-07-24 18:22:18.062898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.265 [2024-07-24 18:22:18.062907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.265 qpair failed and we were unable to recover it. 00:27:25.265 [2024-07-24 18:22:18.063056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.265 [2024-07-24 18:22:18.063085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.265 qpair failed and we were unable to recover it. 00:27:25.265 [2024-07-24 18:22:18.063308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.265 [2024-07-24 18:22:18.063338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.265 qpair failed and we were unable to recover it. 00:27:25.265 [2024-07-24 18:22:18.063602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.265 [2024-07-24 18:22:18.063634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.265 qpair failed and we were unable to recover it. 00:27:25.265 [2024-07-24 18:22:18.063911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.265 [2024-07-24 18:22:18.063940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.265 qpair failed and we were unable to recover it. 00:27:25.265 [2024-07-24 18:22:18.064140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.265 [2024-07-24 18:22:18.064170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.265 qpair failed and we were unable to recover it. 00:27:25.265 [2024-07-24 18:22:18.064391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.265 [2024-07-24 18:22:18.064420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.265 qpair failed and we were unable to recover it. 
00:27:25.265 [2024-07-24 18:22:18.064615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.265 [2024-07-24 18:22:18.064625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.265 qpair failed and we were unable to recover it. 00:27:25.265 [2024-07-24 18:22:18.064801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.265 [2024-07-24 18:22:18.064831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.265 qpair failed and we were unable to recover it. 00:27:25.265 [2024-07-24 18:22:18.065111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.265 [2024-07-24 18:22:18.065141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.265 qpair failed and we were unable to recover it. 00:27:25.265 [2024-07-24 18:22:18.065282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.265 [2024-07-24 18:22:18.065311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.265 qpair failed and we were unable to recover it. 00:27:25.265 [2024-07-24 18:22:18.065428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.265 [2024-07-24 18:22:18.065458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.265 qpair failed and we were unable to recover it. 00:27:25.265 [2024-07-24 18:22:18.065716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.265 [2024-07-24 18:22:18.065748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.265 qpair failed and we were unable to recover it. 00:27:25.265 [2024-07-24 18:22:18.065885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.265 [2024-07-24 18:22:18.065895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.265 qpair failed and we were unable to recover it. 00:27:25.265 [2024-07-24 18:22:18.065989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.265 [2024-07-24 18:22:18.065999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.265 qpair failed and we were unable to recover it. 00:27:25.265 [2024-07-24 18:22:18.066151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.265 [2024-07-24 18:22:18.066180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.265 qpair failed and we were unable to recover it. 00:27:25.265 [2024-07-24 18:22:18.066363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.265 [2024-07-24 18:22:18.066393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.265 qpair failed and we were unable to recover it. 
00:27:25.265 [2024-07-24 18:22:18.066641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.265 [2024-07-24 18:22:18.066671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.265 qpair failed and we were unable to recover it. 00:27:25.265 [2024-07-24 18:22:18.066864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.265 [2024-07-24 18:22:18.066874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.265 qpair failed and we were unable to recover it. 00:27:25.265 [2024-07-24 18:22:18.067105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.265 [2024-07-24 18:22:18.067115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.265 qpair failed and we were unable to recover it. 00:27:25.265 [2024-07-24 18:22:18.067194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.265 [2024-07-24 18:22:18.067204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.265 qpair failed and we were unable to recover it. 00:27:25.265 [2024-07-24 18:22:18.067308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.265 [2024-07-24 18:22:18.067318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.265 qpair failed and we were unable to recover it. 00:27:25.265 [2024-07-24 18:22:18.067413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.265 [2024-07-24 18:22:18.067423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.265 qpair failed and we were unable to recover it. 00:27:25.265 [2024-07-24 18:22:18.067506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.265 [2024-07-24 18:22:18.067516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.265 qpair failed and we were unable to recover it. 00:27:25.265 [2024-07-24 18:22:18.067729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.265 [2024-07-24 18:22:18.067765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.265 qpair failed and we were unable to recover it. 00:27:25.265 [2024-07-24 18:22:18.067882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.265 [2024-07-24 18:22:18.067912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.265 qpair failed and we were unable to recover it. 00:27:25.265 [2024-07-24 18:22:18.068035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.265 [2024-07-24 18:22:18.068064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.265 qpair failed and we were unable to recover it. 
00:27:25.265 [2024-07-24 18:22:18.068174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.265 [2024-07-24 18:22:18.068203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.265 qpair failed and we were unable to recover it. 00:27:25.265 [2024-07-24 18:22:18.068477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.265 [2024-07-24 18:22:18.068516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.265 qpair failed and we were unable to recover it. 00:27:25.265 [2024-07-24 18:22:18.068642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.265 [2024-07-24 18:22:18.068672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.266 qpair failed and we were unable to recover it. 00:27:25.266 [2024-07-24 18:22:18.068854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.266 [2024-07-24 18:22:18.068864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.266 qpair failed and we were unable to recover it. 00:27:25.266 [2024-07-24 18:22:18.069077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.266 [2024-07-24 18:22:18.069087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.266 qpair failed and we were unable to recover it. 00:27:25.266 [2024-07-24 18:22:18.069198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.266 [2024-07-24 18:22:18.069208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.266 qpair failed and we were unable to recover it. 00:27:25.266 [2024-07-24 18:22:18.069343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.266 [2024-07-24 18:22:18.069352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.266 qpair failed and we were unable to recover it. 00:27:25.266 [2024-07-24 18:22:18.069537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.266 [2024-07-24 18:22:18.069568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.266 qpair failed and we were unable to recover it. 00:27:25.266 [2024-07-24 18:22:18.069701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.266 [2024-07-24 18:22:18.069731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.266 qpair failed and we were unable to recover it. 00:27:25.266 [2024-07-24 18:22:18.069860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.266 [2024-07-24 18:22:18.069889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.266 qpair failed and we were unable to recover it. 
00:27:25.266 [2024-07-24 18:22:18.069984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.266 [2024-07-24 18:22:18.069995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.266 qpair failed and we were unable to recover it. 00:27:25.266 [2024-07-24 18:22:18.070233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.266 [2024-07-24 18:22:18.070243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.266 qpair failed and we were unable to recover it. 00:27:25.266 [2024-07-24 18:22:18.070465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.266 [2024-07-24 18:22:18.070503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.266 qpair failed and we were unable to recover it. 00:27:25.266 [2024-07-24 18:22:18.070697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.266 [2024-07-24 18:22:18.070727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.266 qpair failed and we were unable to recover it. 00:27:25.266 [2024-07-24 18:22:18.071006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.266 [2024-07-24 18:22:18.071036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.266 qpair failed and we were unable to recover it. 00:27:25.266 [2024-07-24 18:22:18.071167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.266 [2024-07-24 18:22:18.071177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.266 qpair failed and we were unable to recover it. 00:27:25.266 [2024-07-24 18:22:18.071410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.266 [2024-07-24 18:22:18.071440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.266 qpair failed and we were unable to recover it. 00:27:25.266 [2024-07-24 18:22:18.071641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.266 [2024-07-24 18:22:18.071672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.266 qpair failed and we were unable to recover it. 00:27:25.266 [2024-07-24 18:22:18.071815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.266 [2024-07-24 18:22:18.071845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.266 qpair failed and we were unable to recover it. 00:27:25.266 [2024-07-24 18:22:18.072095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.266 [2024-07-24 18:22:18.072105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.266 qpair failed and we were unable to recover it. 
00:27:25.266 [2024-07-24 18:22:18.072193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.266 [2024-07-24 18:22:18.072203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.266 qpair failed and we were unable to recover it. 00:27:25.266 [2024-07-24 18:22:18.072291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.266 [2024-07-24 18:22:18.072301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.266 qpair failed and we were unable to recover it. 00:27:25.266 [2024-07-24 18:22:18.072408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.266 [2024-07-24 18:22:18.072438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.266 qpair failed and we were unable to recover it. 00:27:25.266 [2024-07-24 18:22:18.072675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.266 [2024-07-24 18:22:18.072707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.266 qpair failed and we were unable to recover it. 00:27:25.266 [2024-07-24 18:22:18.072914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.266 [2024-07-24 18:22:18.072944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.266 qpair failed and we were unable to recover it. 00:27:25.266 [2024-07-24 18:22:18.073125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.266 [2024-07-24 18:22:18.073155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.266 qpair failed and we were unable to recover it. 00:27:25.266 [2024-07-24 18:22:18.073359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.266 [2024-07-24 18:22:18.073388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.266 qpair failed and we were unable to recover it. 00:27:25.266 [2024-07-24 18:22:18.073535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.266 [2024-07-24 18:22:18.073545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.266 qpair failed and we were unable to recover it. 00:27:25.266 [2024-07-24 18:22:18.073778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.266 [2024-07-24 18:22:18.073788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.266 qpair failed and we were unable to recover it. 00:27:25.266 [2024-07-24 18:22:18.073966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.266 [2024-07-24 18:22:18.073976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.266 qpair failed and we were unable to recover it. 
00:27:25.266 [2024-07-24 18:22:18.074206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.266 [2024-07-24 18:22:18.074215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.266 qpair failed and we were unable to recover it. 00:27:25.266 [2024-07-24 18:22:18.074372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.266 [2024-07-24 18:22:18.074401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.266 qpair failed and we were unable to recover it. 00:27:25.266 [2024-07-24 18:22:18.074666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.266 [2024-07-24 18:22:18.074697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.266 qpair failed and we were unable to recover it. 00:27:25.266 [2024-07-24 18:22:18.074892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.266 [2024-07-24 18:22:18.074922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.266 qpair failed and we were unable to recover it. 00:27:25.266 [2024-07-24 18:22:18.075087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.266 [2024-07-24 18:22:18.075096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.266 qpair failed and we were unable to recover it. 00:27:25.266 [2024-07-24 18:22:18.075322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.266 [2024-07-24 18:22:18.075352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.266 qpair failed and we were unable to recover it. 00:27:25.266 [2024-07-24 18:22:18.075606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.266 [2024-07-24 18:22:18.075637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.266 qpair failed and we were unable to recover it. 00:27:25.266 [2024-07-24 18:22:18.075832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.266 [2024-07-24 18:22:18.075867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.266 qpair failed and we were unable to recover it. 00:27:25.266 [2024-07-24 18:22:18.076008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.266 [2024-07-24 18:22:18.076037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.266 qpair failed and we were unable to recover it. 00:27:25.266 [2024-07-24 18:22:18.076237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.266 [2024-07-24 18:22:18.076267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.266 qpair failed and we were unable to recover it. 
00:27:25.266 [2024-07-24 18:22:18.076518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.267 [2024-07-24 18:22:18.076549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.267 qpair failed and we were unable to recover it. 00:27:25.267 [2024-07-24 18:22:18.076650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.267 [2024-07-24 18:22:18.076660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.267 qpair failed and we were unable to recover it. 00:27:25.267 [2024-07-24 18:22:18.076849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.267 [2024-07-24 18:22:18.076879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.267 qpair failed and we were unable to recover it. 00:27:25.267 [2024-07-24 18:22:18.077067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.267 [2024-07-24 18:22:18.077097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.267 qpair failed and we were unable to recover it. 00:27:25.267 [2024-07-24 18:22:18.077324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.267 [2024-07-24 18:22:18.077354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.267 qpair failed and we were unable to recover it. 00:27:25.267 [2024-07-24 18:22:18.077555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.267 [2024-07-24 18:22:18.077586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.267 qpair failed and we were unable to recover it. 00:27:25.267 [2024-07-24 18:22:18.077793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.267 [2024-07-24 18:22:18.077823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.267 qpair failed and we were unable to recover it. 00:27:25.267 [2024-07-24 18:22:18.078075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.267 [2024-07-24 18:22:18.078104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.267 qpair failed and we were unable to recover it. 00:27:25.267 [2024-07-24 18:22:18.078256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.267 [2024-07-24 18:22:18.078285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.267 qpair failed and we were unable to recover it. 00:27:25.267 [2024-07-24 18:22:18.078429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.267 [2024-07-24 18:22:18.078459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.267 qpair failed and we were unable to recover it. 
00:27:25.267 [2024-07-24 18:22:18.078663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.267 [2024-07-24 18:22:18.078673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.267 qpair failed and we were unable to recover it. 00:27:25.267 [2024-07-24 18:22:18.078831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.267 [2024-07-24 18:22:18.078841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.267 qpair failed and we were unable to recover it. 00:27:25.267 [2024-07-24 18:22:18.079105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.267 [2024-07-24 18:22:18.079134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.267 qpair failed and we were unable to recover it. 00:27:25.267 [2024-07-24 18:22:18.079380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.267 [2024-07-24 18:22:18.079409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.267 qpair failed and we were unable to recover it. 00:27:25.267 [2024-07-24 18:22:18.079596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.267 [2024-07-24 18:22:18.079627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.267 qpair failed and we were unable to recover it. 00:27:25.267 [2024-07-24 18:22:18.079760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.267 [2024-07-24 18:22:18.079789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.267 qpair failed and we were unable to recover it. 00:27:25.267 [2024-07-24 18:22:18.080092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.267 [2024-07-24 18:22:18.080122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.267 qpair failed and we were unable to recover it. 00:27:25.267 [2024-07-24 18:22:18.080317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.267 [2024-07-24 18:22:18.080346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.267 qpair failed and we were unable to recover it. 00:27:25.267 [2024-07-24 18:22:18.080623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.267 [2024-07-24 18:22:18.080654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.267 qpair failed and we were unable to recover it. 00:27:25.267 [2024-07-24 18:22:18.080802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.267 [2024-07-24 18:22:18.080834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.267 qpair failed and we were unable to recover it. 
00:27:25.267 [2024-07-24 18:22:18.080970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.267 [2024-07-24 18:22:18.081000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.267 qpair failed and we were unable to recover it. 00:27:25.267 [2024-07-24 18:22:18.081231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.267 [2024-07-24 18:22:18.081251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.267 qpair failed and we were unable to recover it. 00:27:25.267 [2024-07-24 18:22:18.081505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.267 [2024-07-24 18:22:18.081545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.267 qpair failed and we were unable to recover it. 00:27:25.267 [2024-07-24 18:22:18.081680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.267 [2024-07-24 18:22:18.081709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.267 qpair failed and we were unable to recover it. 00:27:25.267 [2024-07-24 18:22:18.081917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.267 [2024-07-24 18:22:18.081946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.267 qpair failed and we were unable to recover it. 00:27:25.267 [2024-07-24 18:22:18.082188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.267 [2024-07-24 18:22:18.082217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.267 qpair failed and we were unable to recover it. 00:27:25.267 [2024-07-24 18:22:18.082421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.267 [2024-07-24 18:22:18.082450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.267 qpair failed and we were unable to recover it. 00:27:25.267 [2024-07-24 18:22:18.082644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.267 [2024-07-24 18:22:18.082674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.267 qpair failed and we were unable to recover it. 00:27:25.267 [2024-07-24 18:22:18.082873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.267 [2024-07-24 18:22:18.082902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.267 qpair failed and we were unable to recover it. 00:27:25.267 [2024-07-24 18:22:18.083080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.267 [2024-07-24 18:22:18.083089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.267 qpair failed and we were unable to recover it. 
00:27:25.267 [2024-07-24 18:22:18.083263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.267 [2024-07-24 18:22:18.083293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.267 qpair failed and we were unable to recover it. 00:27:25.267 [2024-07-24 18:22:18.083509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.268 [2024-07-24 18:22:18.083540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.268 qpair failed and we were unable to recover it. 00:27:25.268 [2024-07-24 18:22:18.083752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.268 [2024-07-24 18:22:18.083762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.268 qpair failed and we were unable to recover it. 00:27:25.268 [2024-07-24 18:22:18.084018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.268 [2024-07-24 18:22:18.084038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.268 qpair failed and we were unable to recover it. 00:27:25.268 [2024-07-24 18:22:18.084204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.268 [2024-07-24 18:22:18.084214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.268 qpair failed and we were unable to recover it. 00:27:25.268 [2024-07-24 18:22:18.084374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.268 [2024-07-24 18:22:18.084383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.268 qpair failed and we were unable to recover it. 00:27:25.268 [2024-07-24 18:22:18.084543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.268 [2024-07-24 18:22:18.084573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.268 qpair failed and we were unable to recover it. 00:27:25.268 [2024-07-24 18:22:18.084720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.268 [2024-07-24 18:22:18.084755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.268 qpair failed and we were unable to recover it. 00:27:25.268 [2024-07-24 18:22:18.084944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.268 [2024-07-24 18:22:18.084973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.268 qpair failed and we were unable to recover it. 00:27:25.268 [2024-07-24 18:22:18.085239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.268 [2024-07-24 18:22:18.085249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.268 qpair failed and we were unable to recover it. 
00:27:25.268 [2024-07-24 18:22:18.085346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.268 [2024-07-24 18:22:18.085375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.268 qpair failed and we were unable to recover it. 00:27:25.268 [2024-07-24 18:22:18.085558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.268 [2024-07-24 18:22:18.085589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.268 qpair failed and we were unable to recover it. 00:27:25.268 [2024-07-24 18:22:18.085804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.268 [2024-07-24 18:22:18.085834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.268 qpair failed and we were unable to recover it. 00:27:25.268 [2024-07-24 18:22:18.086108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.268 [2024-07-24 18:22:18.086137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.268 qpair failed and we were unable to recover it. 00:27:25.268 [2024-07-24 18:22:18.086417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.268 [2024-07-24 18:22:18.086448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.268 qpair failed and we were unable to recover it. 00:27:25.268 [2024-07-24 18:22:18.086583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.268 [2024-07-24 18:22:18.086613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.268 qpair failed and we were unable to recover it. 00:27:25.268 [2024-07-24 18:22:18.086754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.268 [2024-07-24 18:22:18.086784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.268 qpair failed and we were unable to recover it. 00:27:25.268 [2024-07-24 18:22:18.086915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.268 [2024-07-24 18:22:18.086945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.268 qpair failed and we were unable to recover it. 00:27:25.268 [2024-07-24 18:22:18.087075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.268 [2024-07-24 18:22:18.087085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.268 qpair failed and we were unable to recover it. 00:27:25.268 [2024-07-24 18:22:18.087176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.268 [2024-07-24 18:22:18.087186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.268 qpair failed and we were unable to recover it. 
00:27:25.268 [2024-07-24 18:22:18.087418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.268 [2024-07-24 18:22:18.087428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.268 qpair failed and we were unable to recover it. 00:27:25.268 [2024-07-24 18:22:18.087515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.268 [2024-07-24 18:22:18.087525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.268 qpair failed and we were unable to recover it. 00:27:25.268 [2024-07-24 18:22:18.087781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.268 [2024-07-24 18:22:18.087811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.268 qpair failed and we were unable to recover it. 00:27:25.268 [2024-07-24 18:22:18.088013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.268 [2024-07-24 18:22:18.088042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.268 qpair failed and we were unable to recover it. 00:27:25.268 [2024-07-24 18:22:18.088240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.268 [2024-07-24 18:22:18.088270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.268 qpair failed and we were unable to recover it. 00:27:25.268 [2024-07-24 18:22:18.088380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.268 [2024-07-24 18:22:18.088409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.268 qpair failed and we were unable to recover it. 00:27:25.268 [2024-07-24 18:22:18.088625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.268 [2024-07-24 18:22:18.088657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.268 qpair failed and we were unable to recover it. 00:27:25.268 [2024-07-24 18:22:18.088862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.268 [2024-07-24 18:22:18.088891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.268 qpair failed and we were unable to recover it. 00:27:25.268 [2024-07-24 18:22:18.089030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.268 [2024-07-24 18:22:18.089059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.268 qpair failed and we were unable to recover it. 00:27:25.268 [2024-07-24 18:22:18.089173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.268 [2024-07-24 18:22:18.089203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.268 qpair failed and we were unable to recover it. 
00:27:25.268 [2024-07-24 18:22:18.089331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.268 [2024-07-24 18:22:18.089360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.268 qpair failed and we were unable to recover it. 00:27:25.268 [2024-07-24 18:22:18.089512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.268 [2024-07-24 18:22:18.089544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.268 qpair failed and we were unable to recover it. 00:27:25.268 [2024-07-24 18:22:18.089800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.268 [2024-07-24 18:22:18.089830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.268 qpair failed and we were unable to recover it. 00:27:25.268 [2024-07-24 18:22:18.090030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.268 [2024-07-24 18:22:18.090060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.268 qpair failed and we were unable to recover it. 00:27:25.268 [2024-07-24 18:22:18.090289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.268 [2024-07-24 18:22:18.090358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:25.268 qpair failed and we were unable to recover it. 00:27:25.268 [2024-07-24 18:22:18.090563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.268 [2024-07-24 18:22:18.090625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7834000b90 with addr=10.0.0.2, port=4420 00:27:25.268 qpair failed and we were unable to recover it. 00:27:25.268 [2024-07-24 18:22:18.090880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.268 [2024-07-24 18:22:18.090896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7834000b90 with addr=10.0.0.2, port=4420 00:27:25.268 qpair failed and we were unable to recover it. 00:27:25.268 [2024-07-24 18:22:18.090997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.268 [2024-07-24 18:22:18.091008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.268 qpair failed and we were unable to recover it. 00:27:25.268 [2024-07-24 18:22:18.091090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.268 [2024-07-24 18:22:18.091099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.268 qpair failed and we were unable to recover it. 00:27:25.269 [2024-07-24 18:22:18.091254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.269 [2024-07-24 18:22:18.091283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.269 qpair failed and we were unable to recover it. 
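Every entry above reports errno = 111, which on Linux is ECONNREFUSED: the peer at 10.0.0.2:4420 (4420 is the IANA-registered NVMe/TCP port) actively refused the TCP connection, typically because nothing is listening there yet, so posix_sock_create fails and nvme_tcp_qpair_connect_sock reports the qpair as unrecoverable. The minimal C sketch below is illustrative only, not SPDK code; the address and port are copied from the log, and pointing it at an unused port on 127.0.0.1 reproduces ECONNREFUSED reliably.

/* Illustrative sketch only -- not SPDK code. Shows how a plain TCP
 * connect() to a port with no listener surfaces errno 111
 * (ECONNREFUSED), the errno reported by posix_sock_create above.
 * Address and port mirror the log; use 127.0.0.1 and an unused
 * local port to reproduce deterministically. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in sa = {0};
    sa.sin_family = AF_INET;
    sa.sin_port = htons(4420);                  /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
        /* With no listener on the target, errno is ECONNREFUSED (111 on Linux). */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}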
00:27:25.269 [2024-07-24 18:22:18.091417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.269 [2024-07-24 18:22:18.091447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.269 qpair failed and we were unable to recover it. 00:27:25.269 [2024-07-24 18:22:18.091708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.269 [2024-07-24 18:22:18.091738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.269 qpair failed and we were unable to recover it. 00:27:25.269 [2024-07-24 18:22:18.091840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.269 [2024-07-24 18:22:18.091850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.269 qpair failed and we were unable to recover it. 00:27:25.269 [2024-07-24 18:22:18.092070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.269 [2024-07-24 18:22:18.092099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.269 qpair failed and we were unable to recover it. 00:27:25.269 [2024-07-24 18:22:18.092297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.269 [2024-07-24 18:22:18.092327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.269 qpair failed and we were unable to recover it. 00:27:25.269 [2024-07-24 18:22:18.092579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.269 [2024-07-24 18:22:18.092608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.269 qpair failed and we were unable to recover it. 00:27:25.269 [2024-07-24 18:22:18.092767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.269 [2024-07-24 18:22:18.092777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.269 qpair failed and we were unable to recover it. 00:27:25.269 [2024-07-24 18:22:18.092920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.269 [2024-07-24 18:22:18.092956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.269 qpair failed and we were unable to recover it. 00:27:25.269 [2024-07-24 18:22:18.093139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.269 [2024-07-24 18:22:18.093169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.269 qpair failed and we were unable to recover it. 00:27:25.269 [2024-07-24 18:22:18.093446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.269 [2024-07-24 18:22:18.093475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.269 qpair failed and we were unable to recover it. 
00:27:25.269 [2024-07-24 18:22:18.093687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.269 [2024-07-24 18:22:18.093717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.269 qpair failed and we were unable to recover it. 00:27:25.269 [2024-07-24 18:22:18.093912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.269 [2024-07-24 18:22:18.093942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.269 qpair failed and we were unable to recover it. 00:27:25.269 [2024-07-24 18:22:18.094111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.269 [2024-07-24 18:22:18.094121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.269 qpair failed and we were unable to recover it. 00:27:25.269 [2024-07-24 18:22:18.094200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.269 [2024-07-24 18:22:18.094210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.269 qpair failed and we were unable to recover it. 00:27:25.269 [2024-07-24 18:22:18.094305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.269 [2024-07-24 18:22:18.094314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.269 qpair failed and we were unable to recover it. 00:27:25.269 [2024-07-24 18:22:18.094522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.269 [2024-07-24 18:22:18.094553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.269 qpair failed and we were unable to recover it. 00:27:25.269 [2024-07-24 18:22:18.094737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.269 [2024-07-24 18:22:18.094767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.269 qpair failed and we were unable to recover it. 00:27:25.269 [2024-07-24 18:22:18.095019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.269 [2024-07-24 18:22:18.095049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.269 qpair failed and we were unable to recover it. 00:27:25.269 [2024-07-24 18:22:18.095332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.269 [2024-07-24 18:22:18.095361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.269 qpair failed and we were unable to recover it. 00:27:25.269 [2024-07-24 18:22:18.095561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.269 [2024-07-24 18:22:18.095592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.269 qpair failed and we were unable to recover it. 
00:27:25.269 [2024-07-24 18:22:18.095818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.269 [2024-07-24 18:22:18.095847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.269 qpair failed and we were unable to recover it. 00:27:25.269 [2024-07-24 18:22:18.095999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.269 [2024-07-24 18:22:18.096009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.269 qpair failed and we were unable to recover it. 00:27:25.269 [2024-07-24 18:22:18.096158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.269 [2024-07-24 18:22:18.096168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.269 qpair failed and we were unable to recover it. 00:27:25.269 [2024-07-24 18:22:18.096317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.269 [2024-07-24 18:22:18.096347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.269 qpair failed and we were unable to recover it. 00:27:25.269 [2024-07-24 18:22:18.096555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.269 [2024-07-24 18:22:18.096586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.269 qpair failed and we were unable to recover it. 00:27:25.269 [2024-07-24 18:22:18.096718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.269 [2024-07-24 18:22:18.096748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.269 qpair failed and we were unable to recover it. 00:27:25.269 [2024-07-24 18:22:18.097004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.269 [2024-07-24 18:22:18.097014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.269 qpair failed and we were unable to recover it. 00:27:25.269 [2024-07-24 18:22:18.097152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.269 [2024-07-24 18:22:18.097162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.269 qpair failed and we were unable to recover it. 00:27:25.269 [2024-07-24 18:22:18.097248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.269 [2024-07-24 18:22:18.097257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.269 qpair failed and we were unable to recover it. 00:27:25.269 [2024-07-24 18:22:18.097439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.269 [2024-07-24 18:22:18.097448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.269 qpair failed and we were unable to recover it. 
00:27:25.269 [2024-07-24 18:22:18.097539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.269 [2024-07-24 18:22:18.097549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.269 qpair failed and we were unable to recover it. 00:27:25.269 [2024-07-24 18:22:18.097801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.269 [2024-07-24 18:22:18.097831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.269 qpair failed and we were unable to recover it. 00:27:25.269 [2024-07-24 18:22:18.097947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.269 [2024-07-24 18:22:18.097977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.269 qpair failed and we were unable to recover it. 00:27:25.269 [2024-07-24 18:22:18.098094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.269 [2024-07-24 18:22:18.098124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.269 qpair failed and we were unable to recover it. 00:27:25.269 [2024-07-24 18:22:18.098328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.269 [2024-07-24 18:22:18.098359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.269 qpair failed and we were unable to recover it. 00:27:25.269 [2024-07-24 18:22:18.098577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.269 [2024-07-24 18:22:18.098609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.269 qpair failed and we were unable to recover it. 00:27:25.269 [2024-07-24 18:22:18.098797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.269 [2024-07-24 18:22:18.098826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.270 qpair failed and we were unable to recover it. 00:27:25.270 [2024-07-24 18:22:18.098931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.270 [2024-07-24 18:22:18.098961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.270 qpair failed and we were unable to recover it. 00:27:25.270 [2024-07-24 18:22:18.099095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.270 [2024-07-24 18:22:18.099125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.270 qpair failed and we were unable to recover it. 00:27:25.270 [2024-07-24 18:22:18.099331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.270 [2024-07-24 18:22:18.099361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.270 qpair failed and we were unable to recover it. 
00:27:25.270 [2024-07-24 18:22:18.099564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.270 [2024-07-24 18:22:18.099595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.270 qpair failed and we were unable to recover it. 00:27:25.270 [2024-07-24 18:22:18.099706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.270 [2024-07-24 18:22:18.099716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.270 qpair failed and we were unable to recover it. 00:27:25.270 [2024-07-24 18:22:18.099859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.270 [2024-07-24 18:22:18.099869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.270 qpair failed and we were unable to recover it. 00:27:25.270 [2024-07-24 18:22:18.100008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.270 [2024-07-24 18:22:18.100038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.270 qpair failed and we were unable to recover it. 00:27:25.270 [2024-07-24 18:22:18.100155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.270 [2024-07-24 18:22:18.100185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.270 qpair failed and we were unable to recover it. 00:27:25.270 [2024-07-24 18:22:18.100302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.270 [2024-07-24 18:22:18.100332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.270 qpair failed and we were unable to recover it. 00:27:25.270 [2024-07-24 18:22:18.100532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.270 [2024-07-24 18:22:18.100563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.270 qpair failed and we were unable to recover it. 00:27:25.270 [2024-07-24 18:22:18.100771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.270 [2024-07-24 18:22:18.100782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.270 qpair failed and we were unable to recover it. 00:27:25.270 [2024-07-24 18:22:18.100929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.270 [2024-07-24 18:22:18.100938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.270 qpair failed and we were unable to recover it. 00:27:25.270 [2024-07-24 18:22:18.101102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.270 [2024-07-24 18:22:18.101132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.270 qpair failed and we were unable to recover it. 
00:27:25.270 [2024-07-24 18:22:18.101369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.270 [2024-07-24 18:22:18.101399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.270 qpair failed and we were unable to recover it. 00:27:25.270 [2024-07-24 18:22:18.101545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.270 [2024-07-24 18:22:18.101577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.270 qpair failed and we were unable to recover it. 00:27:25.270 [2024-07-24 18:22:18.101768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.270 [2024-07-24 18:22:18.101798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.270 qpair failed and we were unable to recover it. 00:27:25.270 [2024-07-24 18:22:18.102070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.270 [2024-07-24 18:22:18.102100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.270 qpair failed and we were unable to recover it. 00:27:25.270 [2024-07-24 18:22:18.102279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.270 [2024-07-24 18:22:18.102309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.270 qpair failed and we were unable to recover it. 00:27:25.270 [2024-07-24 18:22:18.102434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.270 [2024-07-24 18:22:18.102463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.270 qpair failed and we were unable to recover it. 00:27:25.270 [2024-07-24 18:22:18.102610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.270 [2024-07-24 18:22:18.102641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.270 qpair failed and we were unable to recover it. 00:27:25.270 [2024-07-24 18:22:18.102765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.270 [2024-07-24 18:22:18.102795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.270 qpair failed and we were unable to recover it. 00:27:25.270 [2024-07-24 18:22:18.103092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.270 [2024-07-24 18:22:18.103122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.270 qpair failed and we were unable to recover it. 00:27:25.270 [2024-07-24 18:22:18.103303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.270 [2024-07-24 18:22:18.103333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.270 qpair failed and we were unable to recover it. 
00:27:25.270 [2024-07-24 18:22:18.103536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.270 [2024-07-24 18:22:18.103567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.270 qpair failed and we were unable to recover it. 00:27:25.270 [2024-07-24 18:22:18.103702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.270 [2024-07-24 18:22:18.103732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.270 qpair failed and we were unable to recover it. 00:27:25.270 [2024-07-24 18:22:18.103883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.270 [2024-07-24 18:22:18.103892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.270 qpair failed and we were unable to recover it. 00:27:25.270 [2024-07-24 18:22:18.104129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.270 [2024-07-24 18:22:18.104138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.270 qpair failed and we were unable to recover it. 00:27:25.270 [2024-07-24 18:22:18.104298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.270 [2024-07-24 18:22:18.104308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.270 qpair failed and we were unable to recover it. 00:27:25.270 [2024-07-24 18:22:18.104546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.270 [2024-07-24 18:22:18.104556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.270 qpair failed and we were unable to recover it. 00:27:25.270 [2024-07-24 18:22:18.104650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.270 [2024-07-24 18:22:18.104660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.270 qpair failed and we were unable to recover it. 00:27:25.270 [2024-07-24 18:22:18.104824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.270 [2024-07-24 18:22:18.104854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.270 qpair failed and we were unable to recover it. 00:27:25.270 [2024-07-24 18:22:18.104994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.270 [2024-07-24 18:22:18.105025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.270 qpair failed and we were unable to recover it. 00:27:25.270 [2024-07-24 18:22:18.105225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.270 [2024-07-24 18:22:18.105255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.270 qpair failed and we were unable to recover it. 
00:27:25.270 [2024-07-24 18:22:18.105524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.270 [2024-07-24 18:22:18.105555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.270 qpair failed and we were unable to recover it. 00:27:25.270 [2024-07-24 18:22:18.105690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.270 [2024-07-24 18:22:18.105720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.270 qpair failed and we were unable to recover it. 00:27:25.270 [2024-07-24 18:22:18.105939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.270 [2024-07-24 18:22:18.105969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.270 qpair failed and we were unable to recover it. 00:27:25.270 [2024-07-24 18:22:18.106097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.270 [2024-07-24 18:22:18.106107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.270 qpair failed and we were unable to recover it. 00:27:25.270 [2024-07-24 18:22:18.106266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.271 [2024-07-24 18:22:18.106276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.271 qpair failed and we were unable to recover it. 00:27:25.271 [2024-07-24 18:22:18.106368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.271 [2024-07-24 18:22:18.106378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.271 qpair failed and we were unable to recover it. 00:27:25.271 [2024-07-24 18:22:18.106461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.271 [2024-07-24 18:22:18.106471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.271 qpair failed and we were unable to recover it. 00:27:25.271 [2024-07-24 18:22:18.106538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.271 [2024-07-24 18:22:18.106568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.271 qpair failed and we were unable to recover it. 00:27:25.271 [2024-07-24 18:22:18.106721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.271 [2024-07-24 18:22:18.106751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.271 qpair failed and we were unable to recover it. 00:27:25.271 [2024-07-24 18:22:18.106941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.271 [2024-07-24 18:22:18.106970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.271 qpair failed and we were unable to recover it. 
00:27:25.271 [2024-07-24 18:22:18.107213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.271 [2024-07-24 18:22:18.107222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.271 qpair failed and we were unable to recover it. 00:27:25.271 [2024-07-24 18:22:18.107286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.271 [2024-07-24 18:22:18.107295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.271 qpair failed and we were unable to recover it. 00:27:25.271 [2024-07-24 18:22:18.107393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.271 [2024-07-24 18:22:18.107402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.271 qpair failed and we were unable to recover it. 00:27:25.271 [2024-07-24 18:22:18.107478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.271 [2024-07-24 18:22:18.107487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.271 qpair failed and we were unable to recover it. 00:27:25.271 [2024-07-24 18:22:18.107718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.271 [2024-07-24 18:22:18.107748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.271 qpair failed and we were unable to recover it. 00:27:25.271 [2024-07-24 18:22:18.107970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.271 [2024-07-24 18:22:18.108000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.271 qpair failed and we were unable to recover it. 00:27:25.271 [2024-07-24 18:22:18.108142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.271 [2024-07-24 18:22:18.108172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.271 qpair failed and we were unable to recover it. 00:27:25.271 [2024-07-24 18:22:18.108385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.271 [2024-07-24 18:22:18.108420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.271 qpair failed and we were unable to recover it. 00:27:25.271 [2024-07-24 18:22:18.108547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.271 [2024-07-24 18:22:18.108579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.271 qpair failed and we were unable to recover it. 00:27:25.271 [2024-07-24 18:22:18.108806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.271 [2024-07-24 18:22:18.108836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.271 qpair failed and we were unable to recover it. 
00:27:25.271 [2024-07-24 18:22:18.109043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.271 [2024-07-24 18:22:18.109073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.271 qpair failed and we were unable to recover it. 00:27:25.271 [2024-07-24 18:22:18.109186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.271 [2024-07-24 18:22:18.109215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.271 qpair failed and we were unable to recover it. 00:27:25.271 [2024-07-24 18:22:18.109412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.271 [2024-07-24 18:22:18.109443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.271 qpair failed and we were unable to recover it. 00:27:25.271 [2024-07-24 18:22:18.109736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.271 [2024-07-24 18:22:18.109767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.271 qpair failed and we were unable to recover it. 00:27:25.271 [2024-07-24 18:22:18.109967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.271 [2024-07-24 18:22:18.109977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.271 qpair failed and we were unable to recover it. 00:27:25.271 [2024-07-24 18:22:18.110118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.271 [2024-07-24 18:22:18.110128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.271 qpair failed and we were unable to recover it. 00:27:25.271 [2024-07-24 18:22:18.110226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.271 [2024-07-24 18:22:18.110236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.271 qpair failed and we were unable to recover it. 00:27:25.271 [2024-07-24 18:22:18.110306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.271 [2024-07-24 18:22:18.110316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.271 qpair failed and we were unable to recover it. 00:27:25.271 [2024-07-24 18:22:18.110466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.271 [2024-07-24 18:22:18.110508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.271 qpair failed and we were unable to recover it. 00:27:25.271 [2024-07-24 18:22:18.110633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.271 [2024-07-24 18:22:18.110663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.271 qpair failed and we were unable to recover it. 
00:27:25.271 [2024-07-24 18:22:18.110870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.271 [2024-07-24 18:22:18.110899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.271 qpair failed and we were unable to recover it. 00:27:25.271 [2024-07-24 18:22:18.111026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.271 [2024-07-24 18:22:18.111036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.271 qpair failed and we were unable to recover it. 00:27:25.271 [2024-07-24 18:22:18.111258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.271 [2024-07-24 18:22:18.111288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.271 qpair failed and we were unable to recover it. 00:27:25.271 [2024-07-24 18:22:18.111563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.271 [2024-07-24 18:22:18.111593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.271 qpair failed and we were unable to recover it. 00:27:25.271 [2024-07-24 18:22:18.111735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.271 [2024-07-24 18:22:18.111745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.271 qpair failed and we were unable to recover it. 00:27:25.271 [2024-07-24 18:22:18.111913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.271 [2024-07-24 18:22:18.111922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.271 qpair failed and we were unable to recover it. 00:27:25.271 [2024-07-24 18:22:18.112085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.271 [2024-07-24 18:22:18.112114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.271 qpair failed and we were unable to recover it. 00:27:25.271 [2024-07-24 18:22:18.112315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.271 [2024-07-24 18:22:18.112345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.271 qpair failed and we were unable to recover it. 00:27:25.271 [2024-07-24 18:22:18.112547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.271 [2024-07-24 18:22:18.112578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.271 qpair failed and we were unable to recover it. 00:27:25.271 [2024-07-24 18:22:18.112781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.271 [2024-07-24 18:22:18.112791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.271 qpair failed and we were unable to recover it. 
00:27:25.271 [2024-07-24 18:22:18.112947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.271 [2024-07-24 18:22:18.112957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.271 qpair failed and we were unable to recover it. 00:27:25.271 [2024-07-24 18:22:18.113125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.271 [2024-07-24 18:22:18.113154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.271 qpair failed and we were unable to recover it. 00:27:25.271 [2024-07-24 18:22:18.113360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.272 [2024-07-24 18:22:18.113391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.272 qpair failed and we were unable to recover it. 00:27:25.272 [2024-07-24 18:22:18.113577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.272 [2024-07-24 18:22:18.113609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.272 qpair failed and we were unable to recover it. 00:27:25.272 [2024-07-24 18:22:18.113884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.272 [2024-07-24 18:22:18.113914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.272 qpair failed and we were unable to recover it. 00:27:25.272 [2024-07-24 18:22:18.114120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.272 [2024-07-24 18:22:18.114130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.272 qpair failed and we were unable to recover it. 00:27:25.272 [2024-07-24 18:22:18.114219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.272 [2024-07-24 18:22:18.114249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.272 qpair failed and we were unable to recover it. 00:27:25.272 [2024-07-24 18:22:18.114447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.272 [2024-07-24 18:22:18.114477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.272 qpair failed and we were unable to recover it. 00:27:25.272 [2024-07-24 18:22:18.114733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.272 [2024-07-24 18:22:18.114764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.272 qpair failed and we were unable to recover it. 00:27:25.272 [2024-07-24 18:22:18.114951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.272 [2024-07-24 18:22:18.114960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.272 qpair failed and we were unable to recover it. 
00:27:25.272 [2024-07-24 18:22:18.115128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.272 [2024-07-24 18:22:18.115137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.272 qpair failed and we were unable to recover it. 00:27:25.272 [2024-07-24 18:22:18.115347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.272 [2024-07-24 18:22:18.115377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.272 qpair failed and we were unable to recover it. 00:27:25.272 [2024-07-24 18:22:18.115521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.272 [2024-07-24 18:22:18.115552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.272 qpair failed and we were unable to recover it. 00:27:25.272 [2024-07-24 18:22:18.115702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.272 [2024-07-24 18:22:18.115731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.272 qpair failed and we were unable to recover it. 00:27:25.272 [2024-07-24 18:22:18.115987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.272 [2024-07-24 18:22:18.116017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.272 qpair failed and we were unable to recover it. 00:27:25.272 [2024-07-24 18:22:18.116268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.272 [2024-07-24 18:22:18.116297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.272 qpair failed and we were unable to recover it. 00:27:25.272 [2024-07-24 18:22:18.116436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.272 [2024-07-24 18:22:18.116465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.272 qpair failed and we were unable to recover it. 00:27:25.272 [2024-07-24 18:22:18.116658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.272 [2024-07-24 18:22:18.116694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.272 qpair failed and we were unable to recover it. 00:27:25.272 [2024-07-24 18:22:18.116954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.272 [2024-07-24 18:22:18.116984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.272 qpair failed and we were unable to recover it. 00:27:25.272 [2024-07-24 18:22:18.117131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.272 [2024-07-24 18:22:18.117162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.272 qpair failed and we were unable to recover it. 
00:27:25.272 [2024-07-24 18:22:18.117423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.272 [2024-07-24 18:22:18.117452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.272 qpair failed and we were unable to recover it.
00:27:25.273 [2024-07-24 18:22:18.122959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.273 [2024-07-24 18:22:18.122996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.273 qpair failed and we were unable to recover it.
00:27:25.273 [2024-07-24 18:22:18.123185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.273 [2024-07-24 18:22:18.123221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420
00:27:25.273 qpair failed and we were unable to recover it.
00:27:25.277 [2024-07-24 18:22:18.153670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.277 [2024-07-24 18:22:18.153705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7834000b90 with addr=10.0.0.2, port=4420
00:27:25.277 qpair failed and we were unable to recover it.
00:27:25.278 [2024-07-24 18:22:18.162792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.278 [2024-07-24 18:22:18.162807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.278 qpair failed and we were unable to recover it.
00:27:25.278 [2024-07-24 18:22:18.162918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.278 [2024-07-24 18:22:18.162934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:25.278 qpair failed and we were unable to recover it. 00:27:25.278 [2024-07-24 18:22:18.163261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.278 [2024-07-24 18:22:18.163297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.278 qpair failed and we were unable to recover it. 00:27:25.278 [2024-07-24 18:22:18.163414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.278 [2024-07-24 18:22:18.163426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.278 qpair failed and we were unable to recover it. 00:27:25.278 [2024-07-24 18:22:18.163645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.278 [2024-07-24 18:22:18.163655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.278 qpair failed and we were unable to recover it. 00:27:25.278 [2024-07-24 18:22:18.163876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.278 [2024-07-24 18:22:18.163886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.278 qpair failed and we were unable to recover it. 00:27:25.278 [2024-07-24 18:22:18.164045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.278 [2024-07-24 18:22:18.164055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.278 qpair failed and we were unable to recover it. 00:27:25.278 [2024-07-24 18:22:18.164160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.278 [2024-07-24 18:22:18.164170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.278 qpair failed and we were unable to recover it. 00:27:25.278 [2024-07-24 18:22:18.164318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.278 [2024-07-24 18:22:18.164328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.278 qpair failed and we were unable to recover it. 00:27:25.278 [2024-07-24 18:22:18.164539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.278 [2024-07-24 18:22:18.164549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.278 qpair failed and we were unable to recover it. 00:27:25.278 [2024-07-24 18:22:18.164642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.278 [2024-07-24 18:22:18.164652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.278 qpair failed and we were unable to recover it. 
00:27:25.278 [2024-07-24 18:22:18.164808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.278 [2024-07-24 18:22:18.164818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.278 qpair failed and we were unable to recover it. 00:27:25.278 [2024-07-24 18:22:18.164901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.278 [2024-07-24 18:22:18.164911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.278 qpair failed and we were unable to recover it. 00:27:25.278 [2024-07-24 18:22:18.165071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.278 [2024-07-24 18:22:18.165081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.278 qpair failed and we were unable to recover it. 00:27:25.278 [2024-07-24 18:22:18.165247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.278 [2024-07-24 18:22:18.165257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.278 qpair failed and we were unable to recover it. 00:27:25.278 [2024-07-24 18:22:18.165408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.278 [2024-07-24 18:22:18.165420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.278 qpair failed and we were unable to recover it. 00:27:25.278 [2024-07-24 18:22:18.165561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.278 [2024-07-24 18:22:18.165571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.278 qpair failed and we were unable to recover it. 00:27:25.278 [2024-07-24 18:22:18.165787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.278 [2024-07-24 18:22:18.165796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.278 qpair failed and we were unable to recover it. 00:27:25.278 [2024-07-24 18:22:18.165884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.278 [2024-07-24 18:22:18.165894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.278 qpair failed and we were unable to recover it. 00:27:25.278 [2024-07-24 18:22:18.166036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.278 [2024-07-24 18:22:18.166046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.278 qpair failed and we were unable to recover it. 00:27:25.278 [2024-07-24 18:22:18.166113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.278 [2024-07-24 18:22:18.166123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.278 qpair failed and we were unable to recover it. 
00:27:25.278 [2024-07-24 18:22:18.166290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.278 [2024-07-24 18:22:18.166300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.278 qpair failed and we were unable to recover it. 00:27:25.278 [2024-07-24 18:22:18.166451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.278 [2024-07-24 18:22:18.166461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.278 qpair failed and we were unable to recover it. 00:27:25.278 [2024-07-24 18:22:18.166541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.278 [2024-07-24 18:22:18.166551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.278 qpair failed and we were unable to recover it. 00:27:25.278 [2024-07-24 18:22:18.166643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.278 [2024-07-24 18:22:18.166653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.278 qpair failed and we were unable to recover it. 00:27:25.278 [2024-07-24 18:22:18.166892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.278 [2024-07-24 18:22:18.166902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.278 qpair failed and we were unable to recover it. 00:27:25.278 [2024-07-24 18:22:18.167040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.279 [2024-07-24 18:22:18.167050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.279 qpair failed and we were unable to recover it. 00:27:25.279 [2024-07-24 18:22:18.167133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.279 [2024-07-24 18:22:18.167143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.279 qpair failed and we were unable to recover it. 00:27:25.279 [2024-07-24 18:22:18.167299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.279 [2024-07-24 18:22:18.167309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.279 qpair failed and we were unable to recover it. 00:27:25.279 [2024-07-24 18:22:18.167453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.279 [2024-07-24 18:22:18.167463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.279 qpair failed and we were unable to recover it. 00:27:25.279 [2024-07-24 18:22:18.167557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.279 [2024-07-24 18:22:18.167567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.279 qpair failed and we were unable to recover it. 
00:27:25.279 [2024-07-24 18:22:18.167706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.279 [2024-07-24 18:22:18.167716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.279 qpair failed and we were unable to recover it. 00:27:25.279 [2024-07-24 18:22:18.167871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.279 [2024-07-24 18:22:18.167881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.279 qpair failed and we were unable to recover it. 00:27:25.279 [2024-07-24 18:22:18.168048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.279 [2024-07-24 18:22:18.168058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.279 qpair failed and we were unable to recover it. 00:27:25.279 [2024-07-24 18:22:18.168138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.279 [2024-07-24 18:22:18.168148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.279 qpair failed and we were unable to recover it. 00:27:25.279 [2024-07-24 18:22:18.168227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.279 [2024-07-24 18:22:18.168237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.279 qpair failed and we were unable to recover it. 00:27:25.279 [2024-07-24 18:22:18.168335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.279 [2024-07-24 18:22:18.168345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.279 qpair failed and we were unable to recover it. 00:27:25.279 [2024-07-24 18:22:18.168508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.279 [2024-07-24 18:22:18.168518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.279 qpair failed and we were unable to recover it. 00:27:25.279 [2024-07-24 18:22:18.168590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.279 [2024-07-24 18:22:18.168600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.279 qpair failed and we were unable to recover it. 00:27:25.279 [2024-07-24 18:22:18.168736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.279 [2024-07-24 18:22:18.168746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.279 qpair failed and we were unable to recover it. 00:27:25.279 [2024-07-24 18:22:18.168830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.279 [2024-07-24 18:22:18.168839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.279 qpair failed and we were unable to recover it. 
00:27:25.279 [2024-07-24 18:22:18.168979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.279 [2024-07-24 18:22:18.168990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.279 qpair failed and we were unable to recover it. 00:27:25.279 [2024-07-24 18:22:18.169078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.279 [2024-07-24 18:22:18.169087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.279 qpair failed and we were unable to recover it. 00:27:25.279 [2024-07-24 18:22:18.169246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.279 [2024-07-24 18:22:18.169256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.279 qpair failed and we were unable to recover it. 00:27:25.279 [2024-07-24 18:22:18.169413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.279 [2024-07-24 18:22:18.169423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.279 qpair failed and we were unable to recover it. 00:27:25.279 [2024-07-24 18:22:18.169565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.279 [2024-07-24 18:22:18.169575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.279 qpair failed and we were unable to recover it. 00:27:25.279 [2024-07-24 18:22:18.169673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.279 [2024-07-24 18:22:18.169683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.279 qpair failed and we were unable to recover it. 00:27:25.279 [2024-07-24 18:22:18.169767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.279 [2024-07-24 18:22:18.169777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.279 qpair failed and we were unable to recover it. 00:27:25.279 [2024-07-24 18:22:18.169845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.279 [2024-07-24 18:22:18.169854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.279 qpair failed and we were unable to recover it. 00:27:25.279 [2024-07-24 18:22:18.169938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.279 [2024-07-24 18:22:18.169947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.279 qpair failed and we were unable to recover it. 00:27:25.279 [2024-07-24 18:22:18.170106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.279 [2024-07-24 18:22:18.170116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.279 qpair failed and we were unable to recover it. 
00:27:25.279 [2024-07-24 18:22:18.170201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.279 [2024-07-24 18:22:18.170211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.279 qpair failed and we were unable to recover it. 00:27:25.279 [2024-07-24 18:22:18.170371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.279 [2024-07-24 18:22:18.170381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.279 qpair failed and we were unable to recover it. 00:27:25.279 [2024-07-24 18:22:18.170546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.279 [2024-07-24 18:22:18.170556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.279 qpair failed and we were unable to recover it. 00:27:25.279 [2024-07-24 18:22:18.170717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.279 [2024-07-24 18:22:18.170727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.279 qpair failed and we were unable to recover it. 00:27:25.279 [2024-07-24 18:22:18.170817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.279 [2024-07-24 18:22:18.170828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.279 qpair failed and we were unable to recover it. 00:27:25.279 [2024-07-24 18:22:18.170925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.279 [2024-07-24 18:22:18.170935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.279 qpair failed and we were unable to recover it. 00:27:25.279 [2024-07-24 18:22:18.171020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.279 [2024-07-24 18:22:18.171030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.279 qpair failed and we were unable to recover it. 00:27:25.279 [2024-07-24 18:22:18.171191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.279 [2024-07-24 18:22:18.171200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.279 qpair failed and we were unable to recover it. 00:27:25.279 [2024-07-24 18:22:18.171297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.279 [2024-07-24 18:22:18.171306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.279 qpair failed and we were unable to recover it. 00:27:25.279 [2024-07-24 18:22:18.171378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.279 [2024-07-24 18:22:18.171388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.279 qpair failed and we were unable to recover it. 
00:27:25.279 [2024-07-24 18:22:18.171481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.279 [2024-07-24 18:22:18.171495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.279 qpair failed and we were unable to recover it. 00:27:25.279 [2024-07-24 18:22:18.171581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.279 [2024-07-24 18:22:18.171591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.279 qpair failed and we were unable to recover it. 00:27:25.279 [2024-07-24 18:22:18.171692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.279 [2024-07-24 18:22:18.171701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.279 qpair failed and we were unable to recover it. 00:27:25.279 [2024-07-24 18:22:18.171836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.280 [2024-07-24 18:22:18.171846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.280 qpair failed and we were unable to recover it. 00:27:25.280 [2024-07-24 18:22:18.171934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.280 [2024-07-24 18:22:18.171944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.280 qpair failed and we were unable to recover it. 00:27:25.280 [2024-07-24 18:22:18.172050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.280 [2024-07-24 18:22:18.172060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.280 qpair failed and we were unable to recover it. 00:27:25.280 [2024-07-24 18:22:18.172207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.280 [2024-07-24 18:22:18.172217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.280 qpair failed and we were unable to recover it. 00:27:25.280 [2024-07-24 18:22:18.172378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.280 [2024-07-24 18:22:18.172387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.280 qpair failed and we were unable to recover it. 00:27:25.280 [2024-07-24 18:22:18.172498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.280 [2024-07-24 18:22:18.172508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.280 qpair failed and we were unable to recover it. 00:27:25.280 [2024-07-24 18:22:18.172599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.280 [2024-07-24 18:22:18.172609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.280 qpair failed and we were unable to recover it. 
00:27:25.280 [2024-07-24 18:22:18.172696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.280 [2024-07-24 18:22:18.172706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.280 qpair failed and we were unable to recover it. 00:27:25.280 [2024-07-24 18:22:18.172861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.280 [2024-07-24 18:22:18.172871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.280 qpair failed and we were unable to recover it. 00:27:25.280 [2024-07-24 18:22:18.173078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.280 [2024-07-24 18:22:18.173088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.280 qpair failed and we were unable to recover it. 00:27:25.280 [2024-07-24 18:22:18.173159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.280 [2024-07-24 18:22:18.173168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.280 qpair failed and we were unable to recover it. 00:27:25.280 [2024-07-24 18:22:18.173269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.280 [2024-07-24 18:22:18.173278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.280 qpair failed and we were unable to recover it. 00:27:25.280 [2024-07-24 18:22:18.173488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.280 [2024-07-24 18:22:18.173510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.280 qpair failed and we were unable to recover it. 00:27:25.280 [2024-07-24 18:22:18.173586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.280 [2024-07-24 18:22:18.173596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.280 qpair failed and we were unable to recover it. 00:27:25.280 [2024-07-24 18:22:18.173672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.280 [2024-07-24 18:22:18.173690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.280 qpair failed and we were unable to recover it. 00:27:25.280 [2024-07-24 18:22:18.173781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.280 [2024-07-24 18:22:18.173791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.280 qpair failed and we were unable to recover it. 00:27:25.280 [2024-07-24 18:22:18.173948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.280 [2024-07-24 18:22:18.173958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.280 qpair failed and we were unable to recover it. 
00:27:25.280 [2024-07-24 18:22:18.174040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.280 [2024-07-24 18:22:18.174057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.280 qpair failed and we were unable to recover it. 00:27:25.280 [2024-07-24 18:22:18.174200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.280 [2024-07-24 18:22:18.174210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.280 qpair failed and we were unable to recover it. 00:27:25.280 [2024-07-24 18:22:18.174353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.280 [2024-07-24 18:22:18.174363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.280 qpair failed and we were unable to recover it. 00:27:25.280 [2024-07-24 18:22:18.174530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.280 [2024-07-24 18:22:18.174541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.280 qpair failed and we were unable to recover it. 00:27:25.280 [2024-07-24 18:22:18.174683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.280 [2024-07-24 18:22:18.174693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.280 qpair failed and we were unable to recover it. 00:27:25.280 [2024-07-24 18:22:18.174779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.280 [2024-07-24 18:22:18.174789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.280 qpair failed and we were unable to recover it. 00:27:25.280 [2024-07-24 18:22:18.174998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.280 [2024-07-24 18:22:18.175008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.280 qpair failed and we were unable to recover it. 00:27:25.280 [2024-07-24 18:22:18.175089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.280 [2024-07-24 18:22:18.175099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.280 qpair failed and we were unable to recover it. 00:27:25.280 [2024-07-24 18:22:18.175265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.280 [2024-07-24 18:22:18.175275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.280 qpair failed and we were unable to recover it. 00:27:25.280 [2024-07-24 18:22:18.175445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.280 [2024-07-24 18:22:18.175454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.280 qpair failed and we were unable to recover it. 
00:27:25.280 [2024-07-24 18:22:18.175635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.280 [2024-07-24 18:22:18.175645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.280 qpair failed and we were unable to recover it. 00:27:25.280 [2024-07-24 18:22:18.175725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.280 [2024-07-24 18:22:18.175735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.280 qpair failed and we were unable to recover it. 00:27:25.280 [2024-07-24 18:22:18.175807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.280 [2024-07-24 18:22:18.175817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.280 qpair failed and we were unable to recover it. 00:27:25.280 [2024-07-24 18:22:18.176034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.280 [2024-07-24 18:22:18.176044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.280 qpair failed and we were unable to recover it. 00:27:25.280 [2024-07-24 18:22:18.176202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.280 [2024-07-24 18:22:18.176213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.280 qpair failed and we were unable to recover it. 00:27:25.280 [2024-07-24 18:22:18.176364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.280 [2024-07-24 18:22:18.176373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.280 qpair failed and we were unable to recover it. 00:27:25.280 [2024-07-24 18:22:18.176514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.280 [2024-07-24 18:22:18.176524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.280 qpair failed and we were unable to recover it. 00:27:25.280 [2024-07-24 18:22:18.176621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.280 [2024-07-24 18:22:18.176631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.280 qpair failed and we were unable to recover it. 00:27:25.280 [2024-07-24 18:22:18.176714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.280 [2024-07-24 18:22:18.176723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.280 qpair failed and we were unable to recover it. 00:27:25.280 [2024-07-24 18:22:18.176865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.280 [2024-07-24 18:22:18.176876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.280 qpair failed and we were unable to recover it. 
00:27:25.280 [2024-07-24 18:22:18.177024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.280 [2024-07-24 18:22:18.177034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.280 qpair failed and we were unable to recover it. 00:27:25.281 [2024-07-24 18:22:18.177170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.281 [2024-07-24 18:22:18.177179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.281 qpair failed and we were unable to recover it. 00:27:25.281 [2024-07-24 18:22:18.177321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.281 [2024-07-24 18:22:18.177332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.281 qpair failed and we were unable to recover it. 00:27:25.281 [2024-07-24 18:22:18.177477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.281 [2024-07-24 18:22:18.177487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.281 qpair failed and we were unable to recover it. 00:27:25.281 [2024-07-24 18:22:18.177595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.281 [2024-07-24 18:22:18.177605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.281 qpair failed and we were unable to recover it. 00:27:25.281 [2024-07-24 18:22:18.177690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.281 [2024-07-24 18:22:18.177699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.281 qpair failed and we were unable to recover it. 00:27:25.281 [2024-07-24 18:22:18.177849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.281 [2024-07-24 18:22:18.177880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.281 qpair failed and we were unable to recover it. 00:27:25.281 [2024-07-24 18:22:18.178131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.281 [2024-07-24 18:22:18.178161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.281 qpair failed and we were unable to recover it. 00:27:25.281 [2024-07-24 18:22:18.178308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.281 [2024-07-24 18:22:18.178338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.281 qpair failed and we were unable to recover it. 00:27:25.281 [2024-07-24 18:22:18.178530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.281 [2024-07-24 18:22:18.178562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.281 qpair failed and we were unable to recover it. 
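On Linux, errno = 111 is ECONNREFUSED: each TCP connection attempt to 10.0.0.2:4420 (4420 is the NVMe/TCP well-known port) is actively refused, typically because nothing is listening on that port or a firewall rejects it, so every qpair the initiator opens fails immediately. The standalone C sketch below is an illustration only, not SPDK code; the address and port are taken from the log, and everything else is assumed for the example. It shows how a plain blocking connect() produces and classifies this errno.

/* Illustration only (not SPDK code): reproduce and classify the
 * "connect() failed, errno = 111" seen in the log. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                  /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        if (errno == ECONNREFUSED) {              /* ECONNREFUSED == 111 on Linux */
            fprintf(stderr, "connect() failed, errno = %d (ECONNREFUSED)\n", errno);
        } else {
            perror("connect");                    /* some other failure mode */
        }
        close(fd);
        return 1;
    }

    close(fd);
    return 0;
}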
00:27:25.281 [2024-07-24 18:22:18.178824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.281 [2024-07-24 18:22:18.178854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.281 qpair failed and we were unable to recover it. 00:27:25.281 [2024-07-24 18:22:18.179057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.281 [2024-07-24 18:22:18.179088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.281 qpair failed and we were unable to recover it. 00:27:25.281 [2024-07-24 18:22:18.179208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.281 [2024-07-24 18:22:18.179237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.281 qpair failed and we were unable to recover it. 00:27:25.281 [2024-07-24 18:22:18.179451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.281 [2024-07-24 18:22:18.179481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.281 qpair failed and we were unable to recover it. 00:27:25.281 [2024-07-24 18:22:18.179785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.281 [2024-07-24 18:22:18.179816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.281 qpair failed and we were unable to recover it. 00:27:25.281 [2024-07-24 18:22:18.180010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.281 [2024-07-24 18:22:18.180040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.281 qpair failed and we were unable to recover it. 00:27:25.281 [2024-07-24 18:22:18.180253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.281 [2024-07-24 18:22:18.180282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.281 qpair failed and we were unable to recover it. 00:27:25.281 [2024-07-24 18:22:18.180435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.281 [2024-07-24 18:22:18.180465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.281 qpair failed and we were unable to recover it. 00:27:25.281 [2024-07-24 18:22:18.180717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.281 [2024-07-24 18:22:18.180755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:25.281 qpair failed and we were unable to recover it. 00:27:25.281 [2024-07-24 18:22:18.180961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.281 [2024-07-24 18:22:18.180977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:25.281 qpair failed and we were unable to recover it. 
00:27:25.281 [2024-07-24 18:22:18.181207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.281 [2024-07-24 18:22:18.181237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:25.281 qpair failed and we were unable to recover it. 00:27:25.281 [2024-07-24 18:22:18.181441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.281 [2024-07-24 18:22:18.181477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:25.281 qpair failed and we were unable to recover it. 00:27:25.281 [2024-07-24 18:22:18.181630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.281 [2024-07-24 18:22:18.181661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:25.281 qpair failed and we were unable to recover it. 00:27:25.281 [2024-07-24 18:22:18.181961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.281 [2024-07-24 18:22:18.181992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:25.281 qpair failed and we were unable to recover it. 00:27:25.281 [2024-07-24 18:22:18.182251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.281 [2024-07-24 18:22:18.182281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:25.281 qpair failed and we were unable to recover it. 00:27:25.281 [2024-07-24 18:22:18.182555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.281 [2024-07-24 18:22:18.182587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:25.281 qpair failed and we were unable to recover it. 00:27:25.281 [2024-07-24 18:22:18.182772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.281 [2024-07-24 18:22:18.182802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:25.281 qpair failed and we were unable to recover it. 00:27:25.281 [2024-07-24 18:22:18.182999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.281 [2024-07-24 18:22:18.183029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:25.281 qpair failed and we were unable to recover it. 00:27:25.281 [2024-07-24 18:22:18.183303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.281 [2024-07-24 18:22:18.183333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:25.281 qpair failed and we were unable to recover it. 00:27:25.281 [2024-07-24 18:22:18.183479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.281 [2024-07-24 18:22:18.183518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:25.281 qpair failed and we were unable to recover it. 
00:27:25.281 [2024-07-24 18:22:18.183737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.281 [2024-07-24 18:22:18.183767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:25.281 qpair failed and we were unable to recover it. 00:27:25.281 [2024-07-24 18:22:18.184020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.281 [2024-07-24 18:22:18.184051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:25.281 qpair failed and we were unable to recover it. 00:27:25.281 [2024-07-24 18:22:18.184301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.281 [2024-07-24 18:22:18.184332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:25.281 qpair failed and we were unable to recover it. 00:27:25.281 [2024-07-24 18:22:18.184607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.282 [2024-07-24 18:22:18.184638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:25.282 qpair failed and we were unable to recover it. 00:27:25.282 [2024-07-24 18:22:18.184768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.282 [2024-07-24 18:22:18.184798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:25.282 qpair failed and we were unable to recover it. 00:27:25.282 [2024-07-24 18:22:18.185002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.282 [2024-07-24 18:22:18.185033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:25.282 qpair failed and we were unable to recover it. 00:27:25.282 [2024-07-24 18:22:18.185182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.282 [2024-07-24 18:22:18.185213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:25.282 qpair failed and we were unable to recover it. 00:27:25.282 [2024-07-24 18:22:18.185475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.282 [2024-07-24 18:22:18.185494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:25.282 qpair failed and we were unable to recover it. 00:27:25.282 [2024-07-24 18:22:18.185722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.282 [2024-07-24 18:22:18.185737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:25.282 qpair failed and we were unable to recover it. 00:27:25.282 [2024-07-24 18:22:18.185923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.282 [2024-07-24 18:22:18.185938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:25.282 qpair failed and we were unable to recover it. 
00:27:25.282 [2024-07-24 18:22:18.190549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.282 [2024-07-24 18:22:18.190577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.282 qpair failed and we were unable to recover it.
00:27:25.287 [2024-07-24 18:22:18.225953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.287 [2024-07-24 18:22:18.225963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.287 qpair failed and we were unable to recover it. 00:27:25.287 [2024-07-24 18:22:18.226062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.287 [2024-07-24 18:22:18.226072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.287 qpair failed and we were unable to recover it. 00:27:25.287 [2024-07-24 18:22:18.226220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.287 [2024-07-24 18:22:18.226229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.287 qpair failed and we were unable to recover it. 00:27:25.287 [2024-07-24 18:22:18.226373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.287 [2024-07-24 18:22:18.226402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.287 qpair failed and we were unable to recover it. 00:27:25.287 [2024-07-24 18:22:18.226672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.287 [2024-07-24 18:22:18.226703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.287 qpair failed and we were unable to recover it. 00:27:25.287 [2024-07-24 18:22:18.226909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.287 [2024-07-24 18:22:18.226945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.287 qpair failed and we were unable to recover it. 00:27:25.287 [2024-07-24 18:22:18.227128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.287 [2024-07-24 18:22:18.227158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.287 qpair failed and we were unable to recover it. 00:27:25.287 [2024-07-24 18:22:18.227277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.287 [2024-07-24 18:22:18.227307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.287 qpair failed and we were unable to recover it. 00:27:25.287 [2024-07-24 18:22:18.227509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.287 [2024-07-24 18:22:18.227540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.287 qpair failed and we were unable to recover it. 00:27:25.287 [2024-07-24 18:22:18.227727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.287 [2024-07-24 18:22:18.227757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.287 qpair failed and we were unable to recover it. 
00:27:25.287 [2024-07-24 18:22:18.227874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.287 [2024-07-24 18:22:18.227904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.287 qpair failed and we were unable to recover it. 00:27:25.287 [2024-07-24 18:22:18.228091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.287 [2024-07-24 18:22:18.228100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.287 qpair failed and we were unable to recover it. 00:27:25.287 [2024-07-24 18:22:18.228312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.287 [2024-07-24 18:22:18.228341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.287 qpair failed and we were unable to recover it. 00:27:25.287 [2024-07-24 18:22:18.228566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.287 [2024-07-24 18:22:18.228597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.287 qpair failed and we were unable to recover it. 00:27:25.287 [2024-07-24 18:22:18.228829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.287 [2024-07-24 18:22:18.228860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.287 qpair failed and we were unable to recover it. 00:27:25.287 [2024-07-24 18:22:18.229133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.287 [2024-07-24 18:22:18.229162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.287 qpair failed and we were unable to recover it. 00:27:25.287 [2024-07-24 18:22:18.229359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.287 [2024-07-24 18:22:18.229369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.287 qpair failed and we were unable to recover it. 00:27:25.287 [2024-07-24 18:22:18.229503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.287 [2024-07-24 18:22:18.229513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.287 qpair failed and we were unable to recover it. 00:27:25.287 [2024-07-24 18:22:18.229672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.287 [2024-07-24 18:22:18.229681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.287 qpair failed and we were unable to recover it. 00:27:25.287 [2024-07-24 18:22:18.229846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.287 [2024-07-24 18:22:18.229855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.287 qpair failed and we were unable to recover it. 
00:27:25.287 [2024-07-24 18:22:18.230082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.287 [2024-07-24 18:22:18.230111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.287 qpair failed and we were unable to recover it. 00:27:25.287 [2024-07-24 18:22:18.230265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.287 [2024-07-24 18:22:18.230295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.287 qpair failed and we were unable to recover it. 00:27:25.287 [2024-07-24 18:22:18.230513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.287 [2024-07-24 18:22:18.230543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.287 qpair failed and we were unable to recover it. 00:27:25.287 [2024-07-24 18:22:18.230737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.287 [2024-07-24 18:22:18.230767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.287 qpair failed and we were unable to recover it. 00:27:25.287 [2024-07-24 18:22:18.230957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.288 [2024-07-24 18:22:18.230987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.288 qpair failed and we were unable to recover it. 00:27:25.288 [2024-07-24 18:22:18.231127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.288 [2024-07-24 18:22:18.231157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.288 qpair failed and we were unable to recover it. 00:27:25.288 [2024-07-24 18:22:18.231283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.288 [2024-07-24 18:22:18.231293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.288 qpair failed and we were unable to recover it. 00:27:25.288 [2024-07-24 18:22:18.231512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.288 [2024-07-24 18:22:18.231542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.288 qpair failed and we were unable to recover it. 00:27:25.288 [2024-07-24 18:22:18.231672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.288 [2024-07-24 18:22:18.231702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.288 qpair failed and we were unable to recover it. 00:27:25.288 [2024-07-24 18:22:18.231955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.288 [2024-07-24 18:22:18.231990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.288 qpair failed and we were unable to recover it. 
00:27:25.288 [2024-07-24 18:22:18.232154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.288 [2024-07-24 18:22:18.232164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.288 qpair failed and we were unable to recover it. 00:27:25.288 [2024-07-24 18:22:18.232374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.288 [2024-07-24 18:22:18.232404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.288 qpair failed and we were unable to recover it. 00:27:25.288 [2024-07-24 18:22:18.232610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.288 [2024-07-24 18:22:18.232641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.288 qpair failed and we were unable to recover it. 00:27:25.288 [2024-07-24 18:22:18.232785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.288 [2024-07-24 18:22:18.232815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.288 qpair failed and we were unable to recover it. 00:27:25.288 [2024-07-24 18:22:18.233035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.288 [2024-07-24 18:22:18.233065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.288 qpair failed and we were unable to recover it. 00:27:25.288 [2024-07-24 18:22:18.233278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.288 [2024-07-24 18:22:18.233308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.288 qpair failed and we were unable to recover it. 00:27:25.288 [2024-07-24 18:22:18.233524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.288 [2024-07-24 18:22:18.233555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.288 qpair failed and we were unable to recover it. 00:27:25.288 [2024-07-24 18:22:18.233818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.288 [2024-07-24 18:22:18.233848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.288 qpair failed and we were unable to recover it. 00:27:25.288 [2024-07-24 18:22:18.234075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.288 [2024-07-24 18:22:18.234085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.288 qpair failed and we were unable to recover it. 00:27:25.288 [2024-07-24 18:22:18.234254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.288 [2024-07-24 18:22:18.234264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.288 qpair failed and we were unable to recover it. 
00:27:25.288 [2024-07-24 18:22:18.234516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.288 [2024-07-24 18:22:18.234547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.288 qpair failed and we were unable to recover it. 00:27:25.288 [2024-07-24 18:22:18.234734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.288 [2024-07-24 18:22:18.234763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.288 qpair failed and we were unable to recover it. 00:27:25.288 [2024-07-24 18:22:18.235057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.288 [2024-07-24 18:22:18.235088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.288 qpair failed and we were unable to recover it. 00:27:25.288 [2024-07-24 18:22:18.235290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.288 [2024-07-24 18:22:18.235320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.288 qpair failed and we were unable to recover it. 00:27:25.288 [2024-07-24 18:22:18.235470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.288 [2024-07-24 18:22:18.235510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.288 qpair failed and we were unable to recover it. 00:27:25.288 [2024-07-24 18:22:18.235638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.288 [2024-07-24 18:22:18.235674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.288 qpair failed and we were unable to recover it. 00:27:25.288 [2024-07-24 18:22:18.235880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.288 [2024-07-24 18:22:18.235916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.288 qpair failed and we were unable to recover it. 00:27:25.288 [2024-07-24 18:22:18.236057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.288 [2024-07-24 18:22:18.236066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.288 qpair failed and we were unable to recover it. 00:27:25.288 [2024-07-24 18:22:18.236245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.288 [2024-07-24 18:22:18.236275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.288 qpair failed and we were unable to recover it. 00:27:25.288 [2024-07-24 18:22:18.236392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.288 [2024-07-24 18:22:18.236422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.288 qpair failed and we were unable to recover it. 
00:27:25.288 [2024-07-24 18:22:18.236632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.288 [2024-07-24 18:22:18.236662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.288 qpair failed and we were unable to recover it. 00:27:25.288 [2024-07-24 18:22:18.236919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.288 [2024-07-24 18:22:18.236949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.288 qpair failed and we were unable to recover it. 00:27:25.288 [2024-07-24 18:22:18.237146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.288 [2024-07-24 18:22:18.237177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.288 qpair failed and we were unable to recover it. 00:27:25.288 [2024-07-24 18:22:18.237450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.288 [2024-07-24 18:22:18.237460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.288 qpair failed and we were unable to recover it. 00:27:25.288 [2024-07-24 18:22:18.237544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.288 [2024-07-24 18:22:18.237554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.288 qpair failed and we were unable to recover it. 00:27:25.288 [2024-07-24 18:22:18.237783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.288 [2024-07-24 18:22:18.237793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.288 qpair failed and we were unable to recover it. 00:27:25.288 [2024-07-24 18:22:18.237941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.288 [2024-07-24 18:22:18.237951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.288 qpair failed and we were unable to recover it. 00:27:25.288 [2024-07-24 18:22:18.238020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.288 [2024-07-24 18:22:18.238029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.288 qpair failed and we were unable to recover it. 00:27:25.288 [2024-07-24 18:22:18.238292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.288 [2024-07-24 18:22:18.238322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.288 qpair failed and we were unable to recover it. 00:27:25.288 [2024-07-24 18:22:18.238531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.288 [2024-07-24 18:22:18.238563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.288 qpair failed and we were unable to recover it. 
00:27:25.288 [2024-07-24 18:22:18.238820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.288 [2024-07-24 18:22:18.238850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.288 qpair failed and we were unable to recover it. 00:27:25.288 [2024-07-24 18:22:18.239033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.288 [2024-07-24 18:22:18.239063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.288 qpair failed and we were unable to recover it. 00:27:25.288 [2024-07-24 18:22:18.239266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.289 [2024-07-24 18:22:18.239296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.289 qpair failed and we were unable to recover it. 00:27:25.289 [2024-07-24 18:22:18.239442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.289 [2024-07-24 18:22:18.239451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.289 qpair failed and we were unable to recover it. 00:27:25.289 [2024-07-24 18:22:18.239638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.289 [2024-07-24 18:22:18.239648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.289 qpair failed and we were unable to recover it. 00:27:25.289 [2024-07-24 18:22:18.239754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.289 [2024-07-24 18:22:18.239763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.289 qpair failed and we were unable to recover it. 00:27:25.289 [2024-07-24 18:22:18.239848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.289 [2024-07-24 18:22:18.239858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.289 qpair failed and we were unable to recover it. 00:27:25.289 [2024-07-24 18:22:18.240091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.289 [2024-07-24 18:22:18.240121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.289 qpair failed and we were unable to recover it. 00:27:25.289 [2024-07-24 18:22:18.240297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.289 [2024-07-24 18:22:18.240327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.289 qpair failed and we were unable to recover it. 00:27:25.289 [2024-07-24 18:22:18.240458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.289 [2024-07-24 18:22:18.240489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.289 qpair failed and we were unable to recover it. 
00:27:25.289 [2024-07-24 18:22:18.240780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.289 [2024-07-24 18:22:18.240810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.289 qpair failed and we were unable to recover it. 00:27:25.289 [2024-07-24 18:22:18.240999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.289 [2024-07-24 18:22:18.241029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.289 qpair failed and we were unable to recover it. 00:27:25.289 [2024-07-24 18:22:18.241235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.289 [2024-07-24 18:22:18.241266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.289 qpair failed and we were unable to recover it. 00:27:25.289 [2024-07-24 18:22:18.241488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.289 [2024-07-24 18:22:18.241546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.289 qpair failed and we were unable to recover it. 00:27:25.289 [2024-07-24 18:22:18.241757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.289 [2024-07-24 18:22:18.241787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.289 qpair failed and we were unable to recover it. 00:27:25.289 [2024-07-24 18:22:18.241906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.289 [2024-07-24 18:22:18.241936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.289 qpair failed and we were unable to recover it. 00:27:25.289 [2024-07-24 18:22:18.242132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.289 [2024-07-24 18:22:18.242161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.289 qpair failed and we were unable to recover it. 00:27:25.289 [2024-07-24 18:22:18.242286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.289 [2024-07-24 18:22:18.242316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.289 qpair failed and we were unable to recover it. 00:27:25.289 [2024-07-24 18:22:18.242601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.289 [2024-07-24 18:22:18.242611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.289 qpair failed and we were unable to recover it. 00:27:25.289 [2024-07-24 18:22:18.242805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.289 [2024-07-24 18:22:18.242835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.289 qpair failed and we were unable to recover it. 
00:27:25.289 [2024-07-24 18:22:18.243046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.289 [2024-07-24 18:22:18.243076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.289 qpair failed and we were unable to recover it. 00:27:25.289 [2024-07-24 18:22:18.243259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.289 [2024-07-24 18:22:18.243269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.289 qpair failed and we were unable to recover it. 00:27:25.289 [2024-07-24 18:22:18.243502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.289 [2024-07-24 18:22:18.243532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.289 qpair failed and we were unable to recover it. 00:27:25.289 [2024-07-24 18:22:18.243799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.289 [2024-07-24 18:22:18.243829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.289 qpair failed and we were unable to recover it. 00:27:25.289 [2024-07-24 18:22:18.244025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.289 [2024-07-24 18:22:18.244055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.289 qpair failed and we were unable to recover it. 00:27:25.289 [2024-07-24 18:22:18.244240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.289 [2024-07-24 18:22:18.244276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.289 qpair failed and we were unable to recover it. 00:27:25.289 [2024-07-24 18:22:18.244528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.289 [2024-07-24 18:22:18.244558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.289 qpair failed and we were unable to recover it. 00:27:25.289 [2024-07-24 18:22:18.244709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.289 [2024-07-24 18:22:18.244740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.289 qpair failed and we were unable to recover it. 00:27:25.289 [2024-07-24 18:22:18.244963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.289 [2024-07-24 18:22:18.244972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.289 qpair failed and we were unable to recover it. 00:27:25.289 [2024-07-24 18:22:18.245163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.289 [2024-07-24 18:22:18.245196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.289 qpair failed and we were unable to recover it. 
00:27:25.289 [2024-07-24 18:22:18.245487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.289 [2024-07-24 18:22:18.245547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.289 qpair failed and we were unable to recover it. 00:27:25.289 [2024-07-24 18:22:18.245752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.289 [2024-07-24 18:22:18.245782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.289 qpair failed and we were unable to recover it. 00:27:25.289 [2024-07-24 18:22:18.245928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.289 [2024-07-24 18:22:18.245958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.289 qpair failed and we were unable to recover it. 00:27:25.289 [2024-07-24 18:22:18.246155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.289 [2024-07-24 18:22:18.246165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.289 qpair failed and we were unable to recover it. 00:27:25.289 [2024-07-24 18:22:18.246265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.289 [2024-07-24 18:22:18.246275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.289 qpair failed and we were unable to recover it. 00:27:25.289 [2024-07-24 18:22:18.246495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.289 [2024-07-24 18:22:18.246505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.289 qpair failed and we were unable to recover it. 00:27:25.289 [2024-07-24 18:22:18.246737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.289 [2024-07-24 18:22:18.246747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.289 qpair failed and we were unable to recover it. 00:27:25.289 [2024-07-24 18:22:18.246838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.289 [2024-07-24 18:22:18.246848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.289 qpair failed and we were unable to recover it. 00:27:25.289 [2024-07-24 18:22:18.246987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.289 [2024-07-24 18:22:18.246997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.289 qpair failed and we were unable to recover it. 00:27:25.289 [2024-07-24 18:22:18.247140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.289 [2024-07-24 18:22:18.247151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.289 qpair failed and we were unable to recover it. 
00:27:25.290 [2024-07-24 18:22:18.247369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.290 [2024-07-24 18:22:18.247399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.290 qpair failed and we were unable to recover it. 00:27:25.290 [2024-07-24 18:22:18.247599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.290 [2024-07-24 18:22:18.247630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.290 qpair failed and we were unable to recover it. 00:27:25.290 [2024-07-24 18:22:18.247785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.290 [2024-07-24 18:22:18.247815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.290 qpair failed and we were unable to recover it. 00:27:25.290 [2024-07-24 18:22:18.247997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.290 [2024-07-24 18:22:18.248006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.290 qpair failed and we were unable to recover it. 00:27:25.290 [2024-07-24 18:22:18.248243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.290 [2024-07-24 18:22:18.248273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.290 qpair failed and we were unable to recover it. 00:27:25.290 [2024-07-24 18:22:18.248419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.290 [2024-07-24 18:22:18.248448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.290 qpair failed and we were unable to recover it. 00:27:25.290 [2024-07-24 18:22:18.248658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.290 [2024-07-24 18:22:18.248689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.290 qpair failed and we were unable to recover it. 00:27:25.290 [2024-07-24 18:22:18.248963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.290 [2024-07-24 18:22:18.248993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.290 qpair failed and we were unable to recover it. 00:27:25.290 [2024-07-24 18:22:18.249140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.290 [2024-07-24 18:22:18.249170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.290 qpair failed and we were unable to recover it. 00:27:25.290 [2024-07-24 18:22:18.249358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.290 [2024-07-24 18:22:18.249367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.290 qpair failed and we were unable to recover it. 
00:27:25.290 [2024-07-24 18:22:18.249531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.290 [2024-07-24 18:22:18.249562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.290 qpair failed and we were unable to recover it. 00:27:25.290 [2024-07-24 18:22:18.249765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.290 [2024-07-24 18:22:18.249795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.290 qpair failed and we were unable to recover it. 00:27:25.290 [2024-07-24 18:22:18.250138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.290 [2024-07-24 18:22:18.250206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7834000b90 with addr=10.0.0.2, port=4420 00:27:25.290 qpair failed and we were unable to recover it. 00:27:25.290 [2024-07-24 18:22:18.250451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.290 [2024-07-24 18:22:18.250467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7834000b90 with addr=10.0.0.2, port=4420 00:27:25.290 qpair failed and we were unable to recover it. 00:27:25.290 [2024-07-24 18:22:18.250717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.290 [2024-07-24 18:22:18.250733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7834000b90 with addr=10.0.0.2, port=4420 00:27:25.290 qpair failed and we were unable to recover it. 00:27:25.290 [2024-07-24 18:22:18.250846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.290 [2024-07-24 18:22:18.250861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7834000b90 with addr=10.0.0.2, port=4420 00:27:25.290 qpair failed and we were unable to recover it. 00:27:25.290 [2024-07-24 18:22:18.251025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.290 [2024-07-24 18:22:18.251040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7834000b90 with addr=10.0.0.2, port=4420 00:27:25.290 qpair failed and we were unable to recover it. 00:27:25.290 [2024-07-24 18:22:18.251116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.290 [2024-07-24 18:22:18.251131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7834000b90 with addr=10.0.0.2, port=4420 00:27:25.290 qpair failed and we were unable to recover it. 00:27:25.290 [2024-07-24 18:22:18.251381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.290 [2024-07-24 18:22:18.251413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.290 qpair failed and we were unable to recover it. 00:27:25.290 [2024-07-24 18:22:18.251700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.290 [2024-07-24 18:22:18.251731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.290 qpair failed and we were unable to recover it. 
00:27:25.290 [2024-07-24 18:22:18.251869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.290 [2024-07-24 18:22:18.251899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.290 qpair failed and we were unable to recover it. 00:27:25.290 [2024-07-24 18:22:18.252095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.290 [2024-07-24 18:22:18.252125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.290 qpair failed and we were unable to recover it. 00:27:25.290 [2024-07-24 18:22:18.252312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.290 [2024-07-24 18:22:18.252342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.290 qpair failed and we were unable to recover it. 00:27:25.290 [2024-07-24 18:22:18.252468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.290 [2024-07-24 18:22:18.252478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.290 qpair failed and we were unable to recover it. 00:27:25.290 [2024-07-24 18:22:18.252691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.290 [2024-07-24 18:22:18.252722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.290 qpair failed and we were unable to recover it. 00:27:25.290 [2024-07-24 18:22:18.252992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.290 [2024-07-24 18:22:18.253028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.290 qpair failed and we were unable to recover it. 00:27:25.290 [2024-07-24 18:22:18.253227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.290 [2024-07-24 18:22:18.253257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.290 qpair failed and we were unable to recover it. 00:27:25.290 [2024-07-24 18:22:18.253466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.290 [2024-07-24 18:22:18.253506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.290 qpair failed and we were unable to recover it. 00:27:25.290 [2024-07-24 18:22:18.253641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.290 [2024-07-24 18:22:18.253671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.290 qpair failed and we were unable to recover it. 00:27:25.290 [2024-07-24 18:22:18.253800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.290 [2024-07-24 18:22:18.253829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.290 qpair failed and we were unable to recover it. 
00:27:25.290 [2024-07-24 18:22:18.253958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.290 [2024-07-24 18:22:18.253988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.290 qpair failed and we were unable to recover it. 00:27:25.290 [2024-07-24 18:22:18.254245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.290 [2024-07-24 18:22:18.254275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.290 qpair failed and we were unable to recover it. 00:27:25.290 [2024-07-24 18:22:18.254485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.291 [2024-07-24 18:22:18.254524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.291 qpair failed and we were unable to recover it. 00:27:25.291 [2024-07-24 18:22:18.254783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.291 [2024-07-24 18:22:18.254813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.291 qpair failed and we were unable to recover it. 00:27:25.291 [2024-07-24 18:22:18.255020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.291 [2024-07-24 18:22:18.255050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.291 qpair failed and we were unable to recover it. 00:27:25.291 [2024-07-24 18:22:18.255176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.291 [2024-07-24 18:22:18.255206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.291 qpair failed and we were unable to recover it. 00:27:25.291 [2024-07-24 18:22:18.255483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.291 [2024-07-24 18:22:18.255524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.291 qpair failed and we were unable to recover it. 00:27:25.291 [2024-07-24 18:22:18.255802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.291 [2024-07-24 18:22:18.255831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.291 qpair failed and we were unable to recover it. 00:27:25.291 [2024-07-24 18:22:18.255969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.291 [2024-07-24 18:22:18.255978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.291 qpair failed and we were unable to recover it. 00:27:25.291 [2024-07-24 18:22:18.256067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.291 [2024-07-24 18:22:18.256077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.291 qpair failed and we were unable to recover it. 
00:27:25.291 [2024-07-24 18:22:18.256232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.291 [2024-07-24 18:22:18.256241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.291 qpair failed and we were unable to recover it.
00:27:25.291 [2024-07-24 18:22:18.256474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.291 [2024-07-24 18:22:18.256484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.291 qpair failed and we were unable to recover it.
00:27:25.291 [2024-07-24 18:22:18.256552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.291 [2024-07-24 18:22:18.256562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.291 qpair failed and we were unable to recover it.
00:27:25.291 [2024-07-24 18:22:18.256716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.291 [2024-07-24 18:22:18.256726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.291 qpair failed and we were unable to recover it.
00:27:25.291 [2024-07-24 18:22:18.256930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.291 [2024-07-24 18:22:18.256960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.291 qpair failed and we were unable to recover it.
00:27:25.291 [2024-07-24 18:22:18.257227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.291 [2024-07-24 18:22:18.257257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.291 qpair failed and we were unable to recover it.
00:27:25.291 [2024-07-24 18:22:18.257422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.291 [2024-07-24 18:22:18.257432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.291 qpair failed and we were unable to recover it.
00:27:25.291 [2024-07-24 18:22:18.257519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.291 [2024-07-24 18:22:18.257529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.291 qpair failed and we were unable to recover it.
00:27:25.291 [2024-07-24 18:22:18.257673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.291 [2024-07-24 18:22:18.257704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.291 qpair failed and we were unable to recover it.
00:27:25.291 [2024-07-24 18:22:18.257903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.291 [2024-07-24 18:22:18.257933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.291 qpair failed and we were unable to recover it.
00:27:25.291 [2024-07-24 18:22:18.258129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.291 [2024-07-24 18:22:18.258158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.291 qpair failed and we were unable to recover it.
00:27:25.291 [2024-07-24 18:22:18.258370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.291 [2024-07-24 18:22:18.258399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.291 qpair failed and we were unable to recover it.
00:27:25.291 [2024-07-24 18:22:18.258696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.291 [2024-07-24 18:22:18.258765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.291 qpair failed and we were unable to recover it.
00:27:25.291 [2024-07-24 18:22:18.258984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.291 [2024-07-24 18:22:18.259018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.291 qpair failed and we were unable to recover it.
00:27:25.291 [2024-07-24 18:22:18.259152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.291 [2024-07-24 18:22:18.259184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.291 qpair failed and we were unable to recover it.
00:27:25.291 [2024-07-24 18:22:18.259416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.291 [2024-07-24 18:22:18.259447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.291 qpair failed and we were unable to recover it.
00:27:25.291 [2024-07-24 18:22:18.259648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.291 [2024-07-24 18:22:18.259680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.291 qpair failed and we were unable to recover it.
00:27:25.291 [2024-07-24 18:22:18.259829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.291 [2024-07-24 18:22:18.259859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.291 qpair failed and we were unable to recover it.
00:27:25.291 [2024-07-24 18:22:18.260111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.291 [2024-07-24 18:22:18.260142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.291 qpair failed and we were unable to recover it.
00:27:25.291 [2024-07-24 18:22:18.260333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.291 [2024-07-24 18:22:18.260362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.291 qpair failed and we were unable to recover it.
00:27:25.291 [2024-07-24 18:22:18.260483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.291 [2024-07-24 18:22:18.260523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.291 qpair failed and we were unable to recover it.
00:27:25.291 [2024-07-24 18:22:18.260715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.291 [2024-07-24 18:22:18.260745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.291 qpair failed and we were unable to recover it.
00:27:25.291 [2024-07-24 18:22:18.261031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.291 [2024-07-24 18:22:18.261061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.291 qpair failed and we were unable to recover it.
00:27:25.291 [2024-07-24 18:22:18.261274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.291 [2024-07-24 18:22:18.261304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.291 qpair failed and we were unable to recover it.
00:27:25.291 [2024-07-24 18:22:18.261429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.291 [2024-07-24 18:22:18.261459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.291 qpair failed and we were unable to recover it.
00:27:25.291 [2024-07-24 18:22:18.261723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.291 [2024-07-24 18:22:18.261755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.291 qpair failed and we were unable to recover it.
00:27:25.291 [2024-07-24 18:22:18.262036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.291 [2024-07-24 18:22:18.262067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.291 qpair failed and we were unable to recover it.
00:27:25.291 [2024-07-24 18:22:18.262268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.291 [2024-07-24 18:22:18.262298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.291 qpair failed and we were unable to recover it.
00:27:25.291 [2024-07-24 18:22:18.262472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.291 [2024-07-24 18:22:18.262511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.291 qpair failed and we were unable to recover it.
00:27:25.291 [2024-07-24 18:22:18.262719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.291 [2024-07-24 18:22:18.262751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.291 qpair failed and we were unable to recover it.
00:27:25.291 [2024-07-24 18:22:18.262936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.291 [2024-07-24 18:22:18.262966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.291 qpair failed and we were unable to recover it.
00:27:25.291 [2024-07-24 18:22:18.263108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.292 [2024-07-24 18:22:18.263138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.292 qpair failed and we were unable to recover it.
00:27:25.292 [2024-07-24 18:22:18.263324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.292 [2024-07-24 18:22:18.263354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.292 qpair failed and we were unable to recover it.
00:27:25.292 [2024-07-24 18:22:18.263596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.292 [2024-07-24 18:22:18.263608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.292 qpair failed and we were unable to recover it.
00:27:25.292 [2024-07-24 18:22:18.263771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.292 [2024-07-24 18:22:18.263781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.292 qpair failed and we were unable to recover it.
00:27:25.292 [2024-07-24 18:22:18.263962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.292 [2024-07-24 18:22:18.263991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.292 qpair failed and we were unable to recover it.
00:27:25.292 [2024-07-24 18:22:18.264211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.292 [2024-07-24 18:22:18.264240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.292 qpair failed and we were unable to recover it.
00:27:25.292 [2024-07-24 18:22:18.264364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.292 [2024-07-24 18:22:18.264403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.292 qpair failed and we were unable to recover it.
00:27:25.292 [2024-07-24 18:22:18.264553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.292 [2024-07-24 18:22:18.264563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.292 qpair failed and we were unable to recover it.
00:27:25.292 [2024-07-24 18:22:18.264660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.292 [2024-07-24 18:22:18.264669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.292 qpair failed and we were unable to recover it.
00:27:25.292 [2024-07-24 18:22:18.264751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.292 [2024-07-24 18:22:18.264761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.292 qpair failed and we were unable to recover it.
00:27:25.292 [2024-07-24 18:22:18.264838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.292 [2024-07-24 18:22:18.264847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.292 qpair failed and we were unable to recover it.
00:27:25.292 [2024-07-24 18:22:18.265004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.292 [2024-07-24 18:22:18.265034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.292 qpair failed and we were unable to recover it.
00:27:25.292 [2024-07-24 18:22:18.265172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.292 [2024-07-24 18:22:18.265202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.292 qpair failed and we were unable to recover it.
00:27:25.292 [2024-07-24 18:22:18.265405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.292 [2024-07-24 18:22:18.265435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.292 qpair failed and we were unable to recover it.
00:27:25.292 [2024-07-24 18:22:18.265567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.292 [2024-07-24 18:22:18.265579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.292 qpair failed and we were unable to recover it.
00:27:25.292 [2024-07-24 18:22:18.265676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.292 [2024-07-24 18:22:18.265686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.292 qpair failed and we were unable to recover it.
00:27:25.292 [2024-07-24 18:22:18.265784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.292 [2024-07-24 18:22:18.265794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.292 qpair failed and we were unable to recover it.
00:27:25.292 [2024-07-24 18:22:18.265931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.292 [2024-07-24 18:22:18.265941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.292 qpair failed and we were unable to recover it.
00:27:25.292 [2024-07-24 18:22:18.266101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.292 [2024-07-24 18:22:18.266131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.292 qpair failed and we were unable to recover it.
00:27:25.292 [2024-07-24 18:22:18.266412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.292 [2024-07-24 18:22:18.266442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.292 qpair failed and we were unable to recover it.
00:27:25.292 [2024-07-24 18:22:18.266583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.292 [2024-07-24 18:22:18.266614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.292 qpair failed and we were unable to recover it.
00:27:25.292 [2024-07-24 18:22:18.266818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.292 [2024-07-24 18:22:18.266853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.292 qpair failed and we were unable to recover it.
00:27:25.292 [2024-07-24 18:22:18.267042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.292 [2024-07-24 18:22:18.267071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.292 qpair failed and we were unable to recover it.
00:27:25.292 [2024-07-24 18:22:18.267204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.292 [2024-07-24 18:22:18.267235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.292 qpair failed and we were unable to recover it.
00:27:25.292 [2024-07-24 18:22:18.267488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.292 [2024-07-24 18:22:18.267529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.292 qpair failed and we were unable to recover it.
00:27:25.292 [2024-07-24 18:22:18.267666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.292 [2024-07-24 18:22:18.267696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.292 qpair failed and we were unable to recover it.
00:27:25.292 [2024-07-24 18:22:18.267891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.292 [2024-07-24 18:22:18.267922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.292 qpair failed and we were unable to recover it.
00:27:25.292 [2024-07-24 18:22:18.268106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.292 [2024-07-24 18:22:18.268135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.292 qpair failed and we were unable to recover it.
00:27:25.292 [2024-07-24 18:22:18.268383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.292 [2024-07-24 18:22:18.268413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.292 qpair failed and we were unable to recover it.
00:27:25.292 [2024-07-24 18:22:18.268559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.292 [2024-07-24 18:22:18.268591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.292 qpair failed and we were unable to recover it.
00:27:25.292 [2024-07-24 18:22:18.268733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.292 [2024-07-24 18:22:18.268764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.292 qpair failed and we were unable to recover it.
00:27:25.292 [2024-07-24 18:22:18.268888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.292 [2024-07-24 18:22:18.268918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.292 qpair failed and we were unable to recover it.
00:27:25.292 [2024-07-24 18:22:18.269116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.292 [2024-07-24 18:22:18.269147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.292 qpair failed and we were unable to recover it.
00:27:25.292 [2024-07-24 18:22:18.269418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.292 [2024-07-24 18:22:18.269448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.292 qpair failed and we were unable to recover it.
00:27:25.292 [2024-07-24 18:22:18.269595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.292 [2024-07-24 18:22:18.269627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.292 qpair failed and we were unable to recover it.
00:27:25.292 [2024-07-24 18:22:18.269774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.292 [2024-07-24 18:22:18.269805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.292 qpair failed and we were unable to recover it.
00:27:25.292 [2024-07-24 18:22:18.270015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.292 [2024-07-24 18:22:18.270046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.292 qpair failed and we were unable to recover it.
00:27:25.292 [2024-07-24 18:22:18.270187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.292 [2024-07-24 18:22:18.270197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.292 qpair failed and we were unable to recover it.
00:27:25.292 [2024-07-24 18:22:18.270363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.292 [2024-07-24 18:22:18.270393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.292 qpair failed and we were unable to recover it.
00:27:25.292 [2024-07-24 18:22:18.270590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.292 [2024-07-24 18:22:18.270621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.292 qpair failed and we were unable to recover it.
00:27:25.292 [2024-07-24 18:22:18.270740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.292 [2024-07-24 18:22:18.270770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.292 qpair failed and we were unable to recover it.
00:27:25.292 [2024-07-24 18:22:18.270953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.293 [2024-07-24 18:22:18.270983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.293 qpair failed and we were unable to recover it.
00:27:25.293 [2024-07-24 18:22:18.271239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.293 [2024-07-24 18:22:18.271249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.293 qpair failed and we were unable to recover it.
00:27:25.293 [2024-07-24 18:22:18.271460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.293 [2024-07-24 18:22:18.271470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.293 qpair failed and we were unable to recover it.
00:27:25.293 [2024-07-24 18:22:18.271573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.293 [2024-07-24 18:22:18.271583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.293 qpair failed and we were unable to recover it.
00:27:25.293 [2024-07-24 18:22:18.271801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.293 [2024-07-24 18:22:18.271811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.293 qpair failed and we were unable to recover it.
00:27:25.293 [2024-07-24 18:22:18.272070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.293 [2024-07-24 18:22:18.272101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.293 qpair failed and we were unable to recover it.
00:27:25.293 [2024-07-24 18:22:18.272287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.293 [2024-07-24 18:22:18.272317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.293 qpair failed and we were unable to recover it.
00:27:25.293 [2024-07-24 18:22:18.272512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.293 [2024-07-24 18:22:18.272544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.293 qpair failed and we were unable to recover it.
00:27:25.293 [2024-07-24 18:22:18.272774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.293 [2024-07-24 18:22:18.272804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.293 qpair failed and we were unable to recover it.
00:27:25.293 [2024-07-24 18:22:18.273052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.293 [2024-07-24 18:22:18.273083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.293 qpair failed and we were unable to recover it.
00:27:25.293 [2024-07-24 18:22:18.273242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.293 [2024-07-24 18:22:18.273251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.293 qpair failed and we were unable to recover it.
00:27:25.293 [2024-07-24 18:22:18.273443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.293 [2024-07-24 18:22:18.273473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.293 qpair failed and we were unable to recover it.
00:27:25.293 [2024-07-24 18:22:18.273642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.293 [2024-07-24 18:22:18.273673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.293 qpair failed and we were unable to recover it.
00:27:25.293 [2024-07-24 18:22:18.273882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.293 [2024-07-24 18:22:18.273913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.293 qpair failed and we were unable to recover it.
00:27:25.293 [2024-07-24 18:22:18.274030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.293 [2024-07-24 18:22:18.274060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.293 qpair failed and we were unable to recover it.
00:27:25.293 [2024-07-24 18:22:18.274256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.293 [2024-07-24 18:22:18.274286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.293 qpair failed and we were unable to recover it.
00:27:25.293 [2024-07-24 18:22:18.274768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.293 [2024-07-24 18:22:18.274783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.293 qpair failed and we were unable to recover it.
00:27:25.293 [2024-07-24 18:22:18.274986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.293 [2024-07-24 18:22:18.274997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.293 qpair failed and we were unable to recover it.
00:27:25.293 [2024-07-24 18:22:18.275207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.293 [2024-07-24 18:22:18.275217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.293 qpair failed and we were unable to recover it.
00:27:25.293 [2024-07-24 18:22:18.275345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.293 [2024-07-24 18:22:18.275355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.293 qpair failed and we were unable to recover it.
00:27:25.293 [2024-07-24 18:22:18.275445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.293 [2024-07-24 18:22:18.275456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.293 qpair failed and we were unable to recover it.
00:27:25.293 [2024-07-24 18:22:18.275614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.293 [2024-07-24 18:22:18.275625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.293 qpair failed and we were unable to recover it.
00:27:25.293 [2024-07-24 18:22:18.275721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.293 [2024-07-24 18:22:18.275732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.293 qpair failed and we were unable to recover it.
00:27:25.293 [2024-07-24 18:22:18.275826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.293 [2024-07-24 18:22:18.275835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.293 qpair failed and we were unable to recover it.
00:27:25.293 [2024-07-24 18:22:18.276043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.293 [2024-07-24 18:22:18.276053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.293 qpair failed and we were unable to recover it.
00:27:25.293 [2024-07-24 18:22:18.276202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.293 [2024-07-24 18:22:18.276212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.293 qpair failed and we were unable to recover it.
00:27:25.293 [2024-07-24 18:22:18.276350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.293 [2024-07-24 18:22:18.276361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.293 qpair failed and we were unable to recover it.
00:27:25.293 [2024-07-24 18:22:18.276453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.293 [2024-07-24 18:22:18.276465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.293 qpair failed and we were unable to recover it.
00:27:25.293 [2024-07-24 18:22:18.276626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.293 [2024-07-24 18:22:18.276637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.293 qpair failed and we were unable to recover it.
00:27:25.293 [2024-07-24 18:22:18.276740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.293 [2024-07-24 18:22:18.276750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.293 qpair failed and we were unable to recover it.
00:27:25.293 [2024-07-24 18:22:18.276890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.293 [2024-07-24 18:22:18.276901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.293 qpair failed and we were unable to recover it.
00:27:25.293 [2024-07-24 18:22:18.277050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.293 [2024-07-24 18:22:18.277060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.293 qpair failed and we were unable to recover it.
00:27:25.293 [2024-07-24 18:22:18.277137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.293 [2024-07-24 18:22:18.277147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.293 qpair failed and we were unable to recover it.
00:27:25.293 [2024-07-24 18:22:18.277255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.293 [2024-07-24 18:22:18.277264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.293 qpair failed and we were unable to recover it.
00:27:25.293 [2024-07-24 18:22:18.277447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.293 [2024-07-24 18:22:18.277457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.293 qpair failed and we were unable to recover it.
00:27:25.293 [2024-07-24 18:22:18.277542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.293 [2024-07-24 18:22:18.277552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.293 qpair failed and we were unable to recover it.
00:27:25.293 [2024-07-24 18:22:18.277637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.293 [2024-07-24 18:22:18.277647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.293 qpair failed and we were unable to recover it.
00:27:25.293 [2024-07-24 18:22:18.277789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.293 [2024-07-24 18:22:18.277799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.293 qpair failed and we were unable to recover it.
00:27:25.293 [2024-07-24 18:22:18.277880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.293 [2024-07-24 18:22:18.277890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.293 qpair failed and we were unable to recover it.
00:27:25.293 [2024-07-24 18:22:18.277971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.293 [2024-07-24 18:22:18.277981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.293 qpair failed and we were unable to recover it.
00:27:25.293 [2024-07-24 18:22:18.278073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.293 [2024-07-24 18:22:18.278083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.293 qpair failed and we were unable to recover it.
00:27:25.293 [2024-07-24 18:22:18.278172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.294 [2024-07-24 18:22:18.278181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.294 qpair failed and we were unable to recover it.
00:27:25.294 [2024-07-24 18:22:18.278278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.294 [2024-07-24 18:22:18.278288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.294 qpair failed and we were unable to recover it.
00:27:25.294 [2024-07-24 18:22:18.278354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.294 [2024-07-24 18:22:18.278363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.294 qpair failed and we were unable to recover it.
00:27:25.294 [2024-07-24 18:22:18.278458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.294 [2024-07-24 18:22:18.278468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.294 qpair failed and we were unable to recover it.
00:27:25.294 [2024-07-24 18:22:18.278604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.294 [2024-07-24 18:22:18.278614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.294 qpair failed and we were unable to recover it.
00:27:25.294 [2024-07-24 18:22:18.278700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.294 [2024-07-24 18:22:18.278710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.294 qpair failed and we were unable to recover it.
00:27:25.294 [2024-07-24 18:22:18.278793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.294 [2024-07-24 18:22:18.278803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.294 qpair failed and we were unable to recover it.
00:27:25.294 [2024-07-24 18:22:18.278892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.294 [2024-07-24 18:22:18.278902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.294 qpair failed and we were unable to recover it.
00:27:25.294 [2024-07-24 18:22:18.278983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.294 [2024-07-24 18:22:18.278992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.294 qpair failed and we were unable to recover it.
00:27:25.294 [2024-07-24 18:22:18.279128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.294 [2024-07-24 18:22:18.279138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.294 qpair failed and we were unable to recover it.
00:27:25.294 [2024-07-24 18:22:18.279225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.294 [2024-07-24 18:22:18.279235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.294 qpair failed and we were unable to recover it.
00:27:25.294 [2024-07-24 18:22:18.279388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.294 [2024-07-24 18:22:18.279398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.294 qpair failed and we were unable to recover it.
00:27:25.294 [2024-07-24 18:22:18.279485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.294 [2024-07-24 18:22:18.279499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.294 qpair failed and we were unable to recover it.
00:27:25.294 [2024-07-24 18:22:18.279577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.294 [2024-07-24 18:22:18.279586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.294 qpair failed and we were unable to recover it.
00:27:25.294 [2024-07-24 18:22:18.279694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.294 [2024-07-24 18:22:18.279704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.294 qpair failed and we were unable to recover it.
00:27:25.294 [2024-07-24 18:22:18.279784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.294 [2024-07-24 18:22:18.279793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.294 qpair failed and we were unable to recover it.
00:27:25.294 [2024-07-24 18:22:18.279954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.294 [2024-07-24 18:22:18.279984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.294 qpair failed and we were unable to recover it.
00:27:25.294 [2024-07-24 18:22:18.280101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.294 [2024-07-24 18:22:18.280132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.294 qpair failed and we were unable to recover it.
00:27:25.294 [2024-07-24 18:22:18.280324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.294 [2024-07-24 18:22:18.280354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.294 qpair failed and we were unable to recover it.
00:27:25.294 [2024-07-24 18:22:18.280537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.294 [2024-07-24 18:22:18.280574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.294 qpair failed and we were unable to recover it.
00:27:25.294 [2024-07-24 18:22:18.280724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.294 [2024-07-24 18:22:18.280754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.294 qpair failed and we were unable to recover it.
00:27:25.294 [2024-07-24 18:22:18.280946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.294 [2024-07-24 18:22:18.280975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.294 qpair failed and we were unable to recover it.
00:27:25.294 [2024-07-24 18:22:18.281194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.294 [2024-07-24 18:22:18.281224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.294 qpair failed and we were unable to recover it.
00:27:25.294 [2024-07-24 18:22:18.281363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.294 [2024-07-24 18:22:18.281395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.294 qpair failed and we were unable to recover it.
00:27:25.294 [2024-07-24 18:22:18.281542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.294 [2024-07-24 18:22:18.281553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.294 qpair failed and we were unable to recover it.
00:27:25.294 [2024-07-24 18:22:18.281690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.294 [2024-07-24 18:22:18.281701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.294 qpair failed and we were unable to recover it.
00:27:25.294 [2024-07-24 18:22:18.281775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.294 [2024-07-24 18:22:18.281784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.294 qpair failed and we were unable to recover it.
00:27:25.294 [2024-07-24 18:22:18.281855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.294 [2024-07-24 18:22:18.281864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.294 qpair failed and we were unable to recover it.
00:27:25.294 [2024-07-24 18:22:18.281953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.294 [2024-07-24 18:22:18.281963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.294 qpair failed and we were unable to recover it.
00:27:25.294 [2024-07-24 18:22:18.282025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.294 [2024-07-24 18:22:18.282034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.294 qpair failed and we were unable to recover it.
00:27:25.294 [2024-07-24 18:22:18.282220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.294 [2024-07-24 18:22:18.282250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.294 qpair failed and we were unable to recover it.
00:27:25.294 [2024-07-24 18:22:18.282445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.294 [2024-07-24 18:22:18.282475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.294 qpair failed and we were unable to recover it.
00:27:25.294 [2024-07-24 18:22:18.282631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.294 [2024-07-24 18:22:18.282663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.294 qpair failed and we were unable to recover it.
00:27:25.294 [2024-07-24 18:22:18.282887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.294 [2024-07-24 18:22:18.282918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.294 qpair failed and we were unable to recover it.
00:27:25.294 [2024-07-24 18:22:18.283070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.294 [2024-07-24 18:22:18.283100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.294 qpair failed and we were unable to recover it.
00:27:25.294 [2024-07-24 18:22:18.283276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.294 [2024-07-24 18:22:18.283286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.294 qpair failed and we were unable to recover it.
00:27:25.294 [2024-07-24 18:22:18.283443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.294 [2024-07-24 18:22:18.283473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.294 qpair failed and we were unable to recover it.
00:27:25.294 [2024-07-24 18:22:18.283755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.294 [2024-07-24 18:22:18.283785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.294 qpair failed and we were unable to recover it.
00:27:25.294 [2024-07-24 18:22:18.283983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.294 [2024-07-24 18:22:18.284013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.294 qpair failed and we were unable to recover it.
00:27:25.294 [2024-07-24 18:22:18.284152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.294 [2024-07-24 18:22:18.284182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.294 qpair failed and we were unable to recover it.
00:27:25.294 [2024-07-24 18:22:18.284362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.294 [2024-07-24 18:22:18.284392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.294 qpair failed and we were unable to recover it.
00:27:25.294 [2024-07-24 18:22:18.284521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.294 [2024-07-24 18:22:18.284552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.294 qpair failed and we were unable to recover it.
00:27:25.294 [2024-07-24 18:22:18.284742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.294 [2024-07-24 18:22:18.284772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.294 qpair failed and we were unable to recover it.
00:27:25.295 [2024-07-24 18:22:18.285033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.295 [2024-07-24 18:22:18.285063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.295 qpair failed and we were unable to recover it.
00:27:25.295 [2024-07-24 18:22:18.285258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.295 [2024-07-24 18:22:18.285288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.295 qpair failed and we were unable to recover it.
00:27:25.295 [2024-07-24 18:22:18.285412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.295 [2024-07-24 18:22:18.285445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.295 qpair failed and we were unable to recover it.
00:27:25.295 [2024-07-24 18:22:18.285615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.295 [2024-07-24 18:22:18.285626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.295 qpair failed and we were unable to recover it.
00:27:25.295 [2024-07-24 18:22:18.285692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.295 [2024-07-24 18:22:18.285706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.295 qpair failed and we were unable to recover it.
00:27:25.295 [2024-07-24 18:22:18.285846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.295 [2024-07-24 18:22:18.285855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.295 qpair failed and we were unable to recover it.
00:27:25.295 [2024-07-24 18:22:18.286033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.295 [2024-07-24 18:22:18.286063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.295 qpair failed and we were unable to recover it.
00:27:25.295 [2024-07-24 18:22:18.286179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.295 [2024-07-24 18:22:18.286209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.295 qpair failed and we were unable to recover it.
00:27:25.295 [2024-07-24 18:22:18.286348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.295 [2024-07-24 18:22:18.286378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.295 qpair failed and we were unable to recover it.
00:27:25.295 [2024-07-24 18:22:18.286631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.295 [2024-07-24 18:22:18.286662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.295 qpair failed and we were unable to recover it.
00:27:25.295 [2024-07-24 18:22:18.286846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.295 [2024-07-24 18:22:18.286876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.295 qpair failed and we were unable to recover it.
00:27:25.295 [2024-07-24 18:22:18.287016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.295 [2024-07-24 18:22:18.287046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.295 qpair failed and we were unable to recover it.
00:27:25.295 [2024-07-24 18:22:18.287179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.295 [2024-07-24 18:22:18.287209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.295 qpair failed and we were unable to recover it.
00:27:25.295 [2024-07-24 18:22:18.287390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.295 [2024-07-24 18:22:18.287420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.295 qpair failed and we were unable to recover it.
00:27:25.295 [2024-07-24 18:22:18.287547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.295 [2024-07-24 18:22:18.287578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.295 qpair failed and we were unable to recover it.
00:27:25.295 [2024-07-24 18:22:18.287764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.295 [2024-07-24 18:22:18.287795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.295 qpair failed and we were unable to recover it.
00:27:25.295 [2024-07-24 18:22:18.287932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.295 [2024-07-24 18:22:18.287963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.295 qpair failed and we were unable to recover it.
00:27:25.295 [2024-07-24 18:22:18.288100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.295 [2024-07-24 18:22:18.288130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.295 qpair failed and we were unable to recover it.
00:27:25.295 [2024-07-24 18:22:18.288318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.295 [2024-07-24 18:22:18.288347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.295 qpair failed and we were unable to recover it.
00:27:25.295 [2024-07-24 18:22:18.288573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.295 [2024-07-24 18:22:18.288604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.295 qpair failed and we were unable to recover it.
00:27:25.295 [2024-07-24 18:22:18.288803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.295 [2024-07-24 18:22:18.288832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.295 qpair failed and we were unable to recover it.
00:27:25.295 [2024-07-24 18:22:18.289031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.295 [2024-07-24 18:22:18.289061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.295 qpair failed and we were unable to recover it.
00:27:25.295 [2024-07-24 18:22:18.289357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.295 [2024-07-24 18:22:18.289386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.295 qpair failed and we were unable to recover it.
00:27:25.295 [2024-07-24 18:22:18.289596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.295 [2024-07-24 18:22:18.289627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.295 qpair failed and we were unable to recover it.
00:27:25.295 [2024-07-24 18:22:18.289880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.296 [2024-07-24 18:22:18.289911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.296 qpair failed and we were unable to recover it.
00:27:25.296 [2024-07-24 18:22:18.290102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.296 [2024-07-24 18:22:18.290132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.296 qpair failed and we were unable to recover it.
00:27:25.296 [2024-07-24 18:22:18.290284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.296 [2024-07-24 18:22:18.290314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.296 qpair failed and we were unable to recover it.
00:27:25.296 [2024-07-24 18:22:18.290564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.296 [2024-07-24 18:22:18.290595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.296 qpair failed and we were unable to recover it.
00:27:25.296 [2024-07-24 18:22:18.290832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.296 [2024-07-24 18:22:18.290862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.296 qpair failed and we were unable to recover it.
00:27:25.296 [2024-07-24 18:22:18.291042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.296 [2024-07-24 18:22:18.291072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.296 qpair failed and we were unable to recover it.
00:27:25.296 [2024-07-24 18:22:18.291206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.296 [2024-07-24 18:22:18.291236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.296 qpair failed and we were unable to recover it.
00:27:25.296 [2024-07-24 18:22:18.291438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.296 [2024-07-24 18:22:18.291447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.296 qpair failed and we were unable to recover it.
00:27:25.296 [2024-07-24 18:22:18.291590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.296 [2024-07-24 18:22:18.291600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.296 qpair failed and we were unable to recover it.
00:27:25.296 [2024-07-24 18:22:18.291672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.296 [2024-07-24 18:22:18.291682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.296 qpair failed and we were unable to recover it.
00:27:25.296 [2024-07-24 18:22:18.291909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.296 [2024-07-24 18:22:18.291940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.296 qpair failed and we were unable to recover it.
00:27:25.296 [2024-07-24 18:22:18.292124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.296 [2024-07-24 18:22:18.292153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.296 qpair failed and we were unable to recover it.
00:27:25.296 [2024-07-24 18:22:18.292351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.296 [2024-07-24 18:22:18.292381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.296 qpair failed and we were unable to recover it.
00:27:25.296 [2024-07-24 18:22:18.292571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.296 [2024-07-24 18:22:18.292582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.296 qpair failed and we were unable to recover it.
00:27:25.296 [2024-07-24 18:22:18.292738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.296 [2024-07-24 18:22:18.292769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.296 qpair failed and we were unable to recover it.
00:27:25.296 [2024-07-24 18:22:18.292906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.296 [2024-07-24 18:22:18.292936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.296 qpair failed and we were unable to recover it.
00:27:25.296 [2024-07-24 18:22:18.293184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.296 [2024-07-24 18:22:18.293214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.296 qpair failed and we were unable to recover it.
00:27:25.296 [2024-07-24 18:22:18.293403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.296 [2024-07-24 18:22:18.293412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.296 qpair failed and we were unable to recover it.
00:27:25.296 [2024-07-24 18:22:18.293566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.296 [2024-07-24 18:22:18.293597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.296 qpair failed and we were unable to recover it.
00:27:25.296 [2024-07-24 18:22:18.293795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.296 [2024-07-24 18:22:18.293829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.296 qpair failed and we were unable to recover it.
00:27:25.296 [2024-07-24 18:22:18.294034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.296 [2024-07-24 18:22:18.294064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.296 qpair failed and we were unable to recover it.
00:27:25.296 [2024-07-24 18:22:18.294190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.296 [2024-07-24 18:22:18.294200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.296 qpair failed and we were unable to recover it.
00:27:25.296 [2024-07-24 18:22:18.294355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.296 [2024-07-24 18:22:18.294383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.296 qpair failed and we were unable to recover it.
00:27:25.296 [2024-07-24 18:22:18.294583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.296 [2024-07-24 18:22:18.294614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.296 qpair failed and we were unable to recover it.
00:27:25.296 [2024-07-24 18:22:18.294799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.296 [2024-07-24 18:22:18.294829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.296 qpair failed and we were unable to recover it.
00:27:25.296 [2024-07-24 18:22:18.295030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.296 [2024-07-24 18:22:18.295060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.296 qpair failed and we were unable to recover it.
00:27:25.296 [2024-07-24 18:22:18.295312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.296 [2024-07-24 18:22:18.295342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.296 qpair failed and we were unable to recover it.
00:27:25.296 [2024-07-24 18:22:18.295488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.296 [2024-07-24 18:22:18.295527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.296 qpair failed and we were unable to recover it.
00:27:25.296 [2024-07-24 18:22:18.295657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.296 [2024-07-24 18:22:18.295687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.296 qpair failed and we were unable to recover it.
00:27:25.296 [2024-07-24 18:22:18.295954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.296 [2024-07-24 18:22:18.295985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.296 qpair failed and we were unable to recover it.
00:27:25.296 [2024-07-24 18:22:18.296132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.296 [2024-07-24 18:22:18.296162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.296 qpair failed and we were unable to recover it. 00:27:25.296 [2024-07-24 18:22:18.296298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.296 [2024-07-24 18:22:18.296327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.296 qpair failed and we were unable to recover it. 00:27:25.296 [2024-07-24 18:22:18.296466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.296 [2024-07-24 18:22:18.296476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.296 qpair failed and we were unable to recover it. 00:27:25.296 [2024-07-24 18:22:18.296621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.296 [2024-07-24 18:22:18.296659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.296 qpair failed and we were unable to recover it. 00:27:25.296 [2024-07-24 18:22:18.296796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.296 [2024-07-24 18:22:18.296826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.296 qpair failed and we were unable to recover it. 00:27:25.296 [2024-07-24 18:22:18.296951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.296 [2024-07-24 18:22:18.296981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.296 qpair failed and we were unable to recover it. 00:27:25.296 [2024-07-24 18:22:18.297180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.296 [2024-07-24 18:22:18.297210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.296 qpair failed and we were unable to recover it. 00:27:25.296 [2024-07-24 18:22:18.297330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.296 [2024-07-24 18:22:18.297340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.297 qpair failed and we were unable to recover it. 00:27:25.297 [2024-07-24 18:22:18.297431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.297 [2024-07-24 18:22:18.297441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.297 qpair failed and we were unable to recover it. 00:27:25.297 [2024-07-24 18:22:18.297551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.297 [2024-07-24 18:22:18.297582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.297 qpair failed and we were unable to recover it. 
00:27:25.297 [2024-07-24 18:22:18.297766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.297 [2024-07-24 18:22:18.297797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.297 qpair failed and we were unable to recover it. 00:27:25.297 [2024-07-24 18:22:18.298002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.297 [2024-07-24 18:22:18.298032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.297 qpair failed and we were unable to recover it. 00:27:25.297 [2024-07-24 18:22:18.298158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.297 [2024-07-24 18:22:18.298168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.297 qpair failed and we were unable to recover it. 00:27:25.297 [2024-07-24 18:22:18.298257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.297 [2024-07-24 18:22:18.298267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.297 qpair failed and we were unable to recover it. 00:27:25.297 [2024-07-24 18:22:18.298424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.297 [2024-07-24 18:22:18.298433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.297 qpair failed and we were unable to recover it. 00:27:25.297 [2024-07-24 18:22:18.298713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.297 [2024-07-24 18:22:18.298744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.297 qpair failed and we were unable to recover it. 00:27:25.297 [2024-07-24 18:22:18.298968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.297 [2024-07-24 18:22:18.298998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.297 qpair failed and we were unable to recover it. 00:27:25.297 [2024-07-24 18:22:18.299181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.297 [2024-07-24 18:22:18.299190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.297 qpair failed and we were unable to recover it. 00:27:25.297 [2024-07-24 18:22:18.299274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.297 [2024-07-24 18:22:18.299284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.297 qpair failed and we were unable to recover it. 00:27:25.297 [2024-07-24 18:22:18.299533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.297 [2024-07-24 18:22:18.299543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.297 qpair failed and we were unable to recover it. 
00:27:25.297 [2024-07-24 18:22:18.299710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.297 [2024-07-24 18:22:18.299719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.297 qpair failed and we were unable to recover it. 00:27:25.297 [2024-07-24 18:22:18.299877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.297 [2024-07-24 18:22:18.299887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.297 qpair failed and we were unable to recover it. 00:27:25.297 [2024-07-24 18:22:18.300040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.297 [2024-07-24 18:22:18.300050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.297 qpair failed and we were unable to recover it. 00:27:25.297 [2024-07-24 18:22:18.300222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.297 [2024-07-24 18:22:18.300232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.297 qpair failed and we were unable to recover it. 00:27:25.297 [2024-07-24 18:22:18.300368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.297 [2024-07-24 18:22:18.300377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.297 qpair failed and we were unable to recover it. 00:27:25.297 [2024-07-24 18:22:18.300525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.297 [2024-07-24 18:22:18.300535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.297 qpair failed and we were unable to recover it. 00:27:25.297 [2024-07-24 18:22:18.300675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.297 [2024-07-24 18:22:18.300685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.297 qpair failed and we were unable to recover it. 00:27:25.297 [2024-07-24 18:22:18.300854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.297 [2024-07-24 18:22:18.300884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.297 qpair failed and we were unable to recover it. 00:27:25.297 [2024-07-24 18:22:18.301016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.297 [2024-07-24 18:22:18.301046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.297 qpair failed and we were unable to recover it. 00:27:25.297 [2024-07-24 18:22:18.301302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.297 [2024-07-24 18:22:18.301338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.297 qpair failed and we were unable to recover it. 
00:27:25.297 [2024-07-24 18:22:18.301596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.297 [2024-07-24 18:22:18.301606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.297 qpair failed and we were unable to recover it. 00:27:25.297 [2024-07-24 18:22:18.301744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.297 [2024-07-24 18:22:18.301754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.297 qpair failed and we were unable to recover it. 00:27:25.297 [2024-07-24 18:22:18.301839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.297 [2024-07-24 18:22:18.301849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.297 qpair failed and we were unable to recover it. 00:27:25.297 [2024-07-24 18:22:18.302067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.297 [2024-07-24 18:22:18.302077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.297 qpair failed and we were unable to recover it. 00:27:25.297 [2024-07-24 18:22:18.302341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.297 [2024-07-24 18:22:18.302350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.297 qpair failed and we were unable to recover it. 00:27:25.297 [2024-07-24 18:22:18.302513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.297 [2024-07-24 18:22:18.302523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.297 qpair failed and we were unable to recover it. 00:27:25.297 [2024-07-24 18:22:18.302608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.297 [2024-07-24 18:22:18.302618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.297 qpair failed and we were unable to recover it. 00:27:25.297 [2024-07-24 18:22:18.302704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.297 [2024-07-24 18:22:18.302713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.297 qpair failed and we were unable to recover it. 00:27:25.297 [2024-07-24 18:22:18.302908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.297 [2024-07-24 18:22:18.302938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.297 qpair failed and we were unable to recover it. 00:27:25.297 [2024-07-24 18:22:18.303134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.297 [2024-07-24 18:22:18.303164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.297 qpair failed and we were unable to recover it. 
00:27:25.297 [2024-07-24 18:22:18.303303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.297 [2024-07-24 18:22:18.303332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.297 qpair failed and we were unable to recover it. 00:27:25.297 [2024-07-24 18:22:18.303522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.297 [2024-07-24 18:22:18.303553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.297 qpair failed and we were unable to recover it. 00:27:25.297 [2024-07-24 18:22:18.303769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.297 [2024-07-24 18:22:18.303800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.297 qpair failed and we were unable to recover it. 00:27:25.298 [2024-07-24 18:22:18.303989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.298 [2024-07-24 18:22:18.303999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.298 qpair failed and we were unable to recover it. 00:27:25.298 [2024-07-24 18:22:18.304154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.298 [2024-07-24 18:22:18.304163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.298 qpair failed and we were unable to recover it. 00:27:25.298 [2024-07-24 18:22:18.304335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.298 [2024-07-24 18:22:18.304344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.298 qpair failed and we were unable to recover it. 00:27:25.580 [2024-07-24 18:22:18.304444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.580 [2024-07-24 18:22:18.304454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.580 qpair failed and we were unable to recover it. 00:27:25.580 [2024-07-24 18:22:18.304595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.580 [2024-07-24 18:22:18.304605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.580 qpair failed and we were unable to recover it. 00:27:25.581 [2024-07-24 18:22:18.304747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.581 [2024-07-24 18:22:18.304757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.581 qpair failed and we were unable to recover it. 00:27:25.581 [2024-07-24 18:22:18.304848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.581 [2024-07-24 18:22:18.304859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.581 qpair failed and we were unable to recover it. 
00:27:25.581 [2024-07-24 18:22:18.305004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.581 [2024-07-24 18:22:18.305013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.581 qpair failed and we were unable to recover it. 00:27:25.581 [2024-07-24 18:22:18.305170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.581 [2024-07-24 18:22:18.305180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.581 qpair failed and we were unable to recover it. 00:27:25.581 [2024-07-24 18:22:18.305325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.581 [2024-07-24 18:22:18.305334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.581 qpair failed and we were unable to recover it. 00:27:25.581 [2024-07-24 18:22:18.305439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.581 [2024-07-24 18:22:18.305449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.581 qpair failed and we were unable to recover it. 00:27:25.581 [2024-07-24 18:22:18.305547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.581 [2024-07-24 18:22:18.305557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.581 qpair failed and we were unable to recover it. 00:27:25.581 [2024-07-24 18:22:18.305643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.581 [2024-07-24 18:22:18.305653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.581 qpair failed and we were unable to recover it. 00:27:25.581 [2024-07-24 18:22:18.305817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.581 [2024-07-24 18:22:18.305827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.581 qpair failed and we were unable to recover it. 00:27:25.581 [2024-07-24 18:22:18.305907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.581 [2024-07-24 18:22:18.305916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.581 qpair failed and we were unable to recover it. 00:27:25.581 [2024-07-24 18:22:18.306144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.581 [2024-07-24 18:22:18.306153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.581 qpair failed and we were unable to recover it. 00:27:25.581 [2024-07-24 18:22:18.306239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.581 [2024-07-24 18:22:18.306248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.581 qpair failed and we were unable to recover it. 
00:27:25.581 [2024-07-24 18:22:18.306336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.581 [2024-07-24 18:22:18.306345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.581 qpair failed and we were unable to recover it. 00:27:25.581 [2024-07-24 18:22:18.306446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.581 [2024-07-24 18:22:18.306456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.581 qpair failed and we were unable to recover it. 00:27:25.581 [2024-07-24 18:22:18.306599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.581 [2024-07-24 18:22:18.306609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.581 qpair failed and we were unable to recover it. 00:27:25.581 [2024-07-24 18:22:18.306747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.581 [2024-07-24 18:22:18.306757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.581 qpair failed and we were unable to recover it. 00:27:25.581 [2024-07-24 18:22:18.306839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.581 [2024-07-24 18:22:18.306849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.581 qpair failed and we were unable to recover it. 00:27:25.581 [2024-07-24 18:22:18.306992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.581 [2024-07-24 18:22:18.307002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.581 qpair failed and we were unable to recover it. 00:27:25.581 [2024-07-24 18:22:18.307258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.581 [2024-07-24 18:22:18.307268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.581 qpair failed and we were unable to recover it. 00:27:25.581 [2024-07-24 18:22:18.307420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.581 [2024-07-24 18:22:18.307430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.581 qpair failed and we were unable to recover it. 00:27:25.581 [2024-07-24 18:22:18.307585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.581 [2024-07-24 18:22:18.307595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.581 qpair failed and we were unable to recover it. 00:27:25.581 [2024-07-24 18:22:18.307750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.581 [2024-07-24 18:22:18.307761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.581 qpair failed and we were unable to recover it. 
00:27:25.581 [2024-07-24 18:22:18.307968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.581 [2024-07-24 18:22:18.307977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.581 qpair failed and we were unable to recover it. 00:27:25.581 [2024-07-24 18:22:18.308127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.581 [2024-07-24 18:22:18.308137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.581 qpair failed and we were unable to recover it. 00:27:25.581 [2024-07-24 18:22:18.308288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.581 [2024-07-24 18:22:18.308298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.581 qpair failed and we were unable to recover it. 00:27:25.581 [2024-07-24 18:22:18.308473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.581 [2024-07-24 18:22:18.308483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.581 qpair failed and we were unable to recover it. 00:27:25.581 [2024-07-24 18:22:18.308664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.581 [2024-07-24 18:22:18.308674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.581 qpair failed and we were unable to recover it. 00:27:25.581 [2024-07-24 18:22:18.308814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.581 [2024-07-24 18:22:18.308823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.581 qpair failed and we were unable to recover it. 00:27:25.581 [2024-07-24 18:22:18.308907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.581 [2024-07-24 18:22:18.308916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.581 qpair failed and we were unable to recover it. 00:27:25.581 [2024-07-24 18:22:18.309056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.581 [2024-07-24 18:22:18.309065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.581 qpair failed and we were unable to recover it. 00:27:25.581 [2024-07-24 18:22:18.309134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.581 [2024-07-24 18:22:18.309144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.581 qpair failed and we were unable to recover it. 00:27:25.581 [2024-07-24 18:22:18.309232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.581 [2024-07-24 18:22:18.309241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.581 qpair failed and we were unable to recover it. 
00:27:25.581 [2024-07-24 18:22:18.309424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.581 [2024-07-24 18:22:18.309434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.581 qpair failed and we were unable to recover it. 00:27:25.581 [2024-07-24 18:22:18.309532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.581 [2024-07-24 18:22:18.309542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.581 qpair failed and we were unable to recover it. 00:27:25.581 [2024-07-24 18:22:18.309641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.581 [2024-07-24 18:22:18.309651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.581 qpair failed and we were unable to recover it. 00:27:25.581 [2024-07-24 18:22:18.309751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.581 [2024-07-24 18:22:18.309761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.581 qpair failed and we were unable to recover it. 00:27:25.581 [2024-07-24 18:22:18.309854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.581 [2024-07-24 18:22:18.309864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.581 qpair failed and we were unable to recover it. 00:27:25.581 [2024-07-24 18:22:18.309951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.582 [2024-07-24 18:22:18.309961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.582 qpair failed and we were unable to recover it. 00:27:25.582 [2024-07-24 18:22:18.310124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.582 [2024-07-24 18:22:18.310154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.582 qpair failed and we were unable to recover it. 00:27:25.582 [2024-07-24 18:22:18.310379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.582 [2024-07-24 18:22:18.310409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.582 qpair failed and we were unable to recover it. 00:27:25.582 [2024-07-24 18:22:18.310520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.582 [2024-07-24 18:22:18.310551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.582 qpair failed and we were unable to recover it. 00:27:25.582 [2024-07-24 18:22:18.310776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.582 [2024-07-24 18:22:18.310807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.582 qpair failed and we were unable to recover it. 
00:27:25.582 [2024-07-24 18:22:18.311103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.582 [2024-07-24 18:22:18.311133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.582 qpair failed and we were unable to recover it. 00:27:25.582 [2024-07-24 18:22:18.311250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.582 [2024-07-24 18:22:18.311281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.582 qpair failed and we were unable to recover it. 00:27:25.582 [2024-07-24 18:22:18.311395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.582 [2024-07-24 18:22:18.311404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.582 qpair failed and we were unable to recover it. 00:27:25.582 [2024-07-24 18:22:18.311583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.582 [2024-07-24 18:22:18.311613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.582 qpair failed and we were unable to recover it. 00:27:25.582 [2024-07-24 18:22:18.311742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.582 [2024-07-24 18:22:18.311772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.582 qpair failed and we were unable to recover it. 00:27:25.582 [2024-07-24 18:22:18.311997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.582 [2024-07-24 18:22:18.312026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.582 qpair failed and we were unable to recover it. 00:27:25.582 [2024-07-24 18:22:18.312302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.582 [2024-07-24 18:22:18.312333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.582 qpair failed and we were unable to recover it. 00:27:25.582 [2024-07-24 18:22:18.312530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.582 [2024-07-24 18:22:18.312561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.582 qpair failed and we were unable to recover it. 00:27:25.582 [2024-07-24 18:22:18.312749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.582 [2024-07-24 18:22:18.312778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.582 qpair failed and we were unable to recover it. 00:27:25.582 [2024-07-24 18:22:18.313057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.582 [2024-07-24 18:22:18.313087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.582 qpair failed and we were unable to recover it. 
00:27:25.582 [2024-07-24 18:22:18.313216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.582 [2024-07-24 18:22:18.313246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.582 qpair failed and we were unable to recover it. 00:27:25.582 [2024-07-24 18:22:18.313447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.582 [2024-07-24 18:22:18.313477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.582 qpair failed and we were unable to recover it. 00:27:25.582 [2024-07-24 18:22:18.313633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.582 [2024-07-24 18:22:18.313664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.582 qpair failed and we were unable to recover it. 00:27:25.582 [2024-07-24 18:22:18.313846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.582 [2024-07-24 18:22:18.313876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.582 qpair failed and we were unable to recover it. 00:27:25.582 [2024-07-24 18:22:18.314056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.582 [2024-07-24 18:22:18.314087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.582 qpair failed and we were unable to recover it. 00:27:25.582 [2024-07-24 18:22:18.314271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.582 [2024-07-24 18:22:18.314301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.582 qpair failed and we were unable to recover it. 00:27:25.582 [2024-07-24 18:22:18.314429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.582 [2024-07-24 18:22:18.314471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.582 qpair failed and we were unable to recover it. 00:27:25.582 [2024-07-24 18:22:18.314674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.582 [2024-07-24 18:22:18.314685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.582 qpair failed and we were unable to recover it. 00:27:25.582 [2024-07-24 18:22:18.314788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.582 [2024-07-24 18:22:18.314797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.582 qpair failed and we were unable to recover it. 00:27:25.582 [2024-07-24 18:22:18.314938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.582 [2024-07-24 18:22:18.314950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.582 qpair failed and we were unable to recover it. 
00:27:25.582 [2024-07-24 18:22:18.315166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.582 [2024-07-24 18:22:18.315196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.582 qpair failed and we were unable to recover it. 00:27:25.582 [2024-07-24 18:22:18.315470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.582 [2024-07-24 18:22:18.315514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.582 qpair failed and we were unable to recover it. 00:27:25.582 [2024-07-24 18:22:18.315722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.582 [2024-07-24 18:22:18.315753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.582 qpair failed and we were unable to recover it. 00:27:25.582 [2024-07-24 18:22:18.315953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.582 [2024-07-24 18:22:18.315983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.582 qpair failed and we were unable to recover it. 00:27:25.582 [2024-07-24 18:22:18.316107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.582 [2024-07-24 18:22:18.316137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.582 qpair failed and we were unable to recover it. 00:27:25.582 [2024-07-24 18:22:18.316341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.582 [2024-07-24 18:22:18.316371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.582 qpair failed and we were unable to recover it. 00:27:25.582 [2024-07-24 18:22:18.316553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.582 [2024-07-24 18:22:18.316584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.582 qpair failed and we were unable to recover it. 00:27:25.582 [2024-07-24 18:22:18.316860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.582 [2024-07-24 18:22:18.316891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.582 qpair failed and we were unable to recover it. 00:27:25.582 [2024-07-24 18:22:18.317025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.582 [2024-07-24 18:22:18.317055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.582 qpair failed and we were unable to recover it. 00:27:25.582 [2024-07-24 18:22:18.317333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.582 [2024-07-24 18:22:18.317363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.582 qpair failed and we were unable to recover it. 
00:27:25.582 [2024-07-24 18:22:18.317568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.582 [2024-07-24 18:22:18.317578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.582 qpair failed and we were unable to recover it. 00:27:25.582 [2024-07-24 18:22:18.317744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.582 [2024-07-24 18:22:18.317774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.582 qpair failed and we were unable to recover it. 00:27:25.582 [2024-07-24 18:22:18.317993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.583 [2024-07-24 18:22:18.318022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.583 qpair failed and we were unable to recover it. 00:27:25.583 [2024-07-24 18:22:18.318249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.583 [2024-07-24 18:22:18.318259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.583 qpair failed and we were unable to recover it. 00:27:25.583 [2024-07-24 18:22:18.318527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.583 [2024-07-24 18:22:18.318557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.583 qpair failed and we were unable to recover it. 00:27:25.583 [2024-07-24 18:22:18.318702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.583 [2024-07-24 18:22:18.318732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.583 qpair failed and we were unable to recover it. 00:27:25.583 [2024-07-24 18:22:18.318979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.583 [2024-07-24 18:22:18.319009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.583 qpair failed and we were unable to recover it. 00:27:25.583 [2024-07-24 18:22:18.319204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.583 [2024-07-24 18:22:18.319233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.583 qpair failed and we were unable to recover it. 00:27:25.583 [2024-07-24 18:22:18.319435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.583 [2024-07-24 18:22:18.319444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.583 qpair failed and we were unable to recover it. 00:27:25.583 [2024-07-24 18:22:18.319601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.583 [2024-07-24 18:22:18.319632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.583 qpair failed and we were unable to recover it. 
00:27:25.583 [2024-07-24 18:22:18.319739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.583 [2024-07-24 18:22:18.319769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.583 qpair failed and we were unable to recover it.
00:27:25.583 [... identical error triplet repeats continuously from 18:22:18.319 through 18:22:18.363: connect() failed with errno = 111 (ECONNREFUSED), followed by a sock connection error for tqpair=0x7f783c000b90 (addr=10.0.0.2, port=4420) and "qpair failed and we were unable to recover it."; duplicate entries collapsed ...]
00:27:25.588 [2024-07-24 18:22:18.363049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.588 [2024-07-24 18:22:18.363058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.588 qpair failed and we were unable to recover it.
00:27:25.588 [2024-07-24 18:22:18.363203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.588 [2024-07-24 18:22:18.363233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.589 qpair failed and we were unable to recover it. 00:27:25.589 [2024-07-24 18:22:18.363413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.589 [2024-07-24 18:22:18.363443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.589 qpair failed and we were unable to recover it. 00:27:25.589 [2024-07-24 18:22:18.363669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.589 [2024-07-24 18:22:18.363700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.589 qpair failed and we were unable to recover it. 00:27:25.589 [2024-07-24 18:22:18.363873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.589 [2024-07-24 18:22:18.363883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.589 qpair failed and we were unable to recover it. 00:27:25.589 [2024-07-24 18:22:18.363970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.589 [2024-07-24 18:22:18.363980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.589 qpair failed and we were unable to recover it. 00:27:25.589 [2024-07-24 18:22:18.364176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.589 [2024-07-24 18:22:18.364205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.589 qpair failed and we were unable to recover it. 00:27:25.589 [2024-07-24 18:22:18.364339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.589 [2024-07-24 18:22:18.364348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.589 qpair failed and we were unable to recover it. 00:27:25.589 [2024-07-24 18:22:18.364498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.589 [2024-07-24 18:22:18.364525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.589 qpair failed and we were unable to recover it. 00:27:25.589 [2024-07-24 18:22:18.364710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.589 [2024-07-24 18:22:18.364739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.589 qpair failed and we were unable to recover it. 00:27:25.589 [2024-07-24 18:22:18.364940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.589 [2024-07-24 18:22:18.364975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.589 qpair failed and we were unable to recover it. 
00:27:25.589 [2024-07-24 18:22:18.365100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.589 [2024-07-24 18:22:18.365129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.589 qpair failed and we were unable to recover it. 00:27:25.589 [2024-07-24 18:22:18.365263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.589 [2024-07-24 18:22:18.365296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.589 qpair failed and we were unable to recover it. 00:27:25.589 [2024-07-24 18:22:18.365478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.589 [2024-07-24 18:22:18.365488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.589 qpair failed and we were unable to recover it. 00:27:25.589 [2024-07-24 18:22:18.365703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.589 [2024-07-24 18:22:18.365734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.589 qpair failed and we were unable to recover it. 00:27:25.589 [2024-07-24 18:22:18.365990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.589 [2024-07-24 18:22:18.366019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.589 qpair failed and we were unable to recover it. 00:27:25.589 [2024-07-24 18:22:18.366159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.589 [2024-07-24 18:22:18.366189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.589 qpair failed and we were unable to recover it. 00:27:25.589 [2024-07-24 18:22:18.366386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.589 [2024-07-24 18:22:18.366395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.589 qpair failed and we were unable to recover it. 00:27:25.589 [2024-07-24 18:22:18.366620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.589 [2024-07-24 18:22:18.366630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.589 qpair failed and we were unable to recover it. 00:27:25.589 [2024-07-24 18:22:18.366769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.589 [2024-07-24 18:22:18.366779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.589 qpair failed and we were unable to recover it. 00:27:25.589 [2024-07-24 18:22:18.366931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.589 [2024-07-24 18:22:18.366941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.589 qpair failed and we were unable to recover it. 
00:27:25.589 [2024-07-24 18:22:18.367188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.589 [2024-07-24 18:22:18.367218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.589 qpair failed and we were unable to recover it. 00:27:25.589 [2024-07-24 18:22:18.367348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.589 [2024-07-24 18:22:18.367378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.589 qpair failed and we were unable to recover it. 00:27:25.589 [2024-07-24 18:22:18.367579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.589 [2024-07-24 18:22:18.367609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.589 qpair failed and we were unable to recover it. 00:27:25.589 [2024-07-24 18:22:18.367835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.589 [2024-07-24 18:22:18.367865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.589 qpair failed and we were unable to recover it. 00:27:25.589 [2024-07-24 18:22:18.368066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.589 [2024-07-24 18:22:18.368095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.589 qpair failed and we were unable to recover it. 00:27:25.589 [2024-07-24 18:22:18.368290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.589 [2024-07-24 18:22:18.368319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.589 qpair failed and we were unable to recover it. 00:27:25.589 [2024-07-24 18:22:18.368509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.589 [2024-07-24 18:22:18.368539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.589 qpair failed and we were unable to recover it. 00:27:25.589 [2024-07-24 18:22:18.368755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.589 [2024-07-24 18:22:18.368785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.589 qpair failed and we were unable to recover it. 00:27:25.589 [2024-07-24 18:22:18.368990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.589 [2024-07-24 18:22:18.369021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.589 qpair failed and we were unable to recover it. 00:27:25.589 [2024-07-24 18:22:18.369196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.589 [2024-07-24 18:22:18.369226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.589 qpair failed and we were unable to recover it. 
00:27:25.589 [2024-07-24 18:22:18.369444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.589 [2024-07-24 18:22:18.369454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.589 qpair failed and we were unable to recover it. 00:27:25.589 [2024-07-24 18:22:18.369601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.589 [2024-07-24 18:22:18.369611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.589 qpair failed and we were unable to recover it. 00:27:25.589 [2024-07-24 18:22:18.369770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.589 [2024-07-24 18:22:18.369780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.589 qpair failed and we were unable to recover it. 00:27:25.589 [2024-07-24 18:22:18.369942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.589 [2024-07-24 18:22:18.369951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.589 qpair failed and we were unable to recover it. 00:27:25.589 [2024-07-24 18:22:18.370113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.589 [2024-07-24 18:22:18.370143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.589 qpair failed and we were unable to recover it. 00:27:25.589 [2024-07-24 18:22:18.370272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.589 [2024-07-24 18:22:18.370302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.589 qpair failed and we were unable to recover it. 00:27:25.589 [2024-07-24 18:22:18.370439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.589 [2024-07-24 18:22:18.370470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.589 qpair failed and we were unable to recover it. 00:27:25.589 [2024-07-24 18:22:18.370660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.589 [2024-07-24 18:22:18.370670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.589 qpair failed and we were unable to recover it. 00:27:25.589 [2024-07-24 18:22:18.370775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.590 [2024-07-24 18:22:18.370785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.590 qpair failed and we were unable to recover it. 00:27:25.590 [2024-07-24 18:22:18.370926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.590 [2024-07-24 18:22:18.370955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.590 qpair failed and we were unable to recover it. 
00:27:25.590 [2024-07-24 18:22:18.371153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.590 [2024-07-24 18:22:18.371183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.590 qpair failed and we were unable to recover it. 00:27:25.590 [2024-07-24 18:22:18.371364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.590 [2024-07-24 18:22:18.371394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.590 qpair failed and we were unable to recover it. 00:27:25.590 [2024-07-24 18:22:18.371579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.590 [2024-07-24 18:22:18.371589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.590 qpair failed and we were unable to recover it. 00:27:25.590 [2024-07-24 18:22:18.371676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.590 [2024-07-24 18:22:18.371702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.590 qpair failed and we were unable to recover it. 00:27:25.590 [2024-07-24 18:22:18.371888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.590 [2024-07-24 18:22:18.371918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.590 qpair failed and we were unable to recover it. 00:27:25.590 [2024-07-24 18:22:18.372054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.590 [2024-07-24 18:22:18.372083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.590 qpair failed and we were unable to recover it. 00:27:25.590 [2024-07-24 18:22:18.372351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.590 [2024-07-24 18:22:18.372361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.590 qpair failed and we were unable to recover it. 00:27:25.590 [2024-07-24 18:22:18.372510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.590 [2024-07-24 18:22:18.372520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.590 qpair failed and we were unable to recover it. 00:27:25.590 [2024-07-24 18:22:18.372620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.590 [2024-07-24 18:22:18.372630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.590 qpair failed and we were unable to recover it. 00:27:25.590 [2024-07-24 18:22:18.372783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.590 [2024-07-24 18:22:18.372793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.590 qpair failed and we were unable to recover it. 
00:27:25.590 [2024-07-24 18:22:18.373002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.590 [2024-07-24 18:22:18.373031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.590 qpair failed and we were unable to recover it. 00:27:25.590 [2024-07-24 18:22:18.373226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.590 [2024-07-24 18:22:18.373256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.590 qpair failed and we were unable to recover it. 00:27:25.590 [2024-07-24 18:22:18.373533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.590 [2024-07-24 18:22:18.373563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.590 qpair failed and we were unable to recover it. 00:27:25.590 [2024-07-24 18:22:18.373763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.590 [2024-07-24 18:22:18.373792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.590 qpair failed and we were unable to recover it. 00:27:25.590 [2024-07-24 18:22:18.373986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.590 [2024-07-24 18:22:18.374015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.590 qpair failed and we were unable to recover it. 00:27:25.590 [2024-07-24 18:22:18.374214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.590 [2024-07-24 18:22:18.374244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.590 qpair failed and we were unable to recover it. 00:27:25.590 [2024-07-24 18:22:18.374412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.590 [2024-07-24 18:22:18.374422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.590 qpair failed and we were unable to recover it. 00:27:25.590 [2024-07-24 18:22:18.374520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.590 [2024-07-24 18:22:18.374530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.590 qpair failed and we were unable to recover it. 00:27:25.590 [2024-07-24 18:22:18.374618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.590 [2024-07-24 18:22:18.374628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.590 qpair failed and we were unable to recover it. 00:27:25.590 [2024-07-24 18:22:18.374766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.590 [2024-07-24 18:22:18.374775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.590 qpair failed and we were unable to recover it. 
00:27:25.590 [2024-07-24 18:22:18.374977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.590 [2024-07-24 18:22:18.374987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.590 qpair failed and we were unable to recover it. 00:27:25.590 [2024-07-24 18:22:18.375159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.590 [2024-07-24 18:22:18.375169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.590 qpair failed and we were unable to recover it. 00:27:25.590 [2024-07-24 18:22:18.375342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.590 [2024-07-24 18:22:18.375378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.590 qpair failed and we were unable to recover it. 00:27:25.590 [2024-07-24 18:22:18.375571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.590 [2024-07-24 18:22:18.375602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.590 qpair failed and we were unable to recover it. 00:27:25.590 [2024-07-24 18:22:18.375808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.590 [2024-07-24 18:22:18.375838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.590 qpair failed and we were unable to recover it. 00:27:25.590 [2024-07-24 18:22:18.376028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.590 [2024-07-24 18:22:18.376057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.590 qpair failed and we were unable to recover it. 00:27:25.590 [2024-07-24 18:22:18.376339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.590 [2024-07-24 18:22:18.376369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.590 qpair failed and we were unable to recover it. 00:27:25.590 [2024-07-24 18:22:18.376613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.590 [2024-07-24 18:22:18.376623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.590 qpair failed and we were unable to recover it. 00:27:25.590 [2024-07-24 18:22:18.376797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.590 [2024-07-24 18:22:18.376806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.590 qpair failed and we were unable to recover it. 00:27:25.590 [2024-07-24 18:22:18.376960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.590 [2024-07-24 18:22:18.376969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.590 qpair failed and we were unable to recover it. 
00:27:25.591 [2024-07-24 18:22:18.377134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.591 [2024-07-24 18:22:18.377164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.591 qpair failed and we were unable to recover it. 00:27:25.591 [2024-07-24 18:22:18.377365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.591 [2024-07-24 18:22:18.377395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.591 qpair failed and we were unable to recover it. 00:27:25.591 [2024-07-24 18:22:18.377599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.591 [2024-07-24 18:22:18.377631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.591 qpair failed and we were unable to recover it. 00:27:25.591 [2024-07-24 18:22:18.377757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.591 [2024-07-24 18:22:18.377767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.591 qpair failed and we were unable to recover it. 00:27:25.591 [2024-07-24 18:22:18.377861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.591 [2024-07-24 18:22:18.377871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.591 qpair failed and we were unable to recover it. 00:27:25.591 [2024-07-24 18:22:18.377961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.591 [2024-07-24 18:22:18.377971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.591 qpair failed and we were unable to recover it. 00:27:25.591 [2024-07-24 18:22:18.378111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.591 [2024-07-24 18:22:18.378121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.591 qpair failed and we were unable to recover it. 00:27:25.591 [2024-07-24 18:22:18.378244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.591 [2024-07-24 18:22:18.378273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.591 qpair failed and we were unable to recover it. 00:27:25.591 [2024-07-24 18:22:18.378468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.591 [2024-07-24 18:22:18.378506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.591 qpair failed and we were unable to recover it. 00:27:25.591 [2024-07-24 18:22:18.378642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.591 [2024-07-24 18:22:18.378685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.591 qpair failed and we were unable to recover it. 
00:27:25.591 [2024-07-24 18:22:18.378891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.591 [2024-07-24 18:22:18.378901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.591 qpair failed and we were unable to recover it. 00:27:25.591 [2024-07-24 18:22:18.379034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.591 [2024-07-24 18:22:18.379044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.591 qpair failed and we were unable to recover it. 00:27:25.591 [2024-07-24 18:22:18.379217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.591 [2024-07-24 18:22:18.379226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.591 qpair failed and we were unable to recover it. 00:27:25.591 [2024-07-24 18:22:18.379443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.591 [2024-07-24 18:22:18.379453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.591 qpair failed and we were unable to recover it. 00:27:25.591 [2024-07-24 18:22:18.379661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.591 [2024-07-24 18:22:18.379671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.591 qpair failed and we were unable to recover it. 00:27:25.591 [2024-07-24 18:22:18.379942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.591 [2024-07-24 18:22:18.379973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.591 qpair failed and we were unable to recover it. 00:27:25.591 [2024-07-24 18:22:18.380107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.591 [2024-07-24 18:22:18.380137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.591 qpair failed and we were unable to recover it. 00:27:25.591 [2024-07-24 18:22:18.380310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.591 [2024-07-24 18:22:18.380340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.591 qpair failed and we were unable to recover it. 00:27:25.591 [2024-07-24 18:22:18.380453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.591 [2024-07-24 18:22:18.380483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.591 qpair failed and we were unable to recover it. 00:27:25.591 [2024-07-24 18:22:18.380699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.591 [2024-07-24 18:22:18.380734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.591 qpair failed and we were unable to recover it. 
00:27:25.591 [2024-07-24 18:22:18.380920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.591 [2024-07-24 18:22:18.380930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.591 qpair failed and we were unable to recover it. 00:27:25.591 [2024-07-24 18:22:18.381084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.591 [2024-07-24 18:22:18.381113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.591 qpair failed and we were unable to recover it. 00:27:25.591 [2024-07-24 18:22:18.381328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.591 [2024-07-24 18:22:18.381358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.591 qpair failed and we were unable to recover it. 00:27:25.591 [2024-07-24 18:22:18.381582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.591 [2024-07-24 18:22:18.381613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.591 qpair failed and we were unable to recover it. 00:27:25.591 [2024-07-24 18:22:18.381733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.591 [2024-07-24 18:22:18.381742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.591 qpair failed and we were unable to recover it. 00:27:25.591 [2024-07-24 18:22:18.381964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.591 [2024-07-24 18:22:18.381973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.591 qpair failed and we were unable to recover it. 00:27:25.591 [2024-07-24 18:22:18.382214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.591 [2024-07-24 18:22:18.382244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.591 qpair failed and we were unable to recover it. 00:27:25.591 [2024-07-24 18:22:18.382429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.591 [2024-07-24 18:22:18.382459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.591 qpair failed and we were unable to recover it. 00:27:25.591 [2024-07-24 18:22:18.382606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.591 [2024-07-24 18:22:18.382638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.591 qpair failed and we were unable to recover it. 00:27:25.591 [2024-07-24 18:22:18.382775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.591 [2024-07-24 18:22:18.382784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.591 qpair failed and we were unable to recover it. 
00:27:25.591 [2024-07-24 18:22:18.382934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.591 [2024-07-24 18:22:18.382944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.591 qpair failed and we were unable to recover it. 00:27:25.591 [2024-07-24 18:22:18.383147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.591 [2024-07-24 18:22:18.383157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.591 qpair failed and we were unable to recover it. 00:27:25.591 [2024-07-24 18:22:18.383305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.591 [2024-07-24 18:22:18.383315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.591 qpair failed and we were unable to recover it. 00:27:25.591 [2024-07-24 18:22:18.383500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.591 [2024-07-24 18:22:18.383511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.591 qpair failed and we were unable to recover it. 00:27:25.591 [2024-07-24 18:22:18.383616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.591 [2024-07-24 18:22:18.383625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.591 qpair failed and we were unable to recover it. 00:27:25.591 [2024-07-24 18:22:18.383853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.591 [2024-07-24 18:22:18.383863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.591 qpair failed and we were unable to recover it. 00:27:25.591 [2024-07-24 18:22:18.383954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.591 [2024-07-24 18:22:18.383963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.591 qpair failed and we were unable to recover it. 00:27:25.591 [2024-07-24 18:22:18.384036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.591 [2024-07-24 18:22:18.384045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.592 qpair failed and we were unable to recover it. 00:27:25.592 [2024-07-24 18:22:18.384185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.592 [2024-07-24 18:22:18.384215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.592 qpair failed and we were unable to recover it. 00:27:25.592 [2024-07-24 18:22:18.384419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.592 [2024-07-24 18:22:18.384448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.592 qpair failed and we were unable to recover it. 
00:27:25.592 [2024-07-24 18:22:18.384710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.592 [2024-07-24 18:22:18.384742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.592 qpair failed and we were unable to recover it. 00:27:25.592 [2024-07-24 18:22:18.384878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.592 [2024-07-24 18:22:18.384908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.592 qpair failed and we were unable to recover it. 00:27:25.592 [2024-07-24 18:22:18.385170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.592 [2024-07-24 18:22:18.385199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.592 qpair failed and we were unable to recover it. 00:27:25.592 [2024-07-24 18:22:18.385446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.592 [2024-07-24 18:22:18.385476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.592 qpair failed and we were unable to recover it. 00:27:25.592 [2024-07-24 18:22:18.385688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.592 [2024-07-24 18:22:18.385718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.592 qpair failed and we were unable to recover it. 00:27:25.592 [2024-07-24 18:22:18.385896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.592 [2024-07-24 18:22:18.385906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.592 qpair failed and we were unable to recover it. 00:27:25.592 [2024-07-24 18:22:18.386149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.592 [2024-07-24 18:22:18.386184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.592 qpair failed and we were unable to recover it. 00:27:25.592 [2024-07-24 18:22:18.386435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.592 [2024-07-24 18:22:18.386444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.592 qpair failed and we were unable to recover it. 00:27:25.592 [2024-07-24 18:22:18.386584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.592 [2024-07-24 18:22:18.386594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.592 qpair failed and we were unable to recover it. 00:27:25.592 [2024-07-24 18:22:18.386753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.592 [2024-07-24 18:22:18.386762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.592 qpair failed and we were unable to recover it. 
00:27:25.592 [2024-07-24 18:22:18.386917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.592 [2024-07-24 18:22:18.386927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.592 qpair failed and we were unable to recover it. 00:27:25.592 [2024-07-24 18:22:18.387107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.592 [2024-07-24 18:22:18.387117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.592 qpair failed and we were unable to recover it. 00:27:25.592 [2024-07-24 18:22:18.387326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.592 [2024-07-24 18:22:18.387355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.592 qpair failed and we were unable to recover it. 00:27:25.592 [2024-07-24 18:22:18.387602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.592 [2024-07-24 18:22:18.387634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.592 qpair failed and we were unable to recover it. 00:27:25.592 [2024-07-24 18:22:18.387860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.592 [2024-07-24 18:22:18.387890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.592 qpair failed and we were unable to recover it. 00:27:25.592 [2024-07-24 18:22:18.388018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.592 [2024-07-24 18:22:18.388047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.592 qpair failed and we were unable to recover it. 00:27:25.592 [2024-07-24 18:22:18.388246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.592 [2024-07-24 18:22:18.388275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.592 qpair failed and we were unable to recover it. 00:27:25.592 [2024-07-24 18:22:18.388457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.592 [2024-07-24 18:22:18.388487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.592 qpair failed and we were unable to recover it. 00:27:25.592 [2024-07-24 18:22:18.388694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.592 [2024-07-24 18:22:18.388724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.592 qpair failed and we were unable to recover it. 00:27:25.592 [2024-07-24 18:22:18.388917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.592 [2024-07-24 18:22:18.388927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.592 qpair failed and we were unable to recover it. 
00:27:25.592 [2024-07-24 18:22:18.389081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.592 [2024-07-24 18:22:18.389111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.592 qpair failed and we were unable to recover it. 00:27:25.592 [2024-07-24 18:22:18.389304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.592 [2024-07-24 18:22:18.389334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.592 qpair failed and we were unable to recover it. 00:27:25.592 [2024-07-24 18:22:18.389466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.592 [2024-07-24 18:22:18.389504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.592 qpair failed and we were unable to recover it. 00:27:25.592 [2024-07-24 18:22:18.389709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.592 [2024-07-24 18:22:18.389739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.592 qpair failed and we were unable to recover it. 00:27:25.592 [2024-07-24 18:22:18.389990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.592 [2024-07-24 18:22:18.390020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.592 qpair failed and we were unable to recover it. 00:27:25.592 [2024-07-24 18:22:18.390212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.592 [2024-07-24 18:22:18.390242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.592 qpair failed and we were unable to recover it. 00:27:25.592 [2024-07-24 18:22:18.390488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.592 [2024-07-24 18:22:18.390502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.592 qpair failed and we were unable to recover it. 00:27:25.592 [2024-07-24 18:22:18.390672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.592 [2024-07-24 18:22:18.390682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.592 qpair failed and we were unable to recover it. 00:27:25.592 [2024-07-24 18:22:18.390852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.592 [2024-07-24 18:22:18.390882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.592 qpair failed and we were unable to recover it. 00:27:25.592 [2024-07-24 18:22:18.391011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.592 [2024-07-24 18:22:18.391040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.592 qpair failed and we were unable to recover it. 
00:27:25.598 [2024-07-24 18:22:18.429979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.598 [2024-07-24 18:22:18.430009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.598 qpair failed and we were unable to recover it. 00:27:25.598 [2024-07-24 18:22:18.430208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.598 [2024-07-24 18:22:18.430238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.598 qpair failed and we were unable to recover it. 00:27:25.598 [2024-07-24 18:22:18.430454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.598 [2024-07-24 18:22:18.430484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.598 qpair failed and we were unable to recover it. 00:27:25.598 [2024-07-24 18:22:18.430792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.598 [2024-07-24 18:22:18.430822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.598 qpair failed and we were unable to recover it. 00:27:25.598 [2024-07-24 18:22:18.431078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.598 [2024-07-24 18:22:18.431108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.598 qpair failed and we were unable to recover it. 00:27:25.598 [2024-07-24 18:22:18.431384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.598 [2024-07-24 18:22:18.431414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.598 qpair failed and we were unable to recover it. 00:27:25.598 [2024-07-24 18:22:18.431610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.598 [2024-07-24 18:22:18.431641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.598 qpair failed and we were unable to recover it. 00:27:25.598 [2024-07-24 18:22:18.431843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.598 [2024-07-24 18:22:18.431873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.598 qpair failed and we were unable to recover it. 00:27:25.598 [2024-07-24 18:22:18.432079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.598 [2024-07-24 18:22:18.432108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.598 qpair failed and we were unable to recover it. 00:27:25.598 [2024-07-24 18:22:18.432233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.598 [2024-07-24 18:22:18.432263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.598 qpair failed and we were unable to recover it. 
00:27:25.598 [2024-07-24 18:22:18.432535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.598 [2024-07-24 18:22:18.432565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.598 qpair failed and we were unable to recover it. 00:27:25.598 [2024-07-24 18:22:18.432750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.598 [2024-07-24 18:22:18.432780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.598 qpair failed and we were unable to recover it. 00:27:25.598 [2024-07-24 18:22:18.432985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.598 [2024-07-24 18:22:18.433015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.598 qpair failed and we were unable to recover it. 00:27:25.598 [2024-07-24 18:22:18.433126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.598 [2024-07-24 18:22:18.433155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.598 qpair failed and we were unable to recover it. 00:27:25.598 [2024-07-24 18:22:18.433288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.598 [2024-07-24 18:22:18.433317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.598 qpair failed and we were unable to recover it. 00:27:25.598 [2024-07-24 18:22:18.433419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.598 [2024-07-24 18:22:18.433429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.598 qpair failed and we were unable to recover it. 00:27:25.598 [2024-07-24 18:22:18.433505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.598 [2024-07-24 18:22:18.433515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.598 qpair failed and we were unable to recover it. 00:27:25.598 [2024-07-24 18:22:18.433656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.598 [2024-07-24 18:22:18.433667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.598 qpair failed and we were unable to recover it. 00:27:25.598 [2024-07-24 18:22:18.433818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.598 [2024-07-24 18:22:18.433828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.598 qpair failed and we were unable to recover it. 00:27:25.598 [2024-07-24 18:22:18.433958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.598 [2024-07-24 18:22:18.433988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.598 qpair failed and we were unable to recover it. 
00:27:25.598 [2024-07-24 18:22:18.434262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.598 [2024-07-24 18:22:18.434292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.598 qpair failed and we were unable to recover it. 00:27:25.598 [2024-07-24 18:22:18.434424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.598 [2024-07-24 18:22:18.434453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.598 qpair failed and we were unable to recover it. 00:27:25.598 [2024-07-24 18:22:18.434606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.598 [2024-07-24 18:22:18.434638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.598 qpair failed and we were unable to recover it. 00:27:25.598 [2024-07-24 18:22:18.434752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.598 [2024-07-24 18:22:18.434762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.598 qpair failed and we were unable to recover it. 00:27:25.598 [2024-07-24 18:22:18.434905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.598 [2024-07-24 18:22:18.434914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.598 qpair failed and we were unable to recover it. 00:27:25.598 [2024-07-24 18:22:18.435146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.598 [2024-07-24 18:22:18.435156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.598 qpair failed and we were unable to recover it. 00:27:25.598 [2024-07-24 18:22:18.435236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.598 [2024-07-24 18:22:18.435247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.598 qpair failed and we were unable to recover it. 00:27:25.598 [2024-07-24 18:22:18.435404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.598 [2024-07-24 18:22:18.435433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.598 qpair failed and we were unable to recover it. 00:27:25.598 [2024-07-24 18:22:18.435628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.598 [2024-07-24 18:22:18.435659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.598 qpair failed and we were unable to recover it. 00:27:25.598 [2024-07-24 18:22:18.435803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.598 [2024-07-24 18:22:18.435833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.598 qpair failed and we were unable to recover it. 
00:27:25.598 [2024-07-24 18:22:18.436033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.598 [2024-07-24 18:22:18.436063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.598 qpair failed and we were unable to recover it. 00:27:25.598 [2024-07-24 18:22:18.436244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.598 [2024-07-24 18:22:18.436274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.598 qpair failed and we were unable to recover it. 00:27:25.598 [2024-07-24 18:22:18.436387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.598 [2024-07-24 18:22:18.436416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.598 qpair failed and we were unable to recover it. 00:27:25.598 [2024-07-24 18:22:18.436682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.598 [2024-07-24 18:22:18.436692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.598 qpair failed and we were unable to recover it. 00:27:25.598 [2024-07-24 18:22:18.436773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.598 [2024-07-24 18:22:18.436803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.598 qpair failed and we were unable to recover it. 00:27:25.598 [2024-07-24 18:22:18.437034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.598 [2024-07-24 18:22:18.437064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.598 qpair failed and we were unable to recover it. 00:27:25.598 [2024-07-24 18:22:18.437332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.599 [2024-07-24 18:22:18.437362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.599 qpair failed and we were unable to recover it. 00:27:25.599 [2024-07-24 18:22:18.437546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.599 [2024-07-24 18:22:18.437577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.599 qpair failed and we were unable to recover it. 00:27:25.599 [2024-07-24 18:22:18.437853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.599 [2024-07-24 18:22:18.437883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.599 qpair failed and we were unable to recover it. 00:27:25.599 [2024-07-24 18:22:18.438080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.599 [2024-07-24 18:22:18.438110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.599 qpair failed and we were unable to recover it. 
00:27:25.599 [2024-07-24 18:22:18.438372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.599 [2024-07-24 18:22:18.438410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.599 qpair failed and we were unable to recover it. 00:27:25.599 [2024-07-24 18:22:18.438551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.599 [2024-07-24 18:22:18.438561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.599 qpair failed and we were unable to recover it. 00:27:25.599 [2024-07-24 18:22:18.438645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.599 [2024-07-24 18:22:18.438655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.599 qpair failed and we were unable to recover it. 00:27:25.599 [2024-07-24 18:22:18.438927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.599 [2024-07-24 18:22:18.438956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.599 qpair failed and we were unable to recover it. 00:27:25.599 [2024-07-24 18:22:18.439159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.599 [2024-07-24 18:22:18.439189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.599 qpair failed and we were unable to recover it. 00:27:25.599 [2024-07-24 18:22:18.439380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.599 [2024-07-24 18:22:18.439410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.599 qpair failed and we were unable to recover it. 00:27:25.599 [2024-07-24 18:22:18.439662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.599 [2024-07-24 18:22:18.439692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.599 qpair failed and we were unable to recover it. 00:27:25.599 [2024-07-24 18:22:18.439869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.599 [2024-07-24 18:22:18.439879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.599 qpair failed and we were unable to recover it. 00:27:25.599 [2024-07-24 18:22:18.439998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.599 [2024-07-24 18:22:18.440028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.599 qpair failed and we were unable to recover it. 00:27:25.599 [2024-07-24 18:22:18.440281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.599 [2024-07-24 18:22:18.440311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.599 qpair failed and we were unable to recover it. 
00:27:25.599 [2024-07-24 18:22:18.440449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.599 [2024-07-24 18:22:18.440479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.599 qpair failed and we were unable to recover it. 00:27:25.599 [2024-07-24 18:22:18.440762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.599 [2024-07-24 18:22:18.440792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.599 qpair failed and we were unable to recover it. 00:27:25.599 [2024-07-24 18:22:18.441043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.599 [2024-07-24 18:22:18.441072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.599 qpair failed and we were unable to recover it. 00:27:25.599 [2024-07-24 18:22:18.441205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.599 [2024-07-24 18:22:18.441235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.599 qpair failed and we were unable to recover it. 00:27:25.599 [2024-07-24 18:22:18.441378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.599 [2024-07-24 18:22:18.441407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.599 qpair failed and we were unable to recover it. 00:27:25.599 [2024-07-24 18:22:18.441603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.599 [2024-07-24 18:22:18.441634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.599 qpair failed and we were unable to recover it. 00:27:25.599 [2024-07-24 18:22:18.441815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.599 [2024-07-24 18:22:18.441845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.599 qpair failed and we were unable to recover it. 00:27:25.599 [2024-07-24 18:22:18.441985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.599 [2024-07-24 18:22:18.442015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.599 qpair failed and we were unable to recover it. 00:27:25.599 [2024-07-24 18:22:18.442151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.599 [2024-07-24 18:22:18.442180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.599 qpair failed and we were unable to recover it. 00:27:25.599 [2024-07-24 18:22:18.442381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.599 [2024-07-24 18:22:18.442411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.599 qpair failed and we were unable to recover it. 
00:27:25.599 [2024-07-24 18:22:18.442528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.599 [2024-07-24 18:22:18.442538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.599 qpair failed and we were unable to recover it. 00:27:25.599 [2024-07-24 18:22:18.442679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.599 [2024-07-24 18:22:18.442689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.599 qpair failed and we were unable to recover it. 00:27:25.599 [2024-07-24 18:22:18.442878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.599 [2024-07-24 18:22:18.442908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.599 qpair failed and we were unable to recover it. 00:27:25.599 [2024-07-24 18:22:18.443105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.599 [2024-07-24 18:22:18.443135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.599 qpair failed and we were unable to recover it. 00:27:25.599 [2024-07-24 18:22:18.443332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.599 [2024-07-24 18:22:18.443362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.599 qpair failed and we were unable to recover it. 00:27:25.599 [2024-07-24 18:22:18.443515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.599 [2024-07-24 18:22:18.443524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.599 qpair failed and we were unable to recover it. 00:27:25.599 [2024-07-24 18:22:18.443627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.599 [2024-07-24 18:22:18.443639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.599 qpair failed and we were unable to recover it. 00:27:25.599 [2024-07-24 18:22:18.443738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.599 [2024-07-24 18:22:18.443768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.599 qpair failed and we were unable to recover it. 00:27:25.600 [2024-07-24 18:22:18.443896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.600 [2024-07-24 18:22:18.443926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.600 qpair failed and we were unable to recover it. 00:27:25.600 [2024-07-24 18:22:18.444119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.600 [2024-07-24 18:22:18.444148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.600 qpair failed and we were unable to recover it. 
00:27:25.600 [2024-07-24 18:22:18.444333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.600 [2024-07-24 18:22:18.444363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.600 qpair failed and we were unable to recover it. 00:27:25.600 [2024-07-24 18:22:18.444477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.600 [2024-07-24 18:22:18.444515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.600 qpair failed and we were unable to recover it. 00:27:25.600 [2024-07-24 18:22:18.444716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.600 [2024-07-24 18:22:18.444746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.600 qpair failed and we were unable to recover it. 00:27:25.600 [2024-07-24 18:22:18.445009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.600 [2024-07-24 18:22:18.445038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.600 qpair failed and we were unable to recover it. 00:27:25.600 [2024-07-24 18:22:18.445221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.600 [2024-07-24 18:22:18.445254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.600 qpair failed and we were unable to recover it. 00:27:25.600 [2024-07-24 18:22:18.445382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.600 [2024-07-24 18:22:18.445412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.600 qpair failed and we were unable to recover it. 00:27:25.600 [2024-07-24 18:22:18.445611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.600 [2024-07-24 18:22:18.445643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.600 qpair failed and we were unable to recover it. 00:27:25.600 [2024-07-24 18:22:18.445917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.600 [2024-07-24 18:22:18.445947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.600 qpair failed and we were unable to recover it. 00:27:25.600 [2024-07-24 18:22:18.446097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.600 [2024-07-24 18:22:18.446127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.600 qpair failed and we were unable to recover it. 00:27:25.600 [2024-07-24 18:22:18.446399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.600 [2024-07-24 18:22:18.446429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.600 qpair failed and we were unable to recover it. 
00:27:25.600 [2024-07-24 18:22:18.446652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.600 [2024-07-24 18:22:18.446662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.600 qpair failed and we were unable to recover it. 00:27:25.600 [2024-07-24 18:22:18.446764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.600 [2024-07-24 18:22:18.446784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.600 qpair failed and we were unable to recover it. 00:27:25.600 [2024-07-24 18:22:18.446868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.600 [2024-07-24 18:22:18.446878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.600 qpair failed and we were unable to recover it. 00:27:25.600 [2024-07-24 18:22:18.447044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.600 [2024-07-24 18:22:18.447054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.600 qpair failed and we were unable to recover it. 00:27:25.600 [2024-07-24 18:22:18.447218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.600 [2024-07-24 18:22:18.447228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.600 qpair failed and we were unable to recover it. 00:27:25.600 [2024-07-24 18:22:18.447313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.600 [2024-07-24 18:22:18.447323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.600 qpair failed and we were unable to recover it. 00:27:25.600 [2024-07-24 18:22:18.447473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.600 [2024-07-24 18:22:18.447482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.600 qpair failed and we were unable to recover it. 00:27:25.600 [2024-07-24 18:22:18.447587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.600 [2024-07-24 18:22:18.447597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.600 qpair failed and we were unable to recover it. 00:27:25.600 [2024-07-24 18:22:18.447702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.600 [2024-07-24 18:22:18.447712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.600 qpair failed and we were unable to recover it. 00:27:25.600 [2024-07-24 18:22:18.447801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.600 [2024-07-24 18:22:18.447811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.600 qpair failed and we were unable to recover it. 
00:27:25.600 [2024-07-24 18:22:18.447959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.600 [2024-07-24 18:22:18.447991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.600 qpair failed and we were unable to recover it. 00:27:25.600 [2024-07-24 18:22:18.448286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.600 [2024-07-24 18:22:18.448315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.600 qpair failed and we were unable to recover it. 00:27:25.600 [2024-07-24 18:22:18.448511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.600 [2024-07-24 18:22:18.448543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.600 qpair failed and we were unable to recover it. 00:27:25.600 [2024-07-24 18:22:18.448767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.600 [2024-07-24 18:22:18.448797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.600 qpair failed and we were unable to recover it. 00:27:25.600 [2024-07-24 18:22:18.449043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.600 [2024-07-24 18:22:18.449073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.600 qpair failed and we were unable to recover it. 00:27:25.600 [2024-07-24 18:22:18.449351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.600 [2024-07-24 18:22:18.449381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.600 qpair failed and we were unable to recover it. 00:27:25.600 [2024-07-24 18:22:18.449651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.600 [2024-07-24 18:22:18.449661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.600 qpair failed and we were unable to recover it. 00:27:25.600 [2024-07-24 18:22:18.449815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.600 [2024-07-24 18:22:18.449825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.600 qpair failed and we were unable to recover it. 00:27:25.600 [2024-07-24 18:22:18.449896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.600 [2024-07-24 18:22:18.449905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.600 qpair failed and we were unable to recover it. 00:27:25.600 [2024-07-24 18:22:18.450058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.600 [2024-07-24 18:22:18.450067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.600 qpair failed and we were unable to recover it. 
00:27:25.600 [2024-07-24 18:22:18.450148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.600 [2024-07-24 18:22:18.450158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.600 qpair failed and we were unable to recover it. 00:27:25.600 [2024-07-24 18:22:18.450205] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ebff0 (9): Bad file descriptor 00:27:25.600 [2024-07-24 18:22:18.450541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.600 [2024-07-24 18:22:18.450611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.600 qpair failed and we were unable to recover it. 00:27:25.600 [2024-07-24 18:22:18.450911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.600 [2024-07-24 18:22:18.450943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.600 qpair failed and we were unable to recover it. 00:27:25.600 [2024-07-24 18:22:18.451094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.600 [2024-07-24 18:22:18.451124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.600 qpair failed and we were unable to recover it. 00:27:25.600 [2024-07-24 18:22:18.451278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.600 [2024-07-24 18:22:18.451308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.600 qpair failed and we were unable to recover it. 00:27:25.600 [2024-07-24 18:22:18.451517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.601 [2024-07-24 18:22:18.451550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.601 qpair failed and we were unable to recover it. 00:27:25.601 [2024-07-24 18:22:18.451813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.601 [2024-07-24 18:22:18.451843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.601 qpair failed and we were unable to recover it. 00:27:25.601 [2024-07-24 18:22:18.452091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.601 [2024-07-24 18:22:18.452121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.601 qpair failed and we were unable to recover it. 00:27:25.601 [2024-07-24 18:22:18.452313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.601 [2024-07-24 18:22:18.452343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.601 qpair failed and we were unable to recover it. 
00:27:25.601 [2024-07-24 18:22:18.452486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.601 [2024-07-24 18:22:18.452507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.601 qpair failed and we were unable to recover it. 00:27:25.601 [2024-07-24 18:22:18.452765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.601 [2024-07-24 18:22:18.452795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.601 qpair failed and we were unable to recover it. 00:27:25.601 [2024-07-24 18:22:18.453062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.601 [2024-07-24 18:22:18.453092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.601 qpair failed and we were unable to recover it. 00:27:25.601 [2024-07-24 18:22:18.453292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.601 [2024-07-24 18:22:18.453322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.601 qpair failed and we were unable to recover it. 00:27:25.601 [2024-07-24 18:22:18.453539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.601 [2024-07-24 18:22:18.453555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.601 qpair failed and we were unable to recover it. 00:27:25.601 [2024-07-24 18:22:18.453659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.601 [2024-07-24 18:22:18.453674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.601 qpair failed and we were unable to recover it. 00:27:25.601 [2024-07-24 18:22:18.453853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.601 [2024-07-24 18:22:18.453883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.601 qpair failed and we were unable to recover it. 00:27:25.601 [2024-07-24 18:22:18.454103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.601 [2024-07-24 18:22:18.454133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.601 qpair failed and we were unable to recover it. 00:27:25.601 [2024-07-24 18:22:18.454254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.601 [2024-07-24 18:22:18.454284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.601 qpair failed and we were unable to recover it. 00:27:25.601 [2024-07-24 18:22:18.454479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.601 [2024-07-24 18:22:18.454499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.601 qpair failed and we were unable to recover it. 
00:27:25.601 [2024-07-24 18:22:18.454662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.601 [2024-07-24 18:22:18.454699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.601 qpair failed and we were unable to recover it. 00:27:25.601 [2024-07-24 18:22:18.454901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.601 [2024-07-24 18:22:18.454931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.601 qpair failed and we were unable to recover it. 00:27:25.601 [2024-07-24 18:22:18.455060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.601 [2024-07-24 18:22:18.455090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.601 qpair failed and we were unable to recover it. 00:27:25.601 [2024-07-24 18:22:18.455353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.601 [2024-07-24 18:22:18.455383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.601 qpair failed and we were unable to recover it. 00:27:25.601 [2024-07-24 18:22:18.455534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.601 [2024-07-24 18:22:18.455566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.601 qpair failed and we were unable to recover it. 00:27:25.601 [2024-07-24 18:22:18.455822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.601 [2024-07-24 18:22:18.455851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.601 qpair failed and we were unable to recover it. 00:27:25.601 [2024-07-24 18:22:18.456106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.601 [2024-07-24 18:22:18.456120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.601 qpair failed and we were unable to recover it. 00:27:25.601 [2024-07-24 18:22:18.456346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.601 [2024-07-24 18:22:18.456361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.601 qpair failed and we were unable to recover it. 00:27:25.601 [2024-07-24 18:22:18.456545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.601 [2024-07-24 18:22:18.456557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.601 qpair failed and we were unable to recover it. 00:27:25.601 [2024-07-24 18:22:18.456706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.601 [2024-07-24 18:22:18.456737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.601 qpair failed and we were unable to recover it. 
00:27:25.601 [2024-07-24 18:22:18.456988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.601 [2024-07-24 18:22:18.457017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.601 qpair failed and we were unable to recover it. 00:27:25.601 [2024-07-24 18:22:18.457266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.601 [2024-07-24 18:22:18.457296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.601 qpair failed and we were unable to recover it. 00:27:25.601 [2024-07-24 18:22:18.457437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.601 [2024-07-24 18:22:18.457468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.601 qpair failed and we were unable to recover it. 00:27:25.601 [2024-07-24 18:22:18.457751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.601 [2024-07-24 18:22:18.457761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.601 qpair failed and we were unable to recover it. 00:27:25.601 [2024-07-24 18:22:18.457995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.601 [2024-07-24 18:22:18.458025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.601 qpair failed and we were unable to recover it. 00:27:25.601 [2024-07-24 18:22:18.458242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.601 [2024-07-24 18:22:18.458272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.601 qpair failed and we were unable to recover it. 00:27:25.601 [2024-07-24 18:22:18.458480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.601 [2024-07-24 18:22:18.458519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.601 qpair failed and we were unable to recover it. 00:27:25.601 [2024-07-24 18:22:18.458658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.601 [2024-07-24 18:22:18.458668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.601 qpair failed and we were unable to recover it. 00:27:25.601 [2024-07-24 18:22:18.458762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.601 [2024-07-24 18:22:18.458771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.601 qpair failed and we were unable to recover it. 00:27:25.601 [2024-07-24 18:22:18.458928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.601 [2024-07-24 18:22:18.458957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.601 qpair failed and we were unable to recover it. 
00:27:25.601 [2024-07-24 18:22:18.459156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.601 [2024-07-24 18:22:18.459185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.601 qpair failed and we were unable to recover it. 00:27:25.601 [2024-07-24 18:22:18.459316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.601 [2024-07-24 18:22:18.459346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.601 qpair failed and we were unable to recover it. 00:27:25.601 [2024-07-24 18:22:18.459541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.601 [2024-07-24 18:22:18.459572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.601 qpair failed and we were unable to recover it. 00:27:25.601 [2024-07-24 18:22:18.459713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.601 [2024-07-24 18:22:18.459743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.601 qpair failed and we were unable to recover it. 00:27:25.602 [2024-07-24 18:22:18.459952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.602 [2024-07-24 18:22:18.459982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.602 qpair failed and we were unable to recover it. 00:27:25.602 [2024-07-24 18:22:18.460178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.602 [2024-07-24 18:22:18.460207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.602 qpair failed and we were unable to recover it. 00:27:25.602 [2024-07-24 18:22:18.460392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.602 [2024-07-24 18:22:18.460422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.602 qpair failed and we were unable to recover it. 00:27:25.602 [2024-07-24 18:22:18.460605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.602 [2024-07-24 18:22:18.460615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.602 qpair failed and we were unable to recover it. 00:27:25.602 [2024-07-24 18:22:18.460803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.602 [2024-07-24 18:22:18.460833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.602 qpair failed and we were unable to recover it. 00:27:25.602 [2024-07-24 18:22:18.461079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.602 [2024-07-24 18:22:18.461109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.602 qpair failed and we were unable to recover it. 
00:27:25.602 [2024-07-24 18:22:18.461307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.602 [2024-07-24 18:22:18.461336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.602 qpair failed and we were unable to recover it. 00:27:25.602 [2024-07-24 18:22:18.461588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.602 [2024-07-24 18:22:18.461620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.602 qpair failed and we were unable to recover it. 00:27:25.602 [2024-07-24 18:22:18.461806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.602 [2024-07-24 18:22:18.461836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.602 qpair failed and we were unable to recover it. 00:27:25.602 [2024-07-24 18:22:18.462031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.602 [2024-07-24 18:22:18.462061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.602 qpair failed and we were unable to recover it. 00:27:25.602 [2024-07-24 18:22:18.462251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.602 [2024-07-24 18:22:18.462281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.602 qpair failed and we were unable to recover it. 00:27:25.602 [2024-07-24 18:22:18.462464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.602 [2024-07-24 18:22:18.462504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.602 qpair failed and we were unable to recover it. 00:27:25.602 [2024-07-24 18:22:18.462751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.602 [2024-07-24 18:22:18.462761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.602 qpair failed and we were unable to recover it. 00:27:25.602 [2024-07-24 18:22:18.462913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.602 [2024-07-24 18:22:18.462923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.602 qpair failed and we were unable to recover it. 00:27:25.602 [2024-07-24 18:22:18.463078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.602 [2024-07-24 18:22:18.463088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.602 qpair failed and we were unable to recover it. 00:27:25.602 [2024-07-24 18:22:18.463275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.602 [2024-07-24 18:22:18.463305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.602 qpair failed and we were unable to recover it. 
00:27:25.602 [2024-07-24 18:22:18.463487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.602 [2024-07-24 18:22:18.463534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.602 qpair failed and we were unable to recover it. 00:27:25.602 [2024-07-24 18:22:18.463680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.602 [2024-07-24 18:22:18.463709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.602 qpair failed and we were unable to recover it. 00:27:25.602 [2024-07-24 18:22:18.463915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.602 [2024-07-24 18:22:18.463945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.602 qpair failed and we were unable to recover it. 00:27:25.602 [2024-07-24 18:22:18.464141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.602 [2024-07-24 18:22:18.464171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.602 qpair failed and we were unable to recover it. 00:27:25.602 [2024-07-24 18:22:18.464426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.602 [2024-07-24 18:22:18.464456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.602 qpair failed and we were unable to recover it. 00:27:25.602 [2024-07-24 18:22:18.464656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.602 [2024-07-24 18:22:18.464687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.602 qpair failed and we were unable to recover it. 00:27:25.602 [2024-07-24 18:22:18.464891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.602 [2024-07-24 18:22:18.464921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.602 qpair failed and we were unable to recover it. 00:27:25.602 [2024-07-24 18:22:18.465055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.602 [2024-07-24 18:22:18.465085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.602 qpair failed and we were unable to recover it. 00:27:25.602 [2024-07-24 18:22:18.465275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.602 [2024-07-24 18:22:18.465304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.602 qpair failed and we were unable to recover it. 00:27:25.602 [2024-07-24 18:22:18.465449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.602 [2024-07-24 18:22:18.465458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.602 qpair failed and we were unable to recover it. 
00:27:25.602 [2024-07-24 18:22:18.465710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.602 [2024-07-24 18:22:18.465720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.602 qpair failed and we were unable to recover it. 00:27:25.602 [2024-07-24 18:22:18.465901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.602 [2024-07-24 18:22:18.465911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.602 qpair failed and we were unable to recover it. 00:27:25.602 [2024-07-24 18:22:18.466143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.602 [2024-07-24 18:22:18.466172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.602 qpair failed and we were unable to recover it. 00:27:25.602 [2024-07-24 18:22:18.466365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.602 [2024-07-24 18:22:18.466395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.602 qpair failed and we were unable to recover it. 00:27:25.602 [2024-07-24 18:22:18.466598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.602 [2024-07-24 18:22:18.466608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.602 qpair failed and we were unable to recover it. 00:27:25.602 [2024-07-24 18:22:18.466754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.602 [2024-07-24 18:22:18.466764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.602 qpair failed and we were unable to recover it. 00:27:25.602 [2024-07-24 18:22:18.466949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.602 [2024-07-24 18:22:18.466979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.602 qpair failed and we were unable to recover it. 00:27:25.602 [2024-07-24 18:22:18.467246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.602 [2024-07-24 18:22:18.467276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.602 qpair failed and we were unable to recover it. 00:27:25.602 [2024-07-24 18:22:18.467546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.602 [2024-07-24 18:22:18.467578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.602 qpair failed and we were unable to recover it. 00:27:25.602 [2024-07-24 18:22:18.467858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.602 [2024-07-24 18:22:18.467888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.602 qpair failed and we were unable to recover it. 
00:27:25.602 [2024-07-24 18:22:18.468184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.602 [2024-07-24 18:22:18.468214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.602 qpair failed and we were unable to recover it. 00:27:25.602 [2024-07-24 18:22:18.468403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.602 [2024-07-24 18:22:18.468432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.602 qpair failed and we were unable to recover it. 00:27:25.603 [2024-07-24 18:22:18.468635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.603 [2024-07-24 18:22:18.468666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.603 qpair failed and we were unable to recover it. 00:27:25.603 [2024-07-24 18:22:18.468871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.603 [2024-07-24 18:22:18.468901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.603 qpair failed and we were unable to recover it. 00:27:25.603 [2024-07-24 18:22:18.469102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.603 [2024-07-24 18:22:18.469132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.603 qpair failed and we were unable to recover it. 00:27:25.603 [2024-07-24 18:22:18.469314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.603 [2024-07-24 18:22:18.469343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.603 qpair failed and we were unable to recover it. 00:27:25.603 [2024-07-24 18:22:18.469528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.603 [2024-07-24 18:22:18.469559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.603 qpair failed and we were unable to recover it. 00:27:25.603 [2024-07-24 18:22:18.469821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.603 [2024-07-24 18:22:18.469851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.603 qpair failed and we were unable to recover it. 00:27:25.603 [2024-07-24 18:22:18.470121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.603 [2024-07-24 18:22:18.470151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.603 qpair failed and we were unable to recover it. 00:27:25.603 [2024-07-24 18:22:18.470332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.603 [2024-07-24 18:22:18.470363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.603 qpair failed and we were unable to recover it. 
00:27:25.603 [2024-07-24 18:22:18.470568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.603 [2024-07-24 18:22:18.470598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.603 qpair failed and we were unable to recover it. 00:27:25.603 [2024-07-24 18:22:18.470819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.603 [2024-07-24 18:22:18.470849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.603 qpair failed and we were unable to recover it. 00:27:25.603 [2024-07-24 18:22:18.471053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.603 [2024-07-24 18:22:18.471082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.603 qpair failed and we were unable to recover it. 00:27:25.603 [2024-07-24 18:22:18.471289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.603 [2024-07-24 18:22:18.471319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.603 qpair failed and we were unable to recover it. 00:27:25.603 [2024-07-24 18:22:18.471517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.603 [2024-07-24 18:22:18.471548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.603 qpair failed and we were unable to recover it. 00:27:25.603 [2024-07-24 18:22:18.471760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.603 [2024-07-24 18:22:18.471790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.603 qpair failed and we were unable to recover it. 00:27:25.603 [2024-07-24 18:22:18.471989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.603 [2024-07-24 18:22:18.472019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.603 qpair failed and we were unable to recover it. 00:27:25.603 [2024-07-24 18:22:18.472292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.603 [2024-07-24 18:22:18.472321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.603 qpair failed and we were unable to recover it. 00:27:25.603 [2024-07-24 18:22:18.472515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.603 [2024-07-24 18:22:18.472545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.603 qpair failed and we were unable to recover it. 00:27:25.603 [2024-07-24 18:22:18.472821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.603 [2024-07-24 18:22:18.472851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.603 qpair failed and we were unable to recover it. 
00:27:25.603 [2024-07-24 18:22:18.473073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.603 [2024-07-24 18:22:18.473108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.603 qpair failed and we were unable to recover it. 00:27:25.603 [2024-07-24 18:22:18.473288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.603 [2024-07-24 18:22:18.473317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.603 qpair failed and we were unable to recover it. 00:27:25.603 [2024-07-24 18:22:18.473524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.603 [2024-07-24 18:22:18.473555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.603 qpair failed and we were unable to recover it. 00:27:25.603 [2024-07-24 18:22:18.473745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.603 [2024-07-24 18:22:18.473754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.603 qpair failed and we were unable to recover it. 00:27:25.603 [2024-07-24 18:22:18.473824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.603 [2024-07-24 18:22:18.473851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.603 qpair failed and we were unable to recover it. 00:27:25.603 [2024-07-24 18:22:18.473994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.603 [2024-07-24 18:22:18.474024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.603 qpair failed and we were unable to recover it. 00:27:25.603 [2024-07-24 18:22:18.474142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.603 [2024-07-24 18:22:18.474172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.603 qpair failed and we were unable to recover it. 00:27:25.603 [2024-07-24 18:22:18.474368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.603 [2024-07-24 18:22:18.474398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.603 qpair failed and we were unable to recover it. 00:27:25.603 [2024-07-24 18:22:18.474683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.603 [2024-07-24 18:22:18.474714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.603 qpair failed and we were unable to recover it. 00:27:25.603 [2024-07-24 18:22:18.474842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.603 [2024-07-24 18:22:18.474852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.603 qpair failed and we were unable to recover it. 
00:27:25.603 [2024-07-24 18:22:18.475033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.603 [2024-07-24 18:22:18.475063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.603 qpair failed and we were unable to recover it. 00:27:25.603 [2024-07-24 18:22:18.475264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.603 [2024-07-24 18:22:18.475294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.603 qpair failed and we were unable to recover it. 00:27:25.603 [2024-07-24 18:22:18.475509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.603 [2024-07-24 18:22:18.475540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.603 qpair failed and we were unable to recover it. 00:27:25.603 [2024-07-24 18:22:18.475727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.603 [2024-07-24 18:22:18.475737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.603 qpair failed and we were unable to recover it. 00:27:25.603 [2024-07-24 18:22:18.475840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.603 [2024-07-24 18:22:18.475870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.603 qpair failed and we were unable to recover it. 00:27:25.603 [2024-07-24 18:22:18.476091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.603 [2024-07-24 18:22:18.476121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.603 qpair failed and we were unable to recover it. 00:27:25.603 [2024-07-24 18:22:18.476302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.603 [2024-07-24 18:22:18.476332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.603 qpair failed and we were unable to recover it. 00:27:25.603 [2024-07-24 18:22:18.476476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.603 [2024-07-24 18:22:18.476515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.603 qpair failed and we were unable to recover it. 00:27:25.603 [2024-07-24 18:22:18.476724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.603 [2024-07-24 18:22:18.476754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.603 qpair failed and we were unable to recover it. 00:27:25.603 [2024-07-24 18:22:18.476964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.603 [2024-07-24 18:22:18.476993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.603 qpair failed and we were unable to recover it. 
00:27:25.604 [2024-07-24 18:22:18.477203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.604 [2024-07-24 18:22:18.477233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.604 qpair failed and we were unable to recover it. 00:27:25.604 [2024-07-24 18:22:18.477374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.604 [2024-07-24 18:22:18.477404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.604 qpair failed and we were unable to recover it. 00:27:25.604 [2024-07-24 18:22:18.477614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.604 [2024-07-24 18:22:18.477645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.604 qpair failed and we were unable to recover it. 00:27:25.604 [2024-07-24 18:22:18.477892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.604 [2024-07-24 18:22:18.477902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.604 qpair failed and we were unable to recover it. 00:27:25.604 [2024-07-24 18:22:18.478112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.604 [2024-07-24 18:22:18.478142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.604 qpair failed and we were unable to recover it. 00:27:25.604 [2024-07-24 18:22:18.478263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.604 [2024-07-24 18:22:18.478293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.604 qpair failed and we were unable to recover it. 00:27:25.604 [2024-07-24 18:22:18.478510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.604 [2024-07-24 18:22:18.478541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.604 qpair failed and we were unable to recover it. 00:27:25.604 [2024-07-24 18:22:18.478661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.604 [2024-07-24 18:22:18.478691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.604 qpair failed and we were unable to recover it. 00:27:25.604 [2024-07-24 18:22:18.478868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.604 [2024-07-24 18:22:18.478878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.604 qpair failed and we were unable to recover it. 00:27:25.604 [2024-07-24 18:22:18.479033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.604 [2024-07-24 18:22:18.479062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.604 qpair failed and we were unable to recover it. 
00:27:25.604 [2024-07-24 18:22:18.479205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.604 [2024-07-24 18:22:18.479235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.604 qpair failed and we were unable to recover it. 00:27:25.604 [2024-07-24 18:22:18.479475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.604 [2024-07-24 18:22:18.479512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.604 qpair failed and we were unable to recover it. 00:27:25.604 [2024-07-24 18:22:18.479710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.604 [2024-07-24 18:22:18.479740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.604 qpair failed and we were unable to recover it. 00:27:25.604 [2024-07-24 18:22:18.479892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.604 [2024-07-24 18:22:18.479921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.604 qpair failed and we were unable to recover it. 00:27:25.604 [2024-07-24 18:22:18.480121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.604 [2024-07-24 18:22:18.480150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.604 qpair failed and we were unable to recover it. 00:27:25.604 [2024-07-24 18:22:18.480334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.604 [2024-07-24 18:22:18.480364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.604 qpair failed and we were unable to recover it. 00:27:25.604 [2024-07-24 18:22:18.480546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.604 [2024-07-24 18:22:18.480577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.604 qpair failed and we were unable to recover it. 00:27:25.604 [2024-07-24 18:22:18.480803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.604 [2024-07-24 18:22:18.480833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.604 qpair failed and we were unable to recover it. 00:27:25.604 [2024-07-24 18:22:18.481015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.604 [2024-07-24 18:22:18.481044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.604 qpair failed and we were unable to recover it. 00:27:25.604 [2024-07-24 18:22:18.481295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.604 [2024-07-24 18:22:18.481325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.604 qpair failed and we were unable to recover it. 
00:27:25.604 [2024-07-24 18:22:18.481536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.604 [2024-07-24 18:22:18.481572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.604 qpair failed and we were unable to recover it. 00:27:25.604 [2024-07-24 18:22:18.481722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.604 [2024-07-24 18:22:18.481752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.604 qpair failed and we were unable to recover it. 00:27:25.604 [2024-07-24 18:22:18.481956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.604 [2024-07-24 18:22:18.481986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.604 qpair failed and we were unable to recover it. 00:27:25.604 [2024-07-24 18:22:18.482237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.604 [2024-07-24 18:22:18.482268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.604 qpair failed and we were unable to recover it. 00:27:25.604 [2024-07-24 18:22:18.482460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.604 [2024-07-24 18:22:18.482499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.604 qpair failed and we were unable to recover it. 00:27:25.604 [2024-07-24 18:22:18.482682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.604 [2024-07-24 18:22:18.482712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.604 qpair failed and we were unable to recover it. 00:27:25.604 [2024-07-24 18:22:18.482921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.604 [2024-07-24 18:22:18.482951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.604 qpair failed and we were unable to recover it. 00:27:25.604 [2024-07-24 18:22:18.483138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.604 [2024-07-24 18:22:18.483148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.604 qpair failed and we were unable to recover it. 00:27:25.604 [2024-07-24 18:22:18.483304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.604 [2024-07-24 18:22:18.483333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.604 qpair failed and we were unable to recover it. 00:27:25.604 [2024-07-24 18:22:18.483533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.604 [2024-07-24 18:22:18.483564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.604 qpair failed and we were unable to recover it. 
00:27:25.604 [2024-07-24 18:22:18.483747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.604 [2024-07-24 18:22:18.483777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.604 qpair failed and we were unable to recover it. 00:27:25.604 [2024-07-24 18:22:18.483940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.604 [2024-07-24 18:22:18.483950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.604 qpair failed and we were unable to recover it. 00:27:25.604 [2024-07-24 18:22:18.484074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.605 [2024-07-24 18:22:18.484104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.605 qpair failed and we were unable to recover it. 00:27:25.605 [2024-07-24 18:22:18.484284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.605 [2024-07-24 18:22:18.484313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.605 qpair failed and we were unable to recover it. 00:27:25.605 [2024-07-24 18:22:18.484567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.605 [2024-07-24 18:22:18.484598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.605 qpair failed and we were unable to recover it. 00:27:25.605 [2024-07-24 18:22:18.484869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.605 [2024-07-24 18:22:18.484878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.605 qpair failed and we were unable to recover it. 00:27:25.605 [2024-07-24 18:22:18.485016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.605 [2024-07-24 18:22:18.485026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.605 qpair failed and we were unable to recover it. 00:27:25.605 [2024-07-24 18:22:18.485084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.605 [2024-07-24 18:22:18.485094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.605 qpair failed and we were unable to recover it. 00:27:25.605 [2024-07-24 18:22:18.485250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.605 [2024-07-24 18:22:18.485259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.605 qpair failed and we were unable to recover it. 00:27:25.605 [2024-07-24 18:22:18.485472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.605 [2024-07-24 18:22:18.485522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.605 qpair failed and we were unable to recover it. 
00:27:25.605 [2024-07-24 18:22:18.485780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.605 [2024-07-24 18:22:18.485810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.605 qpair failed and we were unable to recover it. 00:27:25.605 [2024-07-24 18:22:18.485935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.605 [2024-07-24 18:22:18.485965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.605 qpair failed and we were unable to recover it. 00:27:25.605 [2024-07-24 18:22:18.486166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.605 [2024-07-24 18:22:18.486197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.605 qpair failed and we were unable to recover it. 00:27:25.605 [2024-07-24 18:22:18.486335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.605 [2024-07-24 18:22:18.486365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.605 qpair failed and we were unable to recover it. 00:27:25.605 [2024-07-24 18:22:18.486642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.605 [2024-07-24 18:22:18.486674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.605 qpair failed and we were unable to recover it. 00:27:25.605 [2024-07-24 18:22:18.486818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.605 [2024-07-24 18:22:18.486848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.605 qpair failed and we were unable to recover it. 00:27:25.605 [2024-07-24 18:22:18.486965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.605 [2024-07-24 18:22:18.486995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.605 qpair failed and we were unable to recover it. 00:27:25.605 [2024-07-24 18:22:18.487176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.605 [2024-07-24 18:22:18.487186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.605 qpair failed and we were unable to recover it. 00:27:25.605 [2024-07-24 18:22:18.487335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.605 [2024-07-24 18:22:18.487345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.605 qpair failed and we were unable to recover it. 00:27:25.605 [2024-07-24 18:22:18.487428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.605 [2024-07-24 18:22:18.487437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.605 qpair failed and we were unable to recover it. 
00:27:25.605 [2024-07-24 18:22:18.487507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.605 [2024-07-24 18:22:18.487517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.605 qpair failed and we were unable to recover it. 00:27:25.605 [2024-07-24 18:22:18.487675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.605 [2024-07-24 18:22:18.487685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.605 qpair failed and we were unable to recover it. 00:27:25.605 [2024-07-24 18:22:18.487802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.605 [2024-07-24 18:22:18.487832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.605 qpair failed and we were unable to recover it. 00:27:25.605 [2024-07-24 18:22:18.487972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.605 [2024-07-24 18:22:18.488002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.605 qpair failed and we were unable to recover it. 00:27:25.605 [2024-07-24 18:22:18.488197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.605 [2024-07-24 18:22:18.488227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.605 qpair failed and we were unable to recover it. 00:27:25.605 [2024-07-24 18:22:18.488407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.605 [2024-07-24 18:22:18.488438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.605 qpair failed and we were unable to recover it. 00:27:25.605 [2024-07-24 18:22:18.488645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.605 [2024-07-24 18:22:18.488655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.605 qpair failed and we were unable to recover it. 00:27:25.605 [2024-07-24 18:22:18.488876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.605 [2024-07-24 18:22:18.488906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.605 qpair failed and we were unable to recover it. 00:27:25.605 [2024-07-24 18:22:18.489157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.605 [2024-07-24 18:22:18.489187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.605 qpair failed and we were unable to recover it. 00:27:25.605 [2024-07-24 18:22:18.489448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.605 [2024-07-24 18:22:18.489478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.605 qpair failed and we were unable to recover it. 
00:27:25.605 [2024-07-24 18:22:18.489747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.605 [2024-07-24 18:22:18.489783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.605 qpair failed and we were unable to recover it. 00:27:25.605 [2024-07-24 18:22:18.489932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.605 [2024-07-24 18:22:18.489968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.605 qpair failed and we were unable to recover it. 00:27:25.605 [2024-07-24 18:22:18.490057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.605 [2024-07-24 18:22:18.490067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.605 qpair failed and we were unable to recover it. 00:27:25.605 [2024-07-24 18:22:18.490240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.605 [2024-07-24 18:22:18.490269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.605 qpair failed and we were unable to recover it. 00:27:25.605 [2024-07-24 18:22:18.490468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.605 [2024-07-24 18:22:18.490508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.605 qpair failed and we were unable to recover it. 00:27:25.605 [2024-07-24 18:22:18.490787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.605 [2024-07-24 18:22:18.490831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.605 qpair failed and we were unable to recover it. 00:27:25.605 [2024-07-24 18:22:18.490986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.605 [2024-07-24 18:22:18.490996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.605 qpair failed and we were unable to recover it. 00:27:25.605 [2024-07-24 18:22:18.491146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.605 [2024-07-24 18:22:18.491155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.605 qpair failed and we were unable to recover it. 00:27:25.605 [2024-07-24 18:22:18.491291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.605 [2024-07-24 18:22:18.491300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.605 qpair failed and we were unable to recover it. 00:27:25.605 [2024-07-24 18:22:18.491515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.605 [2024-07-24 18:22:18.491547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.605 qpair failed and we were unable to recover it. 
00:27:25.606 [2024-07-24 18:22:18.491675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.606 [2024-07-24 18:22:18.491705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.606 qpair failed and we were unable to recover it. 00:27:25.606 [2024-07-24 18:22:18.491891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.606 [2024-07-24 18:22:18.491921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.606 qpair failed and we were unable to recover it. 00:27:25.606 [2024-07-24 18:22:18.492118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.606 [2024-07-24 18:22:18.492147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.606 qpair failed and we were unable to recover it. 00:27:25.606 [2024-07-24 18:22:18.492290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.606 [2024-07-24 18:22:18.492320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.606 qpair failed and we were unable to recover it. 00:27:25.606 [2024-07-24 18:22:18.492522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.606 [2024-07-24 18:22:18.492553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.606 qpair failed and we were unable to recover it. 00:27:25.606 [2024-07-24 18:22:18.492756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.606 [2024-07-24 18:22:18.492766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.606 qpair failed and we were unable to recover it. 00:27:25.606 [2024-07-24 18:22:18.492895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.606 [2024-07-24 18:22:18.492905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.606 qpair failed and we were unable to recover it. 00:27:25.606 [2024-07-24 18:22:18.492991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.606 [2024-07-24 18:22:18.493000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.606 qpair failed and we were unable to recover it. 00:27:25.606 [2024-07-24 18:22:18.493166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.606 [2024-07-24 18:22:18.493195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.606 qpair failed and we were unable to recover it. 00:27:25.606 [2024-07-24 18:22:18.493390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.606 [2024-07-24 18:22:18.493419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.606 qpair failed and we were unable to recover it. 
00:27:25.606 [2024-07-24 18:22:18.493605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.606 [2024-07-24 18:22:18.493615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.606 qpair failed and we were unable to recover it. 00:27:25.606 [2024-07-24 18:22:18.493715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.606 [2024-07-24 18:22:18.493724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.606 qpair failed and we were unable to recover it. 00:27:25.606 [2024-07-24 18:22:18.493929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.606 [2024-07-24 18:22:18.493938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.606 qpair failed and we were unable to recover it. 00:27:25.606 [2024-07-24 18:22:18.494178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.606 [2024-07-24 18:22:18.494187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.606 qpair failed and we were unable to recover it. 00:27:25.606 [2024-07-24 18:22:18.494410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.606 [2024-07-24 18:22:18.494440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.606 qpair failed and we were unable to recover it. 00:27:25.606 [2024-07-24 18:22:18.494641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.606 [2024-07-24 18:22:18.494651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.606 qpair failed and we were unable to recover it. 00:27:25.606 [2024-07-24 18:22:18.494823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.606 [2024-07-24 18:22:18.494853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.606 qpair failed and we were unable to recover it. 00:27:25.606 [2024-07-24 18:22:18.494997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.606 [2024-07-24 18:22:18.495027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.606 qpair failed and we were unable to recover it. 00:27:25.606 [2024-07-24 18:22:18.495159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.606 [2024-07-24 18:22:18.495188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.606 qpair failed and we were unable to recover it. 00:27:25.606 [2024-07-24 18:22:18.495381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.606 [2024-07-24 18:22:18.495411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.606 qpair failed and we were unable to recover it. 
00:27:25.606 [2024-07-24 18:22:18.495609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.606 [2024-07-24 18:22:18.495640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.606 qpair failed and we were unable to recover it. 00:27:25.606 [2024-07-24 18:22:18.495864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.606 [2024-07-24 18:22:18.495894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.606 qpair failed and we were unable to recover it. 00:27:25.606 [2024-07-24 18:22:18.496036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.606 [2024-07-24 18:22:18.496065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.606 qpair failed and we were unable to recover it. 00:27:25.606 [2024-07-24 18:22:18.496275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.606 [2024-07-24 18:22:18.496304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.606 qpair failed and we were unable to recover it. 00:27:25.606 [2024-07-24 18:22:18.496521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.606 [2024-07-24 18:22:18.496551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.606 qpair failed and we were unable to recover it. 00:27:25.606 [2024-07-24 18:22:18.496680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.606 [2024-07-24 18:22:18.496709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.606 qpair failed and we were unable to recover it. 00:27:25.606 [2024-07-24 18:22:18.496904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.606 [2024-07-24 18:22:18.496933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.606 qpair failed and we were unable to recover it. 00:27:25.606 [2024-07-24 18:22:18.497183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.606 [2024-07-24 18:22:18.497213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.606 qpair failed and we were unable to recover it. 00:27:25.606 [2024-07-24 18:22:18.497350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.606 [2024-07-24 18:22:18.497379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.606 qpair failed and we were unable to recover it. 00:27:25.606 [2024-07-24 18:22:18.497604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.606 [2024-07-24 18:22:18.497635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.606 qpair failed and we were unable to recover it. 
00:27:25.610 [2024-07-24 18:22:18.527082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.610 [2024-07-24 18:22:18.527112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.610 qpair failed and we were unable to recover it.
00:27:25.610 [2024-07-24 18:22:18.527248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.610 [2024-07-24 18:22:18.527277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.610 qpair failed and we were unable to recover it.
00:27:25.610 [2024-07-24 18:22:18.527508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.610 [2024-07-24 18:22:18.527539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.610 qpair failed and we were unable to recover it.
00:27:25.610 [2024-07-24 18:22:18.527669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.610 [2024-07-24 18:22:18.527699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.610 qpair failed and we were unable to recover it.
00:27:25.610 [2024-07-24 18:22:18.527821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.610 [2024-07-24 18:22:18.527851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.610 qpair failed and we were unable to recover it.
00:27:25.610 [2024-07-24 18:22:18.527981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.610 [2024-07-24 18:22:18.528011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.610 qpair failed and we were unable to recover it.
00:27:25.610 [2024-07-24 18:22:18.528149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.610 [2024-07-24 18:22:18.528179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.610 qpair failed and we were unable to recover it.
00:27:25.610 [2024-07-24 18:22:18.528349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.610 [2024-07-24 18:22:18.528419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420
00:27:25.610 qpair failed and we were unable to recover it.
00:27:25.610 [2024-07-24 18:22:18.528596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.610 [2024-07-24 18:22:18.528631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420
00:27:25.610 qpair failed and we were unable to recover it.
00:27:25.610 [2024-07-24 18:22:18.528784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.610 [2024-07-24 18:22:18.528799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420
00:27:25.610 qpair failed and we were unable to recover it.
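Note the tqpair value flipping between 0x7f783c000b90 and 0x7f7844000b90 in the records above while the target stays 10.0.0.2:4420: each reconnect attempt appears to operate on a freshly allocated qpair object, so the pointer changes while the error does not. A hedged sketch of that retry shape; try_connect, the attempt count, and the backoff interval are illustrative choices, not SPDK's internals:

#include <errno.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

/* One attempt: a fresh socket per try, mirroring how each retry in the
 * log shows up as a distinct qpair/socket object. */
static int try_connect(const char *ip, unsigned short port)
{
    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    inet_pton(AF_INET, ip, &addr.sin_addr);

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0)
        return fd;                  /* caller owns the connected fd */

    int saved = errno;              /* keep connect()'s errno across close() */
    close(fd);
    errno = saved;
    return -1;
}

int main(void)
{
    for (int attempt = 1; attempt <= 5; attempt++) {
        int fd = try_connect("10.0.0.2", 4420);
        if (fd >= 0) {
            close(fd);
            return 0;               /* a listener finally answered */
        }
        fprintf(stderr, "attempt %d: errno = %d\n", attempt, errno);
        struct timespec delay = { .tv_sec = 0, .tv_nsec = 200000000L };
        nanosleep(&delay, NULL);    /* 200 ms between attempts */
    }
    return 1;                       /* give up, as the log eventually does */
}

Allocating a fresh socket per attempt is the simple, safe choice after a refused connect, since the old descriptor carries the failed state; that is consistent with, though not proof of, why the logged qpair addresses keep changing.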
00:27:25.611 [2024-07-24 18:22:18.537012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.611 [2024-07-24 18:22:18.537022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.612 qpair failed and we were unable to recover it. 00:27:25.612 [2024-07-24 18:22:18.537259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.612 [2024-07-24 18:22:18.537288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.612 qpair failed and we were unable to recover it. 00:27:25.612 [2024-07-24 18:22:18.537418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.612 [2024-07-24 18:22:18.537447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.612 qpair failed and we were unable to recover it. 00:27:25.612 [2024-07-24 18:22:18.537681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.612 [2024-07-24 18:22:18.537712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.612 qpair failed and we were unable to recover it. 00:27:25.612 [2024-07-24 18:22:18.537869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.612 [2024-07-24 18:22:18.537879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.612 qpair failed and we were unable to recover it. 00:27:25.612 [2024-07-24 18:22:18.538094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.612 [2024-07-24 18:22:18.538125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.612 qpair failed and we were unable to recover it. 00:27:25.612 [2024-07-24 18:22:18.538318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.612 [2024-07-24 18:22:18.538348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.612 qpair failed and we were unable to recover it. 00:27:25.612 [2024-07-24 18:22:18.538481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.612 [2024-07-24 18:22:18.538522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.612 qpair failed and we were unable to recover it. 00:27:25.612 [2024-07-24 18:22:18.538716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.612 [2024-07-24 18:22:18.538746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.612 qpair failed and we were unable to recover it. 00:27:25.612 [2024-07-24 18:22:18.538889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.612 [2024-07-24 18:22:18.538919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.612 qpair failed and we were unable to recover it. 
00:27:25.612 [2024-07-24 18:22:18.539117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.612 [2024-07-24 18:22:18.539147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.612 qpair failed and we were unable to recover it. 00:27:25.612 [2024-07-24 18:22:18.539283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.612 [2024-07-24 18:22:18.539312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.612 qpair failed and we were unable to recover it. 00:27:25.612 [2024-07-24 18:22:18.539441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.612 [2024-07-24 18:22:18.539471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.612 qpair failed and we were unable to recover it. 00:27:25.612 [2024-07-24 18:22:18.539727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.612 [2024-07-24 18:22:18.539757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.612 qpair failed and we were unable to recover it. 00:27:25.612 [2024-07-24 18:22:18.539960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.612 [2024-07-24 18:22:18.539989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.612 qpair failed and we were unable to recover it. 00:27:25.612 [2024-07-24 18:22:18.540180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.612 [2024-07-24 18:22:18.540209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.612 qpair failed and we were unable to recover it. 00:27:25.612 [2024-07-24 18:22:18.540344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.612 [2024-07-24 18:22:18.540374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.612 qpair failed and we were unable to recover it. 00:27:25.612 [2024-07-24 18:22:18.540624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.612 [2024-07-24 18:22:18.540674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.612 qpair failed and we were unable to recover it. 00:27:25.612 [2024-07-24 18:22:18.540870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.612 [2024-07-24 18:22:18.540900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.612 qpair failed and we were unable to recover it. 00:27:25.612 [2024-07-24 18:22:18.541098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.612 [2024-07-24 18:22:18.541128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.612 qpair failed and we were unable to recover it. 
00:27:25.612 [2024-07-24 18:22:18.541265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.612 [2024-07-24 18:22:18.541295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.612 qpair failed and we were unable to recover it. 00:27:25.612 [2024-07-24 18:22:18.541484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.612 [2024-07-24 18:22:18.541523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.612 qpair failed and we were unable to recover it. 00:27:25.612 [2024-07-24 18:22:18.541652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.612 [2024-07-24 18:22:18.541682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.612 qpair failed and we were unable to recover it. 00:27:25.612 [2024-07-24 18:22:18.541875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.612 [2024-07-24 18:22:18.541905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.612 qpair failed and we were unable to recover it. 00:27:25.612 [2024-07-24 18:22:18.542173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.612 [2024-07-24 18:22:18.542183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.612 qpair failed and we were unable to recover it. 00:27:25.612 [2024-07-24 18:22:18.542324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.612 [2024-07-24 18:22:18.542334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.612 qpair failed and we were unable to recover it. 00:27:25.612 [2024-07-24 18:22:18.542528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.612 [2024-07-24 18:22:18.542559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.612 qpair failed and we were unable to recover it. 00:27:25.612 [2024-07-24 18:22:18.542735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.612 [2024-07-24 18:22:18.542765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.612 qpair failed and we were unable to recover it. 00:27:25.612 [2024-07-24 18:22:18.542968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.612 [2024-07-24 18:22:18.542998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.612 qpair failed and we were unable to recover it. 00:27:25.612 [2024-07-24 18:22:18.543120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.612 [2024-07-24 18:22:18.543130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.612 qpair failed and we were unable to recover it. 
00:27:25.612 [2024-07-24 18:22:18.543366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.612 [2024-07-24 18:22:18.543397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.612 qpair failed and we were unable to recover it. 00:27:25.612 [2024-07-24 18:22:18.543541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.612 [2024-07-24 18:22:18.543571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.612 qpair failed and we were unable to recover it. 00:27:25.612 [2024-07-24 18:22:18.543775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.612 [2024-07-24 18:22:18.543805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.612 qpair failed and we were unable to recover it. 00:27:25.612 [2024-07-24 18:22:18.543944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.612 [2024-07-24 18:22:18.543954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.612 qpair failed and we were unable to recover it. 00:27:25.612 [2024-07-24 18:22:18.544058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.612 [2024-07-24 18:22:18.544068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.612 qpair failed and we were unable to recover it. 00:27:25.612 [2024-07-24 18:22:18.544347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.612 [2024-07-24 18:22:18.544377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.612 qpair failed and we were unable to recover it. 00:27:25.612 [2024-07-24 18:22:18.544588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.612 [2024-07-24 18:22:18.544621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.612 qpair failed and we were unable to recover it. 00:27:25.612 [2024-07-24 18:22:18.544826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.612 [2024-07-24 18:22:18.544862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.612 qpair failed and we were unable to recover it. 00:27:25.612 [2024-07-24 18:22:18.545038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.613 [2024-07-24 18:22:18.545048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.613 qpair failed and we were unable to recover it. 00:27:25.613 [2024-07-24 18:22:18.545122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.613 [2024-07-24 18:22:18.545133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.613 qpair failed and we were unable to recover it. 
00:27:25.613 [2024-07-24 18:22:18.545329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.613 [2024-07-24 18:22:18.545362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.613 qpair failed and we were unable to recover it. 00:27:25.613 [2024-07-24 18:22:18.545549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.613 [2024-07-24 18:22:18.545580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.613 qpair failed and we were unable to recover it. 00:27:25.613 [2024-07-24 18:22:18.545767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.613 [2024-07-24 18:22:18.545777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.613 qpair failed and we were unable to recover it. 00:27:25.613 [2024-07-24 18:22:18.545926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.613 [2024-07-24 18:22:18.545956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.613 qpair failed and we were unable to recover it. 00:27:25.613 [2024-07-24 18:22:18.546236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.613 [2024-07-24 18:22:18.546266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.613 qpair failed and we were unable to recover it. 00:27:25.613 [2024-07-24 18:22:18.546446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.613 [2024-07-24 18:22:18.546476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.613 qpair failed and we were unable to recover it. 00:27:25.613 [2024-07-24 18:22:18.546620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.613 [2024-07-24 18:22:18.546651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.613 qpair failed and we were unable to recover it. 00:27:25.613 [2024-07-24 18:22:18.546856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.613 [2024-07-24 18:22:18.546886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.613 qpair failed and we were unable to recover it. 00:27:25.613 [2024-07-24 18:22:18.547060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.613 [2024-07-24 18:22:18.547090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.613 qpair failed and we were unable to recover it. 00:27:25.613 [2024-07-24 18:22:18.547226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.613 [2024-07-24 18:22:18.547255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.613 qpair failed and we were unable to recover it. 
00:27:25.613 [2024-07-24 18:22:18.547383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.613 [2024-07-24 18:22:18.547413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.613 qpair failed and we were unable to recover it. 00:27:25.613 [2024-07-24 18:22:18.547632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.613 [2024-07-24 18:22:18.547663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.613 qpair failed and we were unable to recover it. 00:27:25.613 [2024-07-24 18:22:18.547921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.613 [2024-07-24 18:22:18.547951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.613 qpair failed and we were unable to recover it. 00:27:25.613 [2024-07-24 18:22:18.548135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.613 [2024-07-24 18:22:18.548165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.613 qpair failed and we were unable to recover it. 00:27:25.613 [2024-07-24 18:22:18.548433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.613 [2024-07-24 18:22:18.548463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.613 qpair failed and we were unable to recover it. 00:27:25.613 [2024-07-24 18:22:18.548661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.613 [2024-07-24 18:22:18.548692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.613 qpair failed and we were unable to recover it. 00:27:25.613 [2024-07-24 18:22:18.548810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.613 [2024-07-24 18:22:18.548840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.613 qpair failed and we were unable to recover it. 00:27:25.613 [2024-07-24 18:22:18.548992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.613 [2024-07-24 18:22:18.549023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.613 qpair failed and we were unable to recover it. 00:27:25.613 [2024-07-24 18:22:18.549225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.613 [2024-07-24 18:22:18.549254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.613 qpair failed and we were unable to recover it. 00:27:25.613 [2024-07-24 18:22:18.549463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.613 [2024-07-24 18:22:18.549504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.613 qpair failed and we were unable to recover it. 
00:27:25.613 [2024-07-24 18:22:18.549731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.613 [2024-07-24 18:22:18.549762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.613 qpair failed and we were unable to recover it. 00:27:25.613 [2024-07-24 18:22:18.550040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.613 [2024-07-24 18:22:18.550082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.613 qpair failed and we were unable to recover it. 00:27:25.613 [2024-07-24 18:22:18.550270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.613 [2024-07-24 18:22:18.550279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.613 qpair failed and we were unable to recover it. 00:27:25.613 [2024-07-24 18:22:18.550357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.613 [2024-07-24 18:22:18.550368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.613 qpair failed and we were unable to recover it. 00:27:25.613 [2024-07-24 18:22:18.550510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.613 [2024-07-24 18:22:18.550520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.613 qpair failed and we were unable to recover it. 00:27:25.613 [2024-07-24 18:22:18.550602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.613 [2024-07-24 18:22:18.550612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.613 qpair failed and we were unable to recover it. 00:27:25.613 [2024-07-24 18:22:18.550753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.613 [2024-07-24 18:22:18.550764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.613 qpair failed and we were unable to recover it. 00:27:25.613 [2024-07-24 18:22:18.550945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.613 [2024-07-24 18:22:18.550955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.613 qpair failed and we were unable to recover it. 00:27:25.613 [2024-07-24 18:22:18.551041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.613 [2024-07-24 18:22:18.551051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.613 qpair failed and we were unable to recover it. 00:27:25.613 [2024-07-24 18:22:18.551292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.613 [2024-07-24 18:22:18.551322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.613 qpair failed and we were unable to recover it. 
00:27:25.614 [2024-07-24 18:22:18.551593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.614 [2024-07-24 18:22:18.551625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.614 qpair failed and we were unable to recover it. 00:27:25.614 [2024-07-24 18:22:18.551738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.614 [2024-07-24 18:22:18.551768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.614 qpair failed and we were unable to recover it. 00:27:25.614 [2024-07-24 18:22:18.551978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.614 [2024-07-24 18:22:18.551988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.614 qpair failed and we were unable to recover it. 00:27:25.614 [2024-07-24 18:22:18.552165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.614 [2024-07-24 18:22:18.552195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.614 qpair failed and we were unable to recover it. 00:27:25.614 [2024-07-24 18:22:18.552398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.614 [2024-07-24 18:22:18.552428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.614 qpair failed and we were unable to recover it. 00:27:25.614 [2024-07-24 18:22:18.552651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.614 [2024-07-24 18:22:18.552681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.614 qpair failed and we were unable to recover it. 00:27:25.614 [2024-07-24 18:22:18.552869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.614 [2024-07-24 18:22:18.552884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.614 qpair failed and we were unable to recover it. 00:27:25.614 [2024-07-24 18:22:18.552942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.614 [2024-07-24 18:22:18.552954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.614 qpair failed and we were unable to recover it. 00:27:25.614 [2024-07-24 18:22:18.553061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.614 [2024-07-24 18:22:18.553090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.614 qpair failed and we were unable to recover it. 00:27:25.614 [2024-07-24 18:22:18.553286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.614 [2024-07-24 18:22:18.553316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.614 qpair failed and we were unable to recover it. 
00:27:25.614 [2024-07-24 18:22:18.553568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.614 [2024-07-24 18:22:18.553599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.614 qpair failed and we were unable to recover it. 00:27:25.614 [2024-07-24 18:22:18.553825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.614 [2024-07-24 18:22:18.553854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.614 qpair failed and we were unable to recover it. 00:27:25.614 [2024-07-24 18:22:18.554057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.614 [2024-07-24 18:22:18.554088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.614 qpair failed and we were unable to recover it. 00:27:25.614 [2024-07-24 18:22:18.554199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.614 [2024-07-24 18:22:18.554220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.614 qpair failed and we were unable to recover it. 00:27:25.614 [2024-07-24 18:22:18.554477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.614 [2024-07-24 18:22:18.554487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.614 qpair failed and we were unable to recover it. 00:27:25.614 [2024-07-24 18:22:18.554720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.614 [2024-07-24 18:22:18.554730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.614 qpair failed and we were unable to recover it. 00:27:25.614 [2024-07-24 18:22:18.554893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.614 [2024-07-24 18:22:18.554902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.614 qpair failed and we were unable to recover it. 00:27:25.614 [2024-07-24 18:22:18.555113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.614 [2024-07-24 18:22:18.555142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.614 qpair failed and we were unable to recover it. 00:27:25.614 [2024-07-24 18:22:18.555286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.614 [2024-07-24 18:22:18.555316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.614 qpair failed and we were unable to recover it. 00:27:25.614 [2024-07-24 18:22:18.555513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.614 [2024-07-24 18:22:18.555544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.614 qpair failed and we were unable to recover it. 
00:27:25.614 [2024-07-24 18:22:18.555691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.614 [2024-07-24 18:22:18.555722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.614 qpair failed and we were unable to recover it. 00:27:25.614 [2024-07-24 18:22:18.555918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.614 [2024-07-24 18:22:18.555948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.614 qpair failed and we were unable to recover it. 00:27:25.614 [2024-07-24 18:22:18.556217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.614 [2024-07-24 18:22:18.556247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.614 qpair failed and we were unable to recover it. 00:27:25.614 [2024-07-24 18:22:18.556507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.614 [2024-07-24 18:22:18.556537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.614 qpair failed and we were unable to recover it. 00:27:25.614 [2024-07-24 18:22:18.556663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.614 [2024-07-24 18:22:18.556693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.614 qpair failed and we were unable to recover it. 00:27:25.614 [2024-07-24 18:22:18.556831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.614 [2024-07-24 18:22:18.556860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.614 qpair failed and we were unable to recover it. 00:27:25.614 [2024-07-24 18:22:18.557107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.614 [2024-07-24 18:22:18.557137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.614 qpair failed and we were unable to recover it. 00:27:25.614 [2024-07-24 18:22:18.557378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.614 [2024-07-24 18:22:18.557388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.614 qpair failed and we were unable to recover it. 00:27:25.614 [2024-07-24 18:22:18.557547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.614 [2024-07-24 18:22:18.557557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.614 qpair failed and we were unable to recover it. 00:27:25.614 [2024-07-24 18:22:18.557692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.614 [2024-07-24 18:22:18.557702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.614 qpair failed and we were unable to recover it. 
00:27:25.614 [2024-07-24 18:22:18.557862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.614 [2024-07-24 18:22:18.557892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.614 qpair failed and we were unable to recover it. 00:27:25.614 [2024-07-24 18:22:18.558025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.614 [2024-07-24 18:22:18.558054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.614 qpair failed and we were unable to recover it. 00:27:25.614 [2024-07-24 18:22:18.558180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.614 [2024-07-24 18:22:18.558210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.614 qpair failed and we were unable to recover it. 00:27:25.614 [2024-07-24 18:22:18.558407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.614 [2024-07-24 18:22:18.558437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.614 qpair failed and we were unable to recover it. 00:27:25.614 [2024-07-24 18:22:18.558745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.614 [2024-07-24 18:22:18.558776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.614 qpair failed and we were unable to recover it. 00:27:25.614 [2024-07-24 18:22:18.558907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.614 [2024-07-24 18:22:18.558936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.614 qpair failed and we were unable to recover it. 00:27:25.614 [2024-07-24 18:22:18.559128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.614 [2024-07-24 18:22:18.559158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.614 qpair failed and we were unable to recover it. 00:27:25.614 [2024-07-24 18:22:18.559352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.615 [2024-07-24 18:22:18.559381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.615 qpair failed and we were unable to recover it. 00:27:25.615 [2024-07-24 18:22:18.559567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.615 [2024-07-24 18:22:18.559599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.615 qpair failed and we were unable to recover it. 00:27:25.615 [2024-07-24 18:22:18.559721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.615 [2024-07-24 18:22:18.559751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.615 qpair failed and we were unable to recover it. 
00:27:25.615 [2024-07-24 18:22:18.559950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.615 [2024-07-24 18:22:18.559980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.615 qpair failed and we were unable to recover it. 00:27:25.615 [2024-07-24 18:22:18.560171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.615 [2024-07-24 18:22:18.560201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.615 qpair failed and we were unable to recover it. 00:27:25.615 [2024-07-24 18:22:18.560453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.615 [2024-07-24 18:22:18.560482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.615 qpair failed and we were unable to recover it. 00:27:25.615 [2024-07-24 18:22:18.560676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.615 [2024-07-24 18:22:18.560706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.615 qpair failed and we were unable to recover it. 00:27:25.615 [2024-07-24 18:22:18.560890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.615 [2024-07-24 18:22:18.560920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.615 qpair failed and we were unable to recover it. 00:27:25.615 [2024-07-24 18:22:18.561129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.615 [2024-07-24 18:22:18.561139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.615 qpair failed and we were unable to recover it. 00:27:25.615 [2024-07-24 18:22:18.561292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.615 [2024-07-24 18:22:18.561321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.615 qpair failed and we were unable to recover it. 00:27:25.615 [2024-07-24 18:22:18.561477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.615 [2024-07-24 18:22:18.561521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.615 qpair failed and we were unable to recover it. 00:27:25.615 [2024-07-24 18:22:18.561652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.615 [2024-07-24 18:22:18.561682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.615 qpair failed and we were unable to recover it. 00:27:25.615 [2024-07-24 18:22:18.561866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.615 [2024-07-24 18:22:18.561895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.615 qpair failed and we were unable to recover it. 
00:27:25.615 [2024-07-24 18:22:18.562083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.615 [2024-07-24 18:22:18.562093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.615 qpair failed and we were unable to recover it. 00:27:25.615 [2024-07-24 18:22:18.562195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.615 [2024-07-24 18:22:18.562205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.615 qpair failed and we were unable to recover it. 00:27:25.615 [2024-07-24 18:22:18.562290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.615 [2024-07-24 18:22:18.562299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.615 qpair failed and we were unable to recover it. 00:27:25.615 [2024-07-24 18:22:18.562443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.615 [2024-07-24 18:22:18.562452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.615 qpair failed and we were unable to recover it. 00:27:25.615 [2024-07-24 18:22:18.562544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.615 [2024-07-24 18:22:18.562554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.615 qpair failed and we were unable to recover it. 00:27:25.615 [2024-07-24 18:22:18.562726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.615 [2024-07-24 18:22:18.562736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.615 qpair failed and we were unable to recover it. 00:27:25.615 [2024-07-24 18:22:18.562885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.615 [2024-07-24 18:22:18.562895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.615 qpair failed and we were unable to recover it. 00:27:25.615 [2024-07-24 18:22:18.562973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.615 [2024-07-24 18:22:18.562983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.615 qpair failed and we were unable to recover it. 00:27:25.615 [2024-07-24 18:22:18.563173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.615 [2024-07-24 18:22:18.563203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.615 qpair failed and we were unable to recover it. 00:27:25.615 [2024-07-24 18:22:18.563401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.615 [2024-07-24 18:22:18.563431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.615 qpair failed and we were unable to recover it. 
00:27:25.615 [2024-07-24 18:22:18.563624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.615 [2024-07-24 18:22:18.563654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.615 qpair failed and we were unable to recover it. 00:27:25.615 [2024-07-24 18:22:18.563776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.615 [2024-07-24 18:22:18.563801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.615 qpair failed and we were unable to recover it. 00:27:25.615 [2024-07-24 18:22:18.563962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.615 [2024-07-24 18:22:18.563973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.615 qpair failed and we were unable to recover it. 00:27:25.615 [2024-07-24 18:22:18.564135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.615 [2024-07-24 18:22:18.564165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.615 qpair failed and we were unable to recover it. 00:27:25.615 [2024-07-24 18:22:18.564362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.615 [2024-07-24 18:22:18.564391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.615 qpair failed and we were unable to recover it. 00:27:25.615 [2024-07-24 18:22:18.564584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.615 [2024-07-24 18:22:18.564614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.615 qpair failed and we were unable to recover it. 00:27:25.615 [2024-07-24 18:22:18.564877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.615 [2024-07-24 18:22:18.564906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.615 qpair failed and we were unable to recover it. 00:27:25.615 [2024-07-24 18:22:18.565202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.615 [2024-07-24 18:22:18.565211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.615 qpair failed and we were unable to recover it. 00:27:25.615 [2024-07-24 18:22:18.565358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.615 [2024-07-24 18:22:18.565367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.615 qpair failed and we were unable to recover it. 00:27:25.615 [2024-07-24 18:22:18.565435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.615 [2024-07-24 18:22:18.565444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.615 qpair failed and we were unable to recover it. 
00:27:25.615 [2024-07-24 18:22:18.565628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.615 [2024-07-24 18:22:18.565638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.615 qpair failed and we were unable to recover it.
00:27:25.615-00:27:25.621 [the three messages above repeat for every connect attempt from 18:22:18.565846 through 18:22:18.608900, differing only in timestamp and qpair handle: most attempts target tqpair=0x7f783c000b90, with shorter runs against tqpair=0x15ddf30 and tqpair=0x7f7834000b90, all to addr=10.0.0.2, port=4420, and each attempt ends with "qpair failed and we were unable to recover it."]
00:27:25.621 [2024-07-24 18:22:18.609085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.621 [2024-07-24 18:22:18.609114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7834000b90 with addr=10.0.0.2, port=4420 00:27:25.621 qpair failed and we were unable to recover it. 00:27:25.621 [2024-07-24 18:22:18.609305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.621 [2024-07-24 18:22:18.609319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7834000b90 with addr=10.0.0.2, port=4420 00:27:25.621 qpair failed and we were unable to recover it. 00:27:25.621 [2024-07-24 18:22:18.609481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.621 [2024-07-24 18:22:18.609522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.621 qpair failed and we were unable to recover it. 00:27:25.621 [2024-07-24 18:22:18.609744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.621 [2024-07-24 18:22:18.609774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.621 qpair failed and we were unable to recover it. 00:27:25.621 [2024-07-24 18:22:18.609973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.621 [2024-07-24 18:22:18.610003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.621 qpair failed and we were unable to recover it. 00:27:25.621 [2024-07-24 18:22:18.610255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.621 [2024-07-24 18:22:18.610285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.621 qpair failed and we were unable to recover it. 00:27:25.621 [2024-07-24 18:22:18.610412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.621 [2024-07-24 18:22:18.610441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.621 qpair failed and we were unable to recover it. 00:27:25.621 [2024-07-24 18:22:18.610686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.621 [2024-07-24 18:22:18.610718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.621 qpair failed and we were unable to recover it. 00:27:25.621 [2024-07-24 18:22:18.610913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.621 [2024-07-24 18:22:18.610923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.621 qpair failed and we were unable to recover it. 00:27:25.621 [2024-07-24 18:22:18.611062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.621 [2024-07-24 18:22:18.611092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.621 qpair failed and we were unable to recover it. 
00:27:25.621 [2024-07-24 18:22:18.611234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.621 [2024-07-24 18:22:18.611264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.621 qpair failed and we were unable to recover it. 00:27:25.621 [2024-07-24 18:22:18.611447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.621 [2024-07-24 18:22:18.611476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.621 qpair failed and we were unable to recover it. 00:27:25.621 [2024-07-24 18:22:18.611678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.621 [2024-07-24 18:22:18.611708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.621 qpair failed and we were unable to recover it. 00:27:25.621 [2024-07-24 18:22:18.611892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.621 [2024-07-24 18:22:18.611922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.621 qpair failed and we were unable to recover it. 00:27:25.621 [2024-07-24 18:22:18.612132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.622 [2024-07-24 18:22:18.612162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.622 qpair failed and we were unable to recover it. 00:27:25.622 [2024-07-24 18:22:18.612407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.622 [2024-07-24 18:22:18.612417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.622 qpair failed and we were unable to recover it. 00:27:25.622 [2024-07-24 18:22:18.612573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.622 [2024-07-24 18:22:18.612583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.622 qpair failed and we were unable to recover it. 00:27:25.622 [2024-07-24 18:22:18.612761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.622 [2024-07-24 18:22:18.612791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.622 qpair failed and we were unable to recover it. 00:27:25.622 [2024-07-24 18:22:18.612910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.622 [2024-07-24 18:22:18.612940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.622 qpair failed and we were unable to recover it. 00:27:25.622 [2024-07-24 18:22:18.613055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.622 [2024-07-24 18:22:18.613085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.622 qpair failed and we were unable to recover it. 
00:27:25.622 [2024-07-24 18:22:18.613377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.622 [2024-07-24 18:22:18.613407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.622 qpair failed and we were unable to recover it. 00:27:25.622 [2024-07-24 18:22:18.613660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.622 [2024-07-24 18:22:18.613692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.622 qpair failed and we were unable to recover it. 00:27:25.622 [2024-07-24 18:22:18.613955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.622 [2024-07-24 18:22:18.613985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.622 qpair failed and we were unable to recover it. 00:27:25.622 [2024-07-24 18:22:18.614110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.622 [2024-07-24 18:22:18.614120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.622 qpair failed and we were unable to recover it. 00:27:25.622 [2024-07-24 18:22:18.614301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.622 [2024-07-24 18:22:18.614331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.622 qpair failed and we were unable to recover it. 00:27:25.622 [2024-07-24 18:22:18.614544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.622 [2024-07-24 18:22:18.614575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.622 qpair failed and we were unable to recover it. 00:27:25.622 [2024-07-24 18:22:18.614724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.622 [2024-07-24 18:22:18.614754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.622 qpair failed and we were unable to recover it. 00:27:25.622 [2024-07-24 18:22:18.614961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.622 [2024-07-24 18:22:18.614991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.622 qpair failed and we were unable to recover it. 00:27:25.622 [2024-07-24 18:22:18.615133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.622 [2024-07-24 18:22:18.615162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.622 qpair failed and we were unable to recover it. 00:27:25.622 [2024-07-24 18:22:18.615371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.622 [2024-07-24 18:22:18.615401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.622 qpair failed and we were unable to recover it. 
00:27:25.622 [2024-07-24 18:22:18.615534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.622 [2024-07-24 18:22:18.615566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.622 qpair failed and we were unable to recover it. 00:27:25.622 [2024-07-24 18:22:18.615788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.622 [2024-07-24 18:22:18.615824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.622 qpair failed and we were unable to recover it. 00:27:25.622 [2024-07-24 18:22:18.615925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.622 [2024-07-24 18:22:18.615935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.622 qpair failed and we were unable to recover it. 00:27:25.622 [2024-07-24 18:22:18.616168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.622 [2024-07-24 18:22:18.616198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.622 qpair failed and we were unable to recover it. 00:27:25.622 [2024-07-24 18:22:18.616382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.622 [2024-07-24 18:22:18.616417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.622 qpair failed and we were unable to recover it. 00:27:25.622 [2024-07-24 18:22:18.616699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.622 [2024-07-24 18:22:18.616731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.622 qpair failed and we were unable to recover it. 00:27:25.622 [2024-07-24 18:22:18.617004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.622 [2024-07-24 18:22:18.617033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.622 qpair failed and we were unable to recover it. 00:27:25.622 [2024-07-24 18:22:18.617211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.622 [2024-07-24 18:22:18.617221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.622 qpair failed and we were unable to recover it. 00:27:25.622 [2024-07-24 18:22:18.617455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.622 [2024-07-24 18:22:18.617485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.622 qpair failed and we were unable to recover it. 00:27:25.622 [2024-07-24 18:22:18.617647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.622 [2024-07-24 18:22:18.617678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.622 qpair failed and we were unable to recover it. 
00:27:25.622 [2024-07-24 18:22:18.617936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.622 [2024-07-24 18:22:18.617966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.622 qpair failed and we were unable to recover it. 00:27:25.622 [2024-07-24 18:22:18.618217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.622 [2024-07-24 18:22:18.618247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.622 qpair failed and we were unable to recover it. 00:27:25.622 [2024-07-24 18:22:18.618383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.622 [2024-07-24 18:22:18.618412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.622 qpair failed and we were unable to recover it. 00:27:25.622 [2024-07-24 18:22:18.618614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.622 [2024-07-24 18:22:18.618645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.622 qpair failed and we were unable to recover it. 00:27:25.622 [2024-07-24 18:22:18.618844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.622 [2024-07-24 18:22:18.618874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.622 qpair failed and we were unable to recover it. 00:27:25.622 [2024-07-24 18:22:18.619125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.623 [2024-07-24 18:22:18.619156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.623 qpair failed and we were unable to recover it. 00:27:25.623 [2024-07-24 18:22:18.619297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.623 [2024-07-24 18:22:18.619326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.623 qpair failed and we were unable to recover it. 00:27:25.623 [2024-07-24 18:22:18.619608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.623 [2024-07-24 18:22:18.619639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.623 qpair failed and we were unable to recover it. 00:27:25.623 [2024-07-24 18:22:18.619951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.623 [2024-07-24 18:22:18.619981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.623 qpair failed and we were unable to recover it. 00:27:25.623 [2024-07-24 18:22:18.620186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.623 [2024-07-24 18:22:18.620216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.623 qpair failed and we were unable to recover it. 
00:27:25.623 [2024-07-24 18:22:18.620360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.623 [2024-07-24 18:22:18.620389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.623 qpair failed and we were unable to recover it. 00:27:25.623 [2024-07-24 18:22:18.620641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.623 [2024-07-24 18:22:18.620673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.623 qpair failed and we were unable to recover it. 00:27:25.623 [2024-07-24 18:22:18.620799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.623 [2024-07-24 18:22:18.620829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.623 qpair failed and we were unable to recover it. 00:27:25.623 [2024-07-24 18:22:18.621011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.623 [2024-07-24 18:22:18.621042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.623 qpair failed and we were unable to recover it. 00:27:25.623 [2024-07-24 18:22:18.621289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.623 [2024-07-24 18:22:18.621299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.623 qpair failed and we were unable to recover it. 00:27:25.623 [2024-07-24 18:22:18.621369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.623 [2024-07-24 18:22:18.621379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.623 qpair failed and we were unable to recover it. 00:27:25.623 [2024-07-24 18:22:18.621601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.623 [2024-07-24 18:22:18.621611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.623 qpair failed and we were unable to recover it. 00:27:25.623 [2024-07-24 18:22:18.621760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.623 [2024-07-24 18:22:18.621770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.623 qpair failed and we were unable to recover it. 00:27:25.623 [2024-07-24 18:22:18.621909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.623 [2024-07-24 18:22:18.621938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.623 qpair failed and we were unable to recover it. 00:27:25.623 [2024-07-24 18:22:18.622145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.623 [2024-07-24 18:22:18.622175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.623 qpair failed and we were unable to recover it. 
00:27:25.623 [2024-07-24 18:22:18.622455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.623 [2024-07-24 18:22:18.622485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.623 qpair failed and we were unable to recover it. 00:27:25.623 [2024-07-24 18:22:18.622644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.623 [2024-07-24 18:22:18.622674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.623 qpair failed and we were unable to recover it. 00:27:25.623 [2024-07-24 18:22:18.622873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.623 [2024-07-24 18:22:18.622902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.623 qpair failed and we were unable to recover it. 00:27:25.623 [2024-07-24 18:22:18.623034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.623 [2024-07-24 18:22:18.623044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.623 qpair failed and we were unable to recover it. 00:27:25.623 [2024-07-24 18:22:18.623263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.623 [2024-07-24 18:22:18.623292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.623 qpair failed and we were unable to recover it. 00:27:25.623 [2024-07-24 18:22:18.623475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.623 [2024-07-24 18:22:18.623516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.623 qpair failed and we were unable to recover it. 00:27:25.623 [2024-07-24 18:22:18.623764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.623 [2024-07-24 18:22:18.623794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.623 qpair failed and we were unable to recover it. 00:27:25.623 [2024-07-24 18:22:18.623958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.623 [2024-07-24 18:22:18.623988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.623 qpair failed and we were unable to recover it. 00:27:25.623 [2024-07-24 18:22:18.624180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.623 [2024-07-24 18:22:18.624210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.623 qpair failed and we were unable to recover it. 00:27:25.623 [2024-07-24 18:22:18.624404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.623 [2024-07-24 18:22:18.624413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.623 qpair failed and we were unable to recover it. 
00:27:25.623 [2024-07-24 18:22:18.624581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.623 [2024-07-24 18:22:18.624612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.623 qpair failed and we were unable to recover it. 00:27:25.623 [2024-07-24 18:22:18.624750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.623 [2024-07-24 18:22:18.624781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.623 qpair failed and we were unable to recover it. 00:27:25.623 [2024-07-24 18:22:18.624973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.623 [2024-07-24 18:22:18.625003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.623 qpair failed and we were unable to recover it. 00:27:25.623 [2024-07-24 18:22:18.625199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.623 [2024-07-24 18:22:18.625230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.623 qpair failed and we were unable to recover it. 00:27:25.623 [2024-07-24 18:22:18.625434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.623 [2024-07-24 18:22:18.625470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.623 qpair failed and we were unable to recover it. 00:27:25.623 [2024-07-24 18:22:18.625603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.623 [2024-07-24 18:22:18.625633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.623 qpair failed and we were unable to recover it. 00:27:25.623 [2024-07-24 18:22:18.625906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.623 [2024-07-24 18:22:18.625936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.623 qpair failed and we were unable to recover it. 00:27:25.623 [2024-07-24 18:22:18.626160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.623 [2024-07-24 18:22:18.626190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.623 qpair failed and we were unable to recover it. 00:27:25.623 [2024-07-24 18:22:18.626328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.623 [2024-07-24 18:22:18.626337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.623 qpair failed and we were unable to recover it. 00:27:25.623 [2024-07-24 18:22:18.626495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.623 [2024-07-24 18:22:18.626505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.623 qpair failed and we were unable to recover it. 
00:27:25.623 [2024-07-24 18:22:18.626736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.623 [2024-07-24 18:22:18.626746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.623 qpair failed and we were unable to recover it. 00:27:25.623 [2024-07-24 18:22:18.626928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.623 [2024-07-24 18:22:18.626958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.623 qpair failed and we were unable to recover it. 00:27:25.623 [2024-07-24 18:22:18.627082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.623 [2024-07-24 18:22:18.627112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.623 qpair failed and we were unable to recover it. 00:27:25.624 [2024-07-24 18:22:18.627364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.624 [2024-07-24 18:22:18.627394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.624 qpair failed and we were unable to recover it. 00:27:25.624 [2024-07-24 18:22:18.627598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.624 [2024-07-24 18:22:18.627629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.624 qpair failed and we were unable to recover it. 00:27:25.624 [2024-07-24 18:22:18.627878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.624 [2024-07-24 18:22:18.627907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.624 qpair failed and we were unable to recover it. 00:27:25.624 [2024-07-24 18:22:18.628096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.624 [2024-07-24 18:22:18.628106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.624 qpair failed and we were unable to recover it. 00:27:25.624 [2024-07-24 18:22:18.628295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.624 [2024-07-24 18:22:18.628325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.624 qpair failed and we were unable to recover it. 00:27:25.624 [2024-07-24 18:22:18.628538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.624 [2024-07-24 18:22:18.628569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.624 qpair failed and we were unable to recover it. 00:27:25.624 [2024-07-24 18:22:18.628754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.624 [2024-07-24 18:22:18.628783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.624 qpair failed and we were unable to recover it. 
00:27:25.624 [2024-07-24 18:22:18.628990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.624 [2024-07-24 18:22:18.629019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.624 qpair failed and we were unable to recover it. 00:27:25.624 [2024-07-24 18:22:18.629184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.624 [2024-07-24 18:22:18.629194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.624 qpair failed and we were unable to recover it. 00:27:25.624 [2024-07-24 18:22:18.629278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.624 [2024-07-24 18:22:18.629288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.624 qpair failed and we were unable to recover it. 00:27:25.624 [2024-07-24 18:22:18.629363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.624 [2024-07-24 18:22:18.629393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.624 qpair failed and we were unable to recover it. 00:27:25.624 [2024-07-24 18:22:18.629673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.624 [2024-07-24 18:22:18.629704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.624 qpair failed and we were unable to recover it. 00:27:25.624 [2024-07-24 18:22:18.629850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.624 [2024-07-24 18:22:18.629880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.624 qpair failed and we were unable to recover it. 00:27:25.624 [2024-07-24 18:22:18.630059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.624 [2024-07-24 18:22:18.630089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.624 qpair failed and we were unable to recover it. 00:27:25.624 [2024-07-24 18:22:18.630350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.624 [2024-07-24 18:22:18.630360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.624 qpair failed and we were unable to recover it. 00:27:25.624 [2024-07-24 18:22:18.630576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.624 [2024-07-24 18:22:18.630585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.624 qpair failed and we were unable to recover it. 00:27:25.624 [2024-07-24 18:22:18.630738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.624 [2024-07-24 18:22:18.630768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.624 qpair failed and we were unable to recover it. 
00:27:25.624 [2024-07-24 18:22:18.631039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.624 [2024-07-24 18:22:18.631069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.624 qpair failed and we were unable to recover it. 00:27:25.624 [2024-07-24 18:22:18.631216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.624 [2024-07-24 18:22:18.631246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.624 qpair failed and we were unable to recover it. 00:27:25.624 [2024-07-24 18:22:18.631428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.624 [2024-07-24 18:22:18.631458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.624 qpair failed and we were unable to recover it. 00:27:25.624 [2024-07-24 18:22:18.631595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.624 [2024-07-24 18:22:18.631626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.624 qpair failed and we were unable to recover it. 00:27:25.624 [2024-07-24 18:22:18.631848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.624 [2024-07-24 18:22:18.631878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.624 qpair failed and we were unable to recover it. 00:27:25.624 [2024-07-24 18:22:18.632131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.624 [2024-07-24 18:22:18.632162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.624 qpair failed and we were unable to recover it. 00:27:25.624 [2024-07-24 18:22:18.632293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.624 [2024-07-24 18:22:18.632324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.624 qpair failed and we were unable to recover it. 00:27:25.624 [2024-07-24 18:22:18.632527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.624 [2024-07-24 18:22:18.632559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.624 qpair failed and we were unable to recover it. 00:27:25.624 [2024-07-24 18:22:18.632690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.624 [2024-07-24 18:22:18.632720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.624 qpair failed and we were unable to recover it. 00:27:25.624 [2024-07-24 18:22:18.632925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.624 [2024-07-24 18:22:18.632955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.624 qpair failed and we were unable to recover it. 
00:27:25.624 [2024-07-24 18:22:18.633084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.624 [2024-07-24 18:22:18.633114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.624 qpair failed and we were unable to recover it. 00:27:25.624 [2024-07-24 18:22:18.633338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.624 [2024-07-24 18:22:18.633368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.624 qpair failed and we were unable to recover it. 00:27:25.624 [2024-07-24 18:22:18.633619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.624 [2024-07-24 18:22:18.633650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.624 qpair failed and we were unable to recover it. 00:27:25.624 [2024-07-24 18:22:18.633930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.624 [2024-07-24 18:22:18.633960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.624 qpair failed and we were unable to recover it. 00:27:25.624 [2024-07-24 18:22:18.634163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.624 [2024-07-24 18:22:18.634198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.624 qpair failed and we were unable to recover it. 00:27:25.624 [2024-07-24 18:22:18.634475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.624 [2024-07-24 18:22:18.634518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.624 qpair failed and we were unable to recover it. 00:27:25.624 [2024-07-24 18:22:18.634715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.624 [2024-07-24 18:22:18.634745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.624 qpair failed and we were unable to recover it. 00:27:25.624 [2024-07-24 18:22:18.634942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.624 [2024-07-24 18:22:18.634976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.624 qpair failed and we were unable to recover it. 00:27:25.624 [2024-07-24 18:22:18.635210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.624 [2024-07-24 18:22:18.635220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.624 qpair failed and we were unable to recover it. 00:27:25.624 [2024-07-24 18:22:18.635377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.624 [2024-07-24 18:22:18.635387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.624 qpair failed and we were unable to recover it. 
00:27:25.624 [2024-07-24 18:22:18.635598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.624 [2024-07-24 18:22:18.635608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.624 qpair failed and we were unable to recover it. 00:27:25.625 [2024-07-24 18:22:18.635691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.625 [2024-07-24 18:22:18.635702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.625 qpair failed and we were unable to recover it. 00:27:25.625 [2024-07-24 18:22:18.635854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.625 [2024-07-24 18:22:18.635863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.625 qpair failed and we were unable to recover it. 00:27:25.625 [2024-07-24 18:22:18.636091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.625 [2024-07-24 18:22:18.636100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.625 qpair failed and we were unable to recover it. 00:27:25.625 [2024-07-24 18:22:18.636310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.625 [2024-07-24 18:22:18.636319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.625 qpair failed and we were unable to recover it. 00:27:25.625 [2024-07-24 18:22:18.636418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.625 [2024-07-24 18:22:18.636447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.625 qpair failed and we were unable to recover it. 00:27:25.625 [2024-07-24 18:22:18.636585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.625 [2024-07-24 18:22:18.636616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.625 qpair failed and we were unable to recover it. 00:27:25.625 [2024-07-24 18:22:18.636760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.625 [2024-07-24 18:22:18.636790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.625 qpair failed and we were unable to recover it. 00:27:25.625 [2024-07-24 18:22:18.636926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.625 [2024-07-24 18:22:18.636955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.625 qpair failed and we were unable to recover it. 00:27:25.625 [2024-07-24 18:22:18.637101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.625 [2024-07-24 18:22:18.637131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.625 qpair failed and we were unable to recover it. 
00:27:25.625 [2024-07-24 18:22:18.637308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.625 [2024-07-24 18:22:18.637317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.625 qpair failed and we were unable to recover it.
[... condensed: the same three-line failure (posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=... with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it.") repeats uninterrupted from 18:22:18.637399 through 18:22:18.676047, under Jenkins timestamps 00:27:25.625-00:27:25.915. Most entries fail against tqpair=0x7f783c000b90, with short runs against tqpair=0x7f7834000b90 (approx. 18:22:18.650-18:22:18.653) and tqpair=0x7f7844000b90 (approx. 18:22:18.661-18:22:18.665) before returning to 0x7f783c000b90. ...]
00:27:25.915 [2024-07-24 18:22:18.676250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.915 [2024-07-24 18:22:18.676280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.915 qpair failed and we were unable to recover it. 00:27:25.915 [2024-07-24 18:22:18.676470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.915 [2024-07-24 18:22:18.676481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.915 qpair failed and we were unable to recover it. 00:27:25.915 [2024-07-24 18:22:18.676671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.915 [2024-07-24 18:22:18.676702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.915 qpair failed and we were unable to recover it. 00:27:25.915 [2024-07-24 18:22:18.677011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.915 [2024-07-24 18:22:18.677041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.915 qpair failed and we were unable to recover it. 00:27:25.915 [2024-07-24 18:22:18.677160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.915 [2024-07-24 18:22:18.677190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.915 qpair failed and we were unable to recover it. 00:27:25.915 [2024-07-24 18:22:18.677316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.915 [2024-07-24 18:22:18.677326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.915 qpair failed and we were unable to recover it. 00:27:25.915 [2024-07-24 18:22:18.677430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.915 [2024-07-24 18:22:18.677440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.915 qpair failed and we were unable to recover it. 00:27:25.915 [2024-07-24 18:22:18.677655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.915 [2024-07-24 18:22:18.677686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.915 qpair failed and we were unable to recover it. 00:27:25.915 [2024-07-24 18:22:18.677837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.915 [2024-07-24 18:22:18.677867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.915 qpair failed and we were unable to recover it. 00:27:25.915 [2024-07-24 18:22:18.678055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.915 [2024-07-24 18:22:18.678085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.915 qpair failed and we were unable to recover it. 
00:27:25.915 [2024-07-24 18:22:18.678298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.915 [2024-07-24 18:22:18.678328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.915 qpair failed and we were unable to recover it. 00:27:25.915 [2024-07-24 18:22:18.678453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.915 [2024-07-24 18:22:18.678483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.915 qpair failed and we were unable to recover it. 00:27:25.915 [2024-07-24 18:22:18.678768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.915 [2024-07-24 18:22:18.678799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.915 qpair failed and we were unable to recover it. 00:27:25.915 [2024-07-24 18:22:18.678929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.915 [2024-07-24 18:22:18.678958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.915 qpair failed and we were unable to recover it. 00:27:25.915 [2024-07-24 18:22:18.679212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.915 [2024-07-24 18:22:18.679241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.915 qpair failed and we were unable to recover it. 00:27:25.915 [2024-07-24 18:22:18.679368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.915 [2024-07-24 18:22:18.679398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.915 qpair failed and we were unable to recover it. 00:27:25.915 [2024-07-24 18:22:18.679526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.915 [2024-07-24 18:22:18.679558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.915 qpair failed and we were unable to recover it. 00:27:25.915 [2024-07-24 18:22:18.679752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.915 [2024-07-24 18:22:18.679782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.915 qpair failed and we were unable to recover it. 00:27:25.915 [2024-07-24 18:22:18.679981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.915 [2024-07-24 18:22:18.680011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.915 qpair failed and we were unable to recover it. 00:27:25.915 [2024-07-24 18:22:18.680232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.915 [2024-07-24 18:22:18.680262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.915 qpair failed and we were unable to recover it. 
00:27:25.915 [2024-07-24 18:22:18.680443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.915 [2024-07-24 18:22:18.680453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.915 qpair failed and we were unable to recover it. 00:27:25.915 [2024-07-24 18:22:18.680556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.915 [2024-07-24 18:22:18.680566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.915 qpair failed and we were unable to recover it. 00:27:25.915 [2024-07-24 18:22:18.680716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.915 [2024-07-24 18:22:18.680726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.915 qpair failed and we were unable to recover it. 00:27:25.915 [2024-07-24 18:22:18.680812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.915 [2024-07-24 18:22:18.680821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.915 qpair failed and we were unable to recover it. 00:27:25.915 [2024-07-24 18:22:18.680901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.915 [2024-07-24 18:22:18.680911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.915 qpair failed and we were unable to recover it. 00:27:25.915 [2024-07-24 18:22:18.681048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.915 [2024-07-24 18:22:18.681057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.915 qpair failed and we were unable to recover it. 00:27:25.915 [2024-07-24 18:22:18.681209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.915 [2024-07-24 18:22:18.681219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.916 qpair failed and we were unable to recover it. 00:27:25.916 [2024-07-24 18:22:18.681290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.916 [2024-07-24 18:22:18.681300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.916 qpair failed and we were unable to recover it. 00:27:25.916 [2024-07-24 18:22:18.681401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.916 [2024-07-24 18:22:18.681411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.916 qpair failed and we were unable to recover it. 00:27:25.916 [2024-07-24 18:22:18.681578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.916 [2024-07-24 18:22:18.681609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.916 qpair failed and we were unable to recover it. 
00:27:25.916 [2024-07-24 18:22:18.681750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.916 [2024-07-24 18:22:18.681781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.916 qpair failed and we were unable to recover it. 00:27:25.916 [2024-07-24 18:22:18.681976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.916 [2024-07-24 18:22:18.682006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.916 qpair failed and we were unable to recover it. 00:27:25.916 [2024-07-24 18:22:18.682195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.916 [2024-07-24 18:22:18.682204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.916 qpair failed and we were unable to recover it. 00:27:25.916 [2024-07-24 18:22:18.682290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.916 [2024-07-24 18:22:18.682300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.916 qpair failed and we were unable to recover it. 00:27:25.916 [2024-07-24 18:22:18.682441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.916 [2024-07-24 18:22:18.682451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.916 qpair failed and we were unable to recover it. 00:27:25.916 [2024-07-24 18:22:18.682533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.916 [2024-07-24 18:22:18.682544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.916 qpair failed and we were unable to recover it. 00:27:25.916 [2024-07-24 18:22:18.682701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.916 [2024-07-24 18:22:18.682711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.916 qpair failed and we were unable to recover it. 00:27:25.916 [2024-07-24 18:22:18.682818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.916 [2024-07-24 18:22:18.682828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.916 qpair failed and we were unable to recover it. 00:27:25.916 [2024-07-24 18:22:18.682918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.916 [2024-07-24 18:22:18.682928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.916 qpair failed and we were unable to recover it. 00:27:25.916 [2024-07-24 18:22:18.683071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.916 [2024-07-24 18:22:18.683097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.916 qpair failed and we were unable to recover it. 
00:27:25.916 [2024-07-24 18:22:18.683225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.916 [2024-07-24 18:22:18.683255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.916 qpair failed and we were unable to recover it. 00:27:25.916 [2024-07-24 18:22:18.683386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.916 [2024-07-24 18:22:18.683426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.916 qpair failed and we were unable to recover it. 00:27:25.916 [2024-07-24 18:22:18.683636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.916 [2024-07-24 18:22:18.683667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.916 qpair failed and we were unable to recover it. 00:27:25.916 [2024-07-24 18:22:18.683799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.916 [2024-07-24 18:22:18.683829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.916 qpair failed and we were unable to recover it. 00:27:25.916 [2024-07-24 18:22:18.684109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.916 [2024-07-24 18:22:18.684138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.916 qpair failed and we were unable to recover it. 00:27:25.916 [2024-07-24 18:22:18.684251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.916 [2024-07-24 18:22:18.684281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.916 qpair failed and we were unable to recover it. 00:27:25.916 [2024-07-24 18:22:18.684501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.916 [2024-07-24 18:22:18.684532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.916 qpair failed and we were unable to recover it. 00:27:25.916 [2024-07-24 18:22:18.684659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.916 [2024-07-24 18:22:18.684688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.916 qpair failed and we were unable to recover it. 00:27:25.916 [2024-07-24 18:22:18.684879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.916 [2024-07-24 18:22:18.684909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.916 qpair failed and we were unable to recover it. 00:27:25.916 [2024-07-24 18:22:18.685094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.916 [2024-07-24 18:22:18.685104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.916 qpair failed and we were unable to recover it. 
00:27:25.916 [2024-07-24 18:22:18.685293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.916 [2024-07-24 18:22:18.685323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.916 qpair failed and we were unable to recover it. 00:27:25.916 [2024-07-24 18:22:18.685559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.916 [2024-07-24 18:22:18.685590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.916 qpair failed and we were unable to recover it. 00:27:25.916 [2024-07-24 18:22:18.685837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.916 [2024-07-24 18:22:18.685868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.916 qpair failed and we were unable to recover it. 00:27:25.916 [2024-07-24 18:22:18.685985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.916 [2024-07-24 18:22:18.686014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.916 qpair failed and we were unable to recover it. 00:27:25.916 [2024-07-24 18:22:18.686195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.916 [2024-07-24 18:22:18.686225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.916 qpair failed and we were unable to recover it. 00:27:25.916 [2024-07-24 18:22:18.686507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.916 [2024-07-24 18:22:18.686518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.916 qpair failed and we were unable to recover it. 00:27:25.916 [2024-07-24 18:22:18.686678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.916 [2024-07-24 18:22:18.686688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.916 qpair failed and we were unable to recover it. 00:27:25.916 [2024-07-24 18:22:18.686757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.916 [2024-07-24 18:22:18.686767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.916 qpair failed and we were unable to recover it. 00:27:25.916 [2024-07-24 18:22:18.686933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.916 [2024-07-24 18:22:18.686943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.916 qpair failed and we were unable to recover it. 00:27:25.916 [2024-07-24 18:22:18.687039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.916 [2024-07-24 18:22:18.687049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.916 qpair failed and we were unable to recover it. 
00:27:25.916 [2024-07-24 18:22:18.687218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.916 [2024-07-24 18:22:18.687227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.916 qpair failed and we were unable to recover it. 00:27:25.916 [2024-07-24 18:22:18.687389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.916 [2024-07-24 18:22:18.687419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.916 qpair failed and we were unable to recover it. 00:27:25.916 [2024-07-24 18:22:18.687698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.916 [2024-07-24 18:22:18.687730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.916 qpair failed and we were unable to recover it. 00:27:25.916 [2024-07-24 18:22:18.687935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.916 [2024-07-24 18:22:18.687964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.916 qpair failed and we were unable to recover it. 00:27:25.916 [2024-07-24 18:22:18.688202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.917 [2024-07-24 18:22:18.688212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.917 qpair failed and we were unable to recover it. 00:27:25.917 [2024-07-24 18:22:18.688381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.917 [2024-07-24 18:22:18.688391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.917 qpair failed and we were unable to recover it. 00:27:25.917 [2024-07-24 18:22:18.688551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.917 [2024-07-24 18:22:18.688581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.917 qpair failed and we were unable to recover it. 00:27:25.917 [2024-07-24 18:22:18.688823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.917 [2024-07-24 18:22:18.688853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.917 qpair failed and we were unable to recover it. 00:27:25.917 [2024-07-24 18:22:18.688985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.917 [2024-07-24 18:22:18.689015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.917 qpair failed and we were unable to recover it. 00:27:25.917 [2024-07-24 18:22:18.689211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.917 [2024-07-24 18:22:18.689242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.917 qpair failed and we were unable to recover it. 
00:27:25.917 [2024-07-24 18:22:18.689455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.917 [2024-07-24 18:22:18.689465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.917 qpair failed and we were unable to recover it. 00:27:25.917 [2024-07-24 18:22:18.689609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.917 [2024-07-24 18:22:18.689640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.917 qpair failed and we were unable to recover it. 00:27:25.917 [2024-07-24 18:22:18.689790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.917 [2024-07-24 18:22:18.689820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.917 qpair failed and we were unable to recover it. 00:27:25.917 [2024-07-24 18:22:18.690005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.917 [2024-07-24 18:22:18.690035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.917 qpair failed and we were unable to recover it. 00:27:25.917 [2024-07-24 18:22:18.690254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.917 [2024-07-24 18:22:18.690284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.917 qpair failed and we were unable to recover it. 00:27:25.917 [2024-07-24 18:22:18.690510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.917 [2024-07-24 18:22:18.690542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.917 qpair failed and we were unable to recover it. 00:27:25.917 [2024-07-24 18:22:18.690748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.917 [2024-07-24 18:22:18.690778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.917 qpair failed and we were unable to recover it. 00:27:25.917 [2024-07-24 18:22:18.690992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.917 [2024-07-24 18:22:18.691022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.917 qpair failed and we were unable to recover it. 00:27:25.917 [2024-07-24 18:22:18.691220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.917 [2024-07-24 18:22:18.691250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.917 qpair failed and we were unable to recover it. 00:27:25.917 [2024-07-24 18:22:18.691434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.917 [2024-07-24 18:22:18.691465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.917 qpair failed and we were unable to recover it. 
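The errno = 111 in these entries is ECONNREFUSED on Linux: the TCP connection attempt to 10.0.0.2 on port 4420 (the IANA-assigned NVMe/TCP port) is being actively refused, typically via a TCP RST or an ICMP port-unreachable, which usually means no NVMe-oF target is listening there at that moment. Note also that the tqpair address changes below from 0x7f783c000b90 to 0x7f7844000b90, apparently a freshly allocated qpair retrying the same target. A minimal, self-contained C sketch (not SPDK code; only the address and port are copied from the log, everything else is illustrative) that reproduces the same failure mode:

/*
 * Standalone demonstration of the failure reported by posix_sock_create():
 * a TCP connect() to a reachable host with no listener on the port fails
 * with errno = 111 (ECONNREFUSED). Build with: cc -o refuse refuse.c
 */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in addr;
    int fd = socket(AF_INET, SOCK_STREAM, 0);

    if (fd < 0) {
        perror("socket");
        return 1;
    }

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                     /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);  /* target address from the log */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* Reachable host with nothing listening: errno is 111 (ECONNREFUSED).
         * An unreachable host would instead yield ETIMEDOUT or EHOSTUNREACH. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}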
00:27:25.917 [2024-07-24 18:22:18.691761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.917 [2024-07-24 18:22:18.691797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420
00:27:25.917 qpair failed and we were unable to recover it.
00:27:25.919 [the same triplet for tqpair=0x7f7844000b90 repeats through 2024-07-24 18:22:18.710141 with only the timestamps varying; duplicate entries elided]
00:27:25.919 [2024-07-24 18:22:18.710325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.919 [2024-07-24 18:22:18.710340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.920 qpair failed and we were unable to recover it. 00:27:25.920 [2024-07-24 18:22:18.710432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.920 [2024-07-24 18:22:18.710447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.920 qpair failed and we were unable to recover it. 00:27:25.920 [2024-07-24 18:22:18.710672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.920 [2024-07-24 18:22:18.710687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.920 qpair failed and we were unable to recover it. 00:27:25.920 [2024-07-24 18:22:18.710854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.920 [2024-07-24 18:22:18.710869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.920 qpair failed and we were unable to recover it. 00:27:25.920 [2024-07-24 18:22:18.711083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.920 [2024-07-24 18:22:18.711097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.920 qpair failed and we were unable to recover it. 00:27:25.920 [2024-07-24 18:22:18.711270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.920 [2024-07-24 18:22:18.711301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.920 qpair failed and we were unable to recover it. 00:27:25.920 [2024-07-24 18:22:18.711488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.920 [2024-07-24 18:22:18.711525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.920 qpair failed and we were unable to recover it. 00:27:25.920 [2024-07-24 18:22:18.711723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.920 [2024-07-24 18:22:18.711752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.920 qpair failed and we were unable to recover it. 00:27:25.920 [2024-07-24 18:22:18.712013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.920 [2024-07-24 18:22:18.712043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.920 qpair failed and we were unable to recover it. 00:27:25.920 [2024-07-24 18:22:18.712229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.920 [2024-07-24 18:22:18.712259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.920 qpair failed and we were unable to recover it. 
00:27:25.920 [2024-07-24 18:22:18.712461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.920 [2024-07-24 18:22:18.712497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.920 qpair failed and we were unable to recover it. 00:27:25.920 [2024-07-24 18:22:18.712691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.920 [2024-07-24 18:22:18.712706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.920 qpair failed and we were unable to recover it. 00:27:25.920 [2024-07-24 18:22:18.712812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.920 [2024-07-24 18:22:18.712826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.920 qpair failed and we were unable to recover it. 00:27:25.920 [2024-07-24 18:22:18.712998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.920 [2024-07-24 18:22:18.713013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.920 qpair failed and we were unable to recover it. 00:27:25.920 [2024-07-24 18:22:18.713199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.920 [2024-07-24 18:22:18.713229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.920 qpair failed and we were unable to recover it. 00:27:25.920 [2024-07-24 18:22:18.713416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.920 [2024-07-24 18:22:18.713446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.920 qpair failed and we were unable to recover it. 00:27:25.920 [2024-07-24 18:22:18.713714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.920 [2024-07-24 18:22:18.713745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.920 qpair failed and we were unable to recover it. 00:27:25.920 [2024-07-24 18:22:18.713930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.920 [2024-07-24 18:22:18.713960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.920 qpair failed and we were unable to recover it. 00:27:25.920 [2024-07-24 18:22:18.714101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.920 [2024-07-24 18:22:18.714116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.920 qpair failed and we were unable to recover it. 00:27:25.920 [2024-07-24 18:22:18.714220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.920 [2024-07-24 18:22:18.714235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.920 qpair failed and we were unable to recover it. 
00:27:25.920 [2024-07-24 18:22:18.714406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.920 [2024-07-24 18:22:18.714421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.920 qpair failed and we were unable to recover it. 00:27:25.920 [2024-07-24 18:22:18.714591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.920 [2024-07-24 18:22:18.714623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.920 qpair failed and we were unable to recover it. 00:27:25.920 [2024-07-24 18:22:18.714905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.920 [2024-07-24 18:22:18.714935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.920 qpair failed and we were unable to recover it. 00:27:25.920 [2024-07-24 18:22:18.715133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.920 [2024-07-24 18:22:18.715148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.920 qpair failed and we were unable to recover it. 00:27:25.920 [2024-07-24 18:22:18.715295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.920 [2024-07-24 18:22:18.715327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.920 qpair failed and we were unable to recover it. 00:27:25.920 [2024-07-24 18:22:18.715514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.920 [2024-07-24 18:22:18.715544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.920 qpair failed and we were unable to recover it. 00:27:25.920 [2024-07-24 18:22:18.715670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.920 [2024-07-24 18:22:18.715699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.920 qpair failed and we were unable to recover it. 00:27:25.920 [2024-07-24 18:22:18.715953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.920 [2024-07-24 18:22:18.715983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.920 qpair failed and we were unable to recover it. 00:27:25.920 [2024-07-24 18:22:18.716241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.920 [2024-07-24 18:22:18.716271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.920 qpair failed and we were unable to recover it. 00:27:25.920 [2024-07-24 18:22:18.716482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.920 [2024-07-24 18:22:18.716519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.920 qpair failed and we were unable to recover it. 
00:27:25.920 [2024-07-24 18:22:18.716724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.920 [2024-07-24 18:22:18.716753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.920 qpair failed and we were unable to recover it. 00:27:25.920 [2024-07-24 18:22:18.716881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.920 [2024-07-24 18:22:18.716917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.920 qpair failed and we were unable to recover it. 00:27:25.920 [2024-07-24 18:22:18.717192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.920 [2024-07-24 18:22:18.717221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.920 qpair failed and we were unable to recover it. 00:27:25.920 [2024-07-24 18:22:18.717505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.920 [2024-07-24 18:22:18.717535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.920 qpair failed and we were unable to recover it. 00:27:25.920 [2024-07-24 18:22:18.717788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.920 [2024-07-24 18:22:18.717818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.920 qpair failed and we were unable to recover it. 00:27:25.920 [2024-07-24 18:22:18.717999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.920 [2024-07-24 18:22:18.718028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.920 qpair failed and we were unable to recover it. 00:27:25.920 [2024-07-24 18:22:18.718177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.920 [2024-07-24 18:22:18.718206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.920 qpair failed and we were unable to recover it. 00:27:25.920 [2024-07-24 18:22:18.718456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.920 [2024-07-24 18:22:18.718471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.920 qpair failed and we were unable to recover it. 00:27:25.920 [2024-07-24 18:22:18.718649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.921 [2024-07-24 18:22:18.718664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.921 qpair failed and we were unable to recover it. 00:27:25.921 [2024-07-24 18:22:18.718756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.921 [2024-07-24 18:22:18.718771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.921 qpair failed and we were unable to recover it. 
00:27:25.921 [2024-07-24 18:22:18.718952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.921 [2024-07-24 18:22:18.718966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.921 qpair failed and we were unable to recover it. 00:27:25.921 [2024-07-24 18:22:18.719068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.921 [2024-07-24 18:22:18.719098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.921 qpair failed and we were unable to recover it. 00:27:25.921 [2024-07-24 18:22:18.719314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.921 [2024-07-24 18:22:18.719343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.921 qpair failed and we were unable to recover it. 00:27:25.921 [2024-07-24 18:22:18.719474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.921 [2024-07-24 18:22:18.719513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.921 qpair failed and we were unable to recover it. 00:27:25.921 [2024-07-24 18:22:18.719647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.921 [2024-07-24 18:22:18.719677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.921 qpair failed and we were unable to recover it. 00:27:25.921 [2024-07-24 18:22:18.719895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.921 [2024-07-24 18:22:18.719925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.921 qpair failed and we were unable to recover it. 00:27:25.921 [2024-07-24 18:22:18.720045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.921 [2024-07-24 18:22:18.720075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.921 qpair failed and we were unable to recover it. 00:27:25.921 [2024-07-24 18:22:18.720299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.921 [2024-07-24 18:22:18.720329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.921 qpair failed and we were unable to recover it. 00:27:25.921 [2024-07-24 18:22:18.720597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.921 [2024-07-24 18:22:18.720612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.921 qpair failed and we were unable to recover it. 00:27:25.921 [2024-07-24 18:22:18.720720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.921 [2024-07-24 18:22:18.720734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.921 qpair failed and we were unable to recover it. 
00:27:25.921 [2024-07-24 18:22:18.720976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.921 [2024-07-24 18:22:18.721005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.921 qpair failed and we were unable to recover it. 00:27:25.921 [2024-07-24 18:22:18.721131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.921 [2024-07-24 18:22:18.721161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.921 qpair failed and we were unable to recover it. 00:27:25.921 [2024-07-24 18:22:18.721359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.921 [2024-07-24 18:22:18.721388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.921 qpair failed and we were unable to recover it. 00:27:25.921 [2024-07-24 18:22:18.721586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.921 [2024-07-24 18:22:18.721601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.921 qpair failed and we were unable to recover it. 00:27:25.921 [2024-07-24 18:22:18.721779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.921 [2024-07-24 18:22:18.721808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.921 qpair failed and we were unable to recover it. 00:27:25.921 [2024-07-24 18:22:18.722062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.921 [2024-07-24 18:22:18.722091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.921 qpair failed and we were unable to recover it. 00:27:25.921 [2024-07-24 18:22:18.722279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.921 [2024-07-24 18:22:18.722309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.921 qpair failed and we were unable to recover it. 00:27:25.921 [2024-07-24 18:22:18.722522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.921 [2024-07-24 18:22:18.722554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.921 qpair failed and we were unable to recover it. 00:27:25.921 [2024-07-24 18:22:18.722703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.921 [2024-07-24 18:22:18.722746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.921 qpair failed and we were unable to recover it. 00:27:25.921 [2024-07-24 18:22:18.722910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.921 [2024-07-24 18:22:18.722924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.921 qpair failed and we were unable to recover it. 
00:27:25.921 [2024-07-24 18:22:18.723098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.921 [2024-07-24 18:22:18.723127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.921 qpair failed and we were unable to recover it. 00:27:25.921 [2024-07-24 18:22:18.723399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.921 [2024-07-24 18:22:18.723429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.921 qpair failed and we were unable to recover it. 00:27:25.921 [2024-07-24 18:22:18.723573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.921 [2024-07-24 18:22:18.723604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.921 qpair failed and we were unable to recover it. 00:27:25.921 [2024-07-24 18:22:18.723804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.921 [2024-07-24 18:22:18.723833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.921 qpair failed and we were unable to recover it. 00:27:25.921 [2024-07-24 18:22:18.723964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.921 [2024-07-24 18:22:18.723994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.921 qpair failed and we were unable to recover it. 00:27:25.921 [2024-07-24 18:22:18.724246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.921 [2024-07-24 18:22:18.724276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.921 qpair failed and we were unable to recover it. 00:27:25.921 [2024-07-24 18:22:18.724475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.921 [2024-07-24 18:22:18.724514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.921 qpair failed and we were unable to recover it. 00:27:25.921 [2024-07-24 18:22:18.724652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.921 [2024-07-24 18:22:18.724681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.921 qpair failed and we were unable to recover it. 00:27:25.921 [2024-07-24 18:22:18.724978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.921 [2024-07-24 18:22:18.725007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.921 qpair failed and we were unable to recover it. 00:27:25.921 [2024-07-24 18:22:18.725192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.921 [2024-07-24 18:22:18.725221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.921 qpair failed and we were unable to recover it. 
00:27:25.921 [2024-07-24 18:22:18.725476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.921 [2024-07-24 18:22:18.725515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.921 qpair failed and we were unable to recover it. 00:27:25.921 [2024-07-24 18:22:18.725704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.921 [2024-07-24 18:22:18.725739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.921 qpair failed and we were unable to recover it. 00:27:25.921 [2024-07-24 18:22:18.725873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.921 [2024-07-24 18:22:18.725902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.921 qpair failed and we were unable to recover it. 00:27:25.921 [2024-07-24 18:22:18.726054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.921 [2024-07-24 18:22:18.726084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.921 qpair failed and we were unable to recover it. 00:27:25.921 [2024-07-24 18:22:18.726222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.921 [2024-07-24 18:22:18.726251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.921 qpair failed and we were unable to recover it. 00:27:25.921 [2024-07-24 18:22:18.726362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.921 [2024-07-24 18:22:18.726392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.921 qpair failed and we were unable to recover it. 00:27:25.922 [2024-07-24 18:22:18.726598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.922 [2024-07-24 18:22:18.726628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.922 qpair failed and we were unable to recover it. 00:27:25.922 [2024-07-24 18:22:18.726879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.922 [2024-07-24 18:22:18.726908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.922 qpair failed and we were unable to recover it. 00:27:25.922 [2024-07-24 18:22:18.727029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.922 [2024-07-24 18:22:18.727059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.922 qpair failed and we were unable to recover it. 00:27:25.922 [2024-07-24 18:22:18.727197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.922 [2024-07-24 18:22:18.727226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.922 qpair failed and we were unable to recover it. 
00:27:25.922 [2024-07-24 18:22:18.727352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.922 [2024-07-24 18:22:18.727382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.922 qpair failed and we were unable to recover it. 00:27:25.922 [2024-07-24 18:22:18.727661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.922 [2024-07-24 18:22:18.727692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.922 qpair failed and we were unable to recover it. 00:27:25.922 [2024-07-24 18:22:18.727890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.922 [2024-07-24 18:22:18.727919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.922 qpair failed and we were unable to recover it. 00:27:25.922 [2024-07-24 18:22:18.728189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.922 [2024-07-24 18:22:18.728228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.922 qpair failed and we were unable to recover it. 00:27:25.922 [2024-07-24 18:22:18.728450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.922 [2024-07-24 18:22:18.728464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.922 qpair failed and we were unable to recover it. 00:27:25.922 [2024-07-24 18:22:18.728569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.922 [2024-07-24 18:22:18.728584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.922 qpair failed and we were unable to recover it. 00:27:25.922 [2024-07-24 18:22:18.728816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.922 [2024-07-24 18:22:18.728845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.922 qpair failed and we were unable to recover it. 00:27:25.922 [2024-07-24 18:22:18.728972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.922 [2024-07-24 18:22:18.729001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.922 qpair failed and we were unable to recover it. 00:27:25.922 [2024-07-24 18:22:18.729262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.922 [2024-07-24 18:22:18.729291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.922 qpair failed and we were unable to recover it. 00:27:25.922 [2024-07-24 18:22:18.729483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.922 [2024-07-24 18:22:18.729502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.922 qpair failed and we were unable to recover it. 
00:27:25.922 [2024-07-24 18:22:18.729673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.922 [2024-07-24 18:22:18.729687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.922 qpair failed and we were unable to recover it. 00:27:25.922 [2024-07-24 18:22:18.729930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.922 [2024-07-24 18:22:18.729960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.922 qpair failed and we were unable to recover it. 00:27:25.922 [2024-07-24 18:22:18.730102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.922 [2024-07-24 18:22:18.730132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.922 qpair failed and we were unable to recover it. 00:27:25.922 [2024-07-24 18:22:18.730406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.922 [2024-07-24 18:22:18.730436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.922 qpair failed and we were unable to recover it. 00:27:25.922 [2024-07-24 18:22:18.730642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.922 [2024-07-24 18:22:18.730672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.922 qpair failed and we were unable to recover it. 00:27:25.922 [2024-07-24 18:22:18.730857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.922 [2024-07-24 18:22:18.730887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.922 qpair failed and we were unable to recover it. 00:27:25.922 [2024-07-24 18:22:18.731089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.922 [2024-07-24 18:22:18.731118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.922 qpair failed and we were unable to recover it. 00:27:25.922 [2024-07-24 18:22:18.731333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.922 [2024-07-24 18:22:18.731348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.922 qpair failed and we were unable to recover it. 00:27:25.922 [2024-07-24 18:22:18.731450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.922 [2024-07-24 18:22:18.731500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.922 qpair failed and we were unable to recover it. 00:27:25.922 [2024-07-24 18:22:18.731762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.922 [2024-07-24 18:22:18.731792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.922 qpair failed and we were unable to recover it. 
00:27:25.922 [2024-07-24 18:22:18.731985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.922 [2024-07-24 18:22:18.732014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.922 qpair failed and we were unable to recover it. 00:27:25.922 [2024-07-24 18:22:18.732207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.922 [2024-07-24 18:22:18.732238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.922 qpair failed and we were unable to recover it. 00:27:25.922 [2024-07-24 18:22:18.732446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.922 [2024-07-24 18:22:18.732475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.922 qpair failed and we were unable to recover it. 00:27:25.922 [2024-07-24 18:22:18.732662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.922 [2024-07-24 18:22:18.732678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.922 qpair failed and we were unable to recover it. 00:27:25.922 [2024-07-24 18:22:18.732835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.922 [2024-07-24 18:22:18.732864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.922 qpair failed and we were unable to recover it. 00:27:25.922 [2024-07-24 18:22:18.733135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.922 [2024-07-24 18:22:18.733165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.922 qpair failed and we were unable to recover it. 00:27:25.922 [2024-07-24 18:22:18.733363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.922 [2024-07-24 18:22:18.733393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.922 qpair failed and we were unable to recover it. 00:27:25.922 [2024-07-24 18:22:18.733538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.922 [2024-07-24 18:22:18.733569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.922 qpair failed and we were unable to recover it. 00:27:25.922 [2024-07-24 18:22:18.733841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.922 [2024-07-24 18:22:18.733871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.922 qpair failed and we were unable to recover it. 00:27:25.922 [2024-07-24 18:22:18.734070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.922 [2024-07-24 18:22:18.734100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.922 qpair failed and we were unable to recover it. 
00:27:25.922 [2024-07-24 18:22:18.734249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.922 [2024-07-24 18:22:18.734263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.922 qpair failed and we were unable to recover it. 00:27:25.922 [2024-07-24 18:22:18.734356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.922 [2024-07-24 18:22:18.734374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.922 qpair failed and we were unable to recover it. 00:27:25.922 [2024-07-24 18:22:18.734607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.922 [2024-07-24 18:22:18.734623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.922 qpair failed and we were unable to recover it. 00:27:25.922 [2024-07-24 18:22:18.734796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.923 [2024-07-24 18:22:18.734811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.923 qpair failed and we were unable to recover it. 00:27:25.923 [2024-07-24 18:22:18.734913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.923 [2024-07-24 18:22:18.734942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.923 qpair failed and we were unable to recover it. 00:27:25.923 [2024-07-24 18:22:18.735086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.923 [2024-07-24 18:22:18.735116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.923 qpair failed and we were unable to recover it. 00:27:25.923 [2024-07-24 18:22:18.735306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.923 [2024-07-24 18:22:18.735336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.923 qpair failed and we were unable to recover it. 00:27:25.923 [2024-07-24 18:22:18.735613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.923 [2024-07-24 18:22:18.735643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.923 qpair failed and we were unable to recover it. 00:27:25.923 [2024-07-24 18:22:18.735837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.923 [2024-07-24 18:22:18.735867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.923 qpair failed and we were unable to recover it. 00:27:25.923 [2024-07-24 18:22:18.736089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.923 [2024-07-24 18:22:18.736119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.923 qpair failed and we were unable to recover it. 
00:27:25.923 [2024-07-24 18:22:18.736262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.923 [2024-07-24 18:22:18.736291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.923 qpair failed and we were unable to recover it. 00:27:25.923 [2024-07-24 18:22:18.736570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.923 [2024-07-24 18:22:18.736586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.923 qpair failed and we were unable to recover it. 00:27:25.923 [2024-07-24 18:22:18.736758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.923 [2024-07-24 18:22:18.736772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.923 qpair failed and we were unable to recover it. 00:27:25.923 [2024-07-24 18:22:18.736861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.923 [2024-07-24 18:22:18.736897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.923 qpair failed and we were unable to recover it. 00:27:25.923 [2024-07-24 18:22:18.737202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.923 [2024-07-24 18:22:18.737232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.923 qpair failed and we were unable to recover it. 00:27:25.923 [2024-07-24 18:22:18.737388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.923 [2024-07-24 18:22:18.737418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.923 qpair failed and we were unable to recover it. 00:27:25.923 [2024-07-24 18:22:18.737573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.923 [2024-07-24 18:22:18.737604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.923 qpair failed and we were unable to recover it. 00:27:25.923 [2024-07-24 18:22:18.737744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.923 [2024-07-24 18:22:18.737774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.923 qpair failed and we were unable to recover it. 00:27:25.923 [2024-07-24 18:22:18.738008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.923 [2024-07-24 18:22:18.738038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.923 qpair failed and we were unable to recover it. 00:27:25.923 [2024-07-24 18:22:18.738223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.923 [2024-07-24 18:22:18.738253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.923 qpair failed and we were unable to recover it. 
00:27:25.923 [2024-07-24 18:22:18.738385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.923 [2024-07-24 18:22:18.738415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.923 qpair failed and we were unable to recover it. 00:27:25.923 [2024-07-24 18:22:18.738600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.923 [2024-07-24 18:22:18.738615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.923 qpair failed and we were unable to recover it. 00:27:25.923 [2024-07-24 18:22:18.738773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.923 [2024-07-24 18:22:18.738788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.923 qpair failed and we were unable to recover it. 00:27:25.923 [2024-07-24 18:22:18.739003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.923 [2024-07-24 18:22:18.739018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.923 qpair failed and we were unable to recover it. 00:27:25.923 [2024-07-24 18:22:18.739172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.923 [2024-07-24 18:22:18.739187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.923 qpair failed and we were unable to recover it. 00:27:25.923 [2024-07-24 18:22:18.739346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.923 [2024-07-24 18:22:18.739361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.923 qpair failed and we were unable to recover it. 00:27:25.923 [2024-07-24 18:22:18.739449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.923 [2024-07-24 18:22:18.739463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.923 qpair failed and we were unable to recover it. 00:27:25.923 [2024-07-24 18:22:18.739621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.923 [2024-07-24 18:22:18.739651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.923 qpair failed and we were unable to recover it. 00:27:25.923 [2024-07-24 18:22:18.739825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.923 [2024-07-24 18:22:18.739894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.923 qpair failed and we were unable to recover it. 00:27:25.923 [2024-07-24 18:22:18.740115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.923 [2024-07-24 18:22:18.740147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.923 qpair failed and we were unable to recover it. 
00:27:25.923 [2024-07-24 18:22:18.740335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.923 [2024-07-24 18:22:18.740345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.923 qpair failed and we were unable to recover it. 00:27:25.923 [2024-07-24 18:22:18.740534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.923 [2024-07-24 18:22:18.740566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.923 qpair failed and we were unable to recover it. 00:27:25.923 [2024-07-24 18:22:18.740751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.923 [2024-07-24 18:22:18.740781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.923 qpair failed and we were unable to recover it. 00:27:25.923 [2024-07-24 18:22:18.741047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.923 [2024-07-24 18:22:18.741077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.923 qpair failed and we were unable to recover it. 00:27:25.923 [2024-07-24 18:22:18.741311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.924 [2024-07-24 18:22:18.741321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.924 qpair failed and we were unable to recover it. 00:27:25.924 [2024-07-24 18:22:18.741422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.924 [2024-07-24 18:22:18.741452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.924 qpair failed and we were unable to recover it. 00:27:25.924 [2024-07-24 18:22:18.741725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.924 [2024-07-24 18:22:18.741756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.924 qpair failed and we were unable to recover it. 00:27:25.924 [2024-07-24 18:22:18.742032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.924 [2024-07-24 18:22:18.742062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.924 qpair failed and we were unable to recover it. 00:27:25.924 [2024-07-24 18:22:18.742264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.924 [2024-07-24 18:22:18.742295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.924 qpair failed and we were unable to recover it. 00:27:25.924 [2024-07-24 18:22:18.742496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.924 [2024-07-24 18:22:18.742506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.924 qpair failed and we were unable to recover it. 
00:27:25.924 [2024-07-24 18:22:18.742742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.924 [2024-07-24 18:22:18.742772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.924 qpair failed and we were unable to recover it. 00:27:25.924 [2024-07-24 18:22:18.742923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.924 [2024-07-24 18:22:18.742962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.924 qpair failed and we were unable to recover it. 00:27:25.924 [2024-07-24 18:22:18.743097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.924 [2024-07-24 18:22:18.743127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.924 qpair failed and we were unable to recover it. 00:27:25.924 [2024-07-24 18:22:18.743255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.924 [2024-07-24 18:22:18.743286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.924 qpair failed and we were unable to recover it. 00:27:25.924 [2024-07-24 18:22:18.743486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.924 [2024-07-24 18:22:18.743528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.924 qpair failed and we were unable to recover it. 00:27:25.924 [2024-07-24 18:22:18.743672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.924 [2024-07-24 18:22:18.743702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.924 qpair failed and we were unable to recover it. 00:27:25.924 [2024-07-24 18:22:18.743975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.924 [2024-07-24 18:22:18.744006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.924 qpair failed and we were unable to recover it. 00:27:25.924 [2024-07-24 18:22:18.744204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.924 [2024-07-24 18:22:18.744235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.924 qpair failed and we were unable to recover it. 00:27:25.924 [2024-07-24 18:22:18.744415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.924 [2024-07-24 18:22:18.744425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.924 qpair failed and we were unable to recover it. 00:27:25.924 [2024-07-24 18:22:18.744523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.924 [2024-07-24 18:22:18.744533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.924 qpair failed and we were unable to recover it. 
00:27:25.924 [2024-07-24 18:22:18.744681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.924 [2024-07-24 18:22:18.744690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.924 qpair failed and we were unable to recover it. 00:27:25.924 [2024-07-24 18:22:18.744879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.924 [2024-07-24 18:22:18.744909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.924 qpair failed and we were unable to recover it. 00:27:25.924 [2024-07-24 18:22:18.745114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.924 [2024-07-24 18:22:18.745144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.924 qpair failed and we were unable to recover it. 00:27:25.924 [2024-07-24 18:22:18.745399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.924 [2024-07-24 18:22:18.745440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.924 qpair failed and we were unable to recover it. 00:27:25.924 [2024-07-24 18:22:18.745584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.924 [2024-07-24 18:22:18.745615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.924 qpair failed and we were unable to recover it. 00:27:25.924 [2024-07-24 18:22:18.745823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.924 [2024-07-24 18:22:18.745854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.924 qpair failed and we were unable to recover it. 00:27:25.924 [2024-07-24 18:22:18.746058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.924 [2024-07-24 18:22:18.746088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.924 qpair failed and we were unable to recover it. 00:27:25.924 [2024-07-24 18:22:18.746400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.924 [2024-07-24 18:22:18.746429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.924 qpair failed and we were unable to recover it. 00:27:25.924 [2024-07-24 18:22:18.746698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.924 [2024-07-24 18:22:18.746729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.924 qpair failed and we were unable to recover it. 00:27:25.924 [2024-07-24 18:22:18.746920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.924 [2024-07-24 18:22:18.746950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.924 qpair failed and we were unable to recover it. 
00:27:25.924 [2024-07-24 18:22:18.747201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.924 [2024-07-24 18:22:18.747230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.924 qpair failed and we were unable to recover it. 00:27:25.924 [2024-07-24 18:22:18.747415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.924 [2024-07-24 18:22:18.747424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.924 qpair failed and we were unable to recover it. 00:27:25.924 [2024-07-24 18:22:18.747517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.924 [2024-07-24 18:22:18.747527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.924 qpair failed and we were unable to recover it. 00:27:25.924 [2024-07-24 18:22:18.747701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.924 [2024-07-24 18:22:18.747710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.924 qpair failed and we were unable to recover it. 00:27:25.924 [2024-07-24 18:22:18.747867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.924 [2024-07-24 18:22:18.747877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.924 qpair failed and we were unable to recover it. 00:27:25.924 [2024-07-24 18:22:18.747989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.924 [2024-07-24 18:22:18.748019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.924 qpair failed and we were unable to recover it. 00:27:25.924 [2024-07-24 18:22:18.748140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.924 [2024-07-24 18:22:18.748170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.924 qpair failed and we were unable to recover it. 00:27:25.924 [2024-07-24 18:22:18.748354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.924 [2024-07-24 18:22:18.748385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.924 qpair failed and we were unable to recover it. 00:27:25.924 [2024-07-24 18:22:18.748512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.924 [2024-07-24 18:22:18.748542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.924 qpair failed and we were unable to recover it. 00:27:25.924 [2024-07-24 18:22:18.748748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.924 [2024-07-24 18:22:18.748758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.924 qpair failed and we were unable to recover it. 
00:27:25.924 [2024-07-24 18:22:18.748913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.924 [2024-07-24 18:22:18.748923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.924 qpair failed and we were unable to recover it. 00:27:25.924 [2024-07-24 18:22:18.749006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.924 [2024-07-24 18:22:18.749016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.924 qpair failed and we were unable to recover it. 00:27:25.924 [2024-07-24 18:22:18.749167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.925 [2024-07-24 18:22:18.749197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.925 qpair failed and we were unable to recover it. 00:27:25.925 [2024-07-24 18:22:18.749330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.925 [2024-07-24 18:22:18.749360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.925 qpair failed and we were unable to recover it. 00:27:25.925 [2024-07-24 18:22:18.749563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.925 [2024-07-24 18:22:18.749594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.925 qpair failed and we were unable to recover it. 00:27:25.925 [2024-07-24 18:22:18.749791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.925 [2024-07-24 18:22:18.749821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.925 qpair failed and we were unable to recover it. 00:27:25.925 [2024-07-24 18:22:18.750011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.925 [2024-07-24 18:22:18.750041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.925 qpair failed and we were unable to recover it. 00:27:25.925 [2024-07-24 18:22:18.750257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.925 [2024-07-24 18:22:18.750287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.925 qpair failed and we were unable to recover it. 00:27:25.925 [2024-07-24 18:22:18.750431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.925 [2024-07-24 18:22:18.750441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.925 qpair failed and we were unable to recover it. 00:27:25.925 [2024-07-24 18:22:18.750674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.925 [2024-07-24 18:22:18.750705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.925 qpair failed and we were unable to recover it. 
00:27:25.925 [2024-07-24 18:22:18.750978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.925 [2024-07-24 18:22:18.751008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.925 qpair failed and we were unable to recover it. 00:27:25.925 [2024-07-24 18:22:18.751129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.925 [2024-07-24 18:22:18.751164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.925 qpair failed and we were unable to recover it. 00:27:25.925 [2024-07-24 18:22:18.751346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.925 [2024-07-24 18:22:18.751377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.925 qpair failed and we were unable to recover it. 00:27:25.925 [2024-07-24 18:22:18.751580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.925 [2024-07-24 18:22:18.751590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.925 qpair failed and we were unable to recover it. 00:27:25.925 [2024-07-24 18:22:18.751661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.925 [2024-07-24 18:22:18.751672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.925 qpair failed and we were unable to recover it. 00:27:25.925 [2024-07-24 18:22:18.751783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.925 [2024-07-24 18:22:18.751812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.925 qpair failed and we were unable to recover it. 00:27:25.925 [2024-07-24 18:22:18.752005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.925 [2024-07-24 18:22:18.752035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.925 qpair failed and we were unable to recover it. 00:27:25.925 [2024-07-24 18:22:18.752177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.925 [2024-07-24 18:22:18.752207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.925 qpair failed and we were unable to recover it. 00:27:25.925 [2024-07-24 18:22:18.752488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.925 [2024-07-24 18:22:18.752530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.925 qpair failed and we were unable to recover it. 00:27:25.925 [2024-07-24 18:22:18.752677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.925 [2024-07-24 18:22:18.752708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.925 qpair failed and we were unable to recover it. 
00:27:25.925 [2024-07-24 18:22:18.752980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.925 [2024-07-24 18:22:18.753011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.925 qpair failed and we were unable to recover it. 00:27:25.925 [2024-07-24 18:22:18.753246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.925 [2024-07-24 18:22:18.753276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.925 qpair failed and we were unable to recover it. 00:27:25.925 [2024-07-24 18:22:18.753470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.925 [2024-07-24 18:22:18.753479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.925 qpair failed and we were unable to recover it. 00:27:25.925 [2024-07-24 18:22:18.753698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.925 [2024-07-24 18:22:18.753709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.925 qpair failed and we were unable to recover it. 00:27:25.925 [2024-07-24 18:22:18.753835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.925 [2024-07-24 18:22:18.753865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.925 qpair failed and we were unable to recover it. 00:27:25.925 [2024-07-24 18:22:18.754092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.925 [2024-07-24 18:22:18.754122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.925 qpair failed and we were unable to recover it. 00:27:25.925 [2024-07-24 18:22:18.754320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.925 [2024-07-24 18:22:18.754349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.925 qpair failed and we were unable to recover it. 00:27:25.925 [2024-07-24 18:22:18.754488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.925 [2024-07-24 18:22:18.754502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.925 qpair failed and we were unable to recover it. 00:27:25.925 [2024-07-24 18:22:18.754692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.925 [2024-07-24 18:22:18.754722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.925 qpair failed and we were unable to recover it. 00:27:25.925 [2024-07-24 18:22:18.755003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.925 [2024-07-24 18:22:18.755033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.925 qpair failed and we were unable to recover it. 
00:27:25.925 [2024-07-24 18:22:18.755290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.925 [2024-07-24 18:22:18.755321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.925 qpair failed and we were unable to recover it. 00:27:25.925 [2024-07-24 18:22:18.755451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.925 [2024-07-24 18:22:18.755461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.925 qpair failed and we were unable to recover it. 00:27:25.925 [2024-07-24 18:22:18.755566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.925 [2024-07-24 18:22:18.755576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.925 qpair failed and we were unable to recover it. 00:27:25.925 [2024-07-24 18:22:18.755677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.925 [2024-07-24 18:22:18.755687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.925 qpair failed and we were unable to recover it. 00:27:25.925 [2024-07-24 18:22:18.755828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.925 [2024-07-24 18:22:18.755838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.925 qpair failed and we were unable to recover it. 00:27:25.925 [2024-07-24 18:22:18.755917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.925 [2024-07-24 18:22:18.755927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.925 qpair failed and we were unable to recover it. 00:27:25.925 [2024-07-24 18:22:18.756005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.925 [2024-07-24 18:22:18.756015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.925 qpair failed and we were unable to recover it. 00:27:25.925 [2024-07-24 18:22:18.756099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.925 [2024-07-24 18:22:18.756108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.925 qpair failed and we were unable to recover it. 00:27:25.925 [2024-07-24 18:22:18.756205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.925 [2024-07-24 18:22:18.756215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.925 qpair failed and we were unable to recover it. 00:27:25.925 [2024-07-24 18:22:18.756300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.926 [2024-07-24 18:22:18.756310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.926 qpair failed and we were unable to recover it. 
00:27:25.926 [2024-07-24 18:22:18.756400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.926 [2024-07-24 18:22:18.756410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.926 qpair failed and we were unable to recover it. 00:27:25.926 [2024-07-24 18:22:18.756504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.926 [2024-07-24 18:22:18.756514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.926 qpair failed and we were unable to recover it. 00:27:25.926 [2024-07-24 18:22:18.756604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.926 [2024-07-24 18:22:18.756614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.926 qpair failed and we were unable to recover it. 00:27:25.926 [2024-07-24 18:22:18.756825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.926 [2024-07-24 18:22:18.756855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.926 qpair failed and we were unable to recover it. 00:27:25.926 [2024-07-24 18:22:18.757041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.926 [2024-07-24 18:22:18.757071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.926 qpair failed and we were unable to recover it. 00:27:25.926 [2024-07-24 18:22:18.757272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.926 [2024-07-24 18:22:18.757301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.926 qpair failed and we were unable to recover it. 00:27:25.926 [2024-07-24 18:22:18.757512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.926 [2024-07-24 18:22:18.757538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.926 qpair failed and we were unable to recover it. 00:27:25.926 [2024-07-24 18:22:18.757693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.926 [2024-07-24 18:22:18.757703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.926 qpair failed and we were unable to recover it. 00:27:25.926 [2024-07-24 18:22:18.757781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.926 [2024-07-24 18:22:18.757791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.926 qpair failed and we were unable to recover it. 00:27:25.926 [2024-07-24 18:22:18.757944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.926 [2024-07-24 18:22:18.757954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.926 qpair failed and we were unable to recover it. 
00:27:25.926 [2024-07-24 18:22:18.758115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.926 [2024-07-24 18:22:18.758145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.926 qpair failed and we were unable to recover it. 00:27:25.926 [2024-07-24 18:22:18.758274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.926 [2024-07-24 18:22:18.758309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.926 qpair failed and we were unable to recover it. 00:27:25.926 [2024-07-24 18:22:18.758526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.926 [2024-07-24 18:22:18.758559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.926 qpair failed and we were unable to recover it. 00:27:25.926 [2024-07-24 18:22:18.758750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.926 [2024-07-24 18:22:18.758760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.926 qpair failed and we were unable to recover it. 00:27:25.926 [2024-07-24 18:22:18.758822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.926 [2024-07-24 18:22:18.758831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.926 qpair failed and we were unable to recover it. 00:27:25.926 [2024-07-24 18:22:18.758920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.926 [2024-07-24 18:22:18.758950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.926 qpair failed and we were unable to recover it. 00:27:25.926 [2024-07-24 18:22:18.759161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.926 [2024-07-24 18:22:18.759191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.926 qpair failed and we were unable to recover it. 00:27:25.926 [2024-07-24 18:22:18.759409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.926 [2024-07-24 18:22:18.759440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.926 qpair failed and we were unable to recover it. 00:27:25.926 [2024-07-24 18:22:18.759537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.926 [2024-07-24 18:22:18.759548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.926 qpair failed and we were unable to recover it. 00:27:25.926 [2024-07-24 18:22:18.759685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.926 [2024-07-24 18:22:18.759695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.926 qpair failed and we were unable to recover it. 
00:27:25.926 [2024-07-24 18:22:18.759830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.926 [2024-07-24 18:22:18.759839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.926 qpair failed and we were unable to recover it. 00:27:25.926 [2024-07-24 18:22:18.760054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.926 [2024-07-24 18:22:18.760084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.926 qpair failed and we were unable to recover it. 00:27:25.926 [2024-07-24 18:22:18.760258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.926 [2024-07-24 18:22:18.760288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.926 qpair failed and we were unable to recover it. 00:27:25.926 [2024-07-24 18:22:18.760538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.926 [2024-07-24 18:22:18.760548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.926 qpair failed and we were unable to recover it. 00:27:25.926 [2024-07-24 18:22:18.760606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.926 [2024-07-24 18:22:18.760616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.926 qpair failed and we were unable to recover it. 00:27:25.926 [2024-07-24 18:22:18.760754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.926 [2024-07-24 18:22:18.760764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.926 qpair failed and we were unable to recover it. 00:27:25.926 [2024-07-24 18:22:18.760867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.926 [2024-07-24 18:22:18.760877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.926 qpair failed and we were unable to recover it. 00:27:25.926 [2024-07-24 18:22:18.761088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.926 [2024-07-24 18:22:18.761118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.926 qpair failed and we were unable to recover it. 00:27:25.926 [2024-07-24 18:22:18.761343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.926 [2024-07-24 18:22:18.761374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.926 qpair failed and we were unable to recover it. 00:27:25.926 [2024-07-24 18:22:18.761556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.926 [2024-07-24 18:22:18.761589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.926 qpair failed and we were unable to recover it. 
00:27:25.926 [2024-07-24 18:22:18.761823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.926 [2024-07-24 18:22:18.761833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.926 qpair failed and we were unable to recover it. 00:27:25.926 [2024-07-24 18:22:18.761916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.926 [2024-07-24 18:22:18.761926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.926 qpair failed and we were unable to recover it. 00:27:25.926 [2024-07-24 18:22:18.762122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.926 [2024-07-24 18:22:18.762151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.926 qpair failed and we were unable to recover it. 00:27:25.926 [2024-07-24 18:22:18.762378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.926 [2024-07-24 18:22:18.762408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.926 qpair failed and we were unable to recover it. 00:27:25.926 [2024-07-24 18:22:18.762603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.926 [2024-07-24 18:22:18.762640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.926 qpair failed and we were unable to recover it. 00:27:25.926 [2024-07-24 18:22:18.762732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.926 [2024-07-24 18:22:18.762742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.926 qpair failed and we were unable to recover it. 00:27:25.926 [2024-07-24 18:22:18.762977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.926 [2024-07-24 18:22:18.763007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.926 qpair failed and we were unable to recover it. 00:27:25.926 [2024-07-24 18:22:18.763282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.927 [2024-07-24 18:22:18.763313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.927 qpair failed and we were unable to recover it. 00:27:25.927 [2024-07-24 18:22:18.763515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.927 [2024-07-24 18:22:18.763547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.927 qpair failed and we were unable to recover it. 00:27:25.927 [2024-07-24 18:22:18.763743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.927 [2024-07-24 18:22:18.763774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.927 qpair failed and we were unable to recover it. 
00:27:25.927 [2024-07-24 18:22:18.763902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.927 [2024-07-24 18:22:18.763932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.927 qpair failed and we were unable to recover it. 00:27:25.927 [2024-07-24 18:22:18.764209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.927 [2024-07-24 18:22:18.764244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.927 qpair failed and we were unable to recover it. 00:27:25.927 [2024-07-24 18:22:18.764310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.927 [2024-07-24 18:22:18.764319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.927 qpair failed and we were unable to recover it. 00:27:25.927 [2024-07-24 18:22:18.764485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.927 [2024-07-24 18:22:18.764499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.927 qpair failed and we were unable to recover it. 00:27:25.927 [2024-07-24 18:22:18.764696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.927 [2024-07-24 18:22:18.764726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.927 qpair failed and we were unable to recover it. 00:27:25.927 [2024-07-24 18:22:18.764996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.927 [2024-07-24 18:22:18.765025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.927 qpair failed and we were unable to recover it. 00:27:25.927 [2024-07-24 18:22:18.765299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.927 [2024-07-24 18:22:18.765329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.927 qpair failed and we were unable to recover it. 00:27:25.927 [2024-07-24 18:22:18.765528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.927 [2024-07-24 18:22:18.765571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.927 qpair failed and we were unable to recover it. 00:27:25.927 [2024-07-24 18:22:18.765729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.927 [2024-07-24 18:22:18.765739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.927 qpair failed and we were unable to recover it. 00:27:25.927 [2024-07-24 18:22:18.765940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.927 [2024-07-24 18:22:18.765970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.927 qpair failed and we were unable to recover it. 
00:27:25.927 [2024-07-24 18:22:18.766243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.927 [2024-07-24 18:22:18.766274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.927 qpair failed and we were unable to recover it. 00:27:25.927 [2024-07-24 18:22:18.766456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.927 [2024-07-24 18:22:18.766522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.927 qpair failed and we were unable to recover it. 00:27:25.927 [2024-07-24 18:22:18.766673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.927 [2024-07-24 18:22:18.766703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.927 qpair failed and we were unable to recover it. 00:27:25.927 [2024-07-24 18:22:18.766908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.927 [2024-07-24 18:22:18.766938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.927 qpair failed and we were unable to recover it. 00:27:25.927 [2024-07-24 18:22:18.767209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.927 [2024-07-24 18:22:18.767240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.927 qpair failed and we were unable to recover it. 00:27:25.927 [2024-07-24 18:22:18.767480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.927 [2024-07-24 18:22:18.767523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.927 qpair failed and we were unable to recover it. 00:27:25.927 [2024-07-24 18:22:18.767660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.927 [2024-07-24 18:22:18.767690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.927 qpair failed and we were unable to recover it. 00:27:25.927 [2024-07-24 18:22:18.767871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.927 [2024-07-24 18:22:18.767901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.927 qpair failed and we were unable to recover it. 00:27:25.927 [2024-07-24 18:22:18.768033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.927 [2024-07-24 18:22:18.768064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.927 qpair failed and we were unable to recover it. 00:27:25.927 [2024-07-24 18:22:18.768272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.927 [2024-07-24 18:22:18.768302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.927 qpair failed and we were unable to recover it. 
00:27:25.927 [2024-07-24 18:22:18.768505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.927 [2024-07-24 18:22:18.768536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.927 qpair failed and we were unable to recover it. 00:27:25.927 [2024-07-24 18:22:18.768676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.927 [2024-07-24 18:22:18.768686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.927 qpair failed and we were unable to recover it. 00:27:25.927 [2024-07-24 18:22:18.768833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.927 [2024-07-24 18:22:18.768843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.927 qpair failed and we were unable to recover it. 00:27:25.927 [2024-07-24 18:22:18.769033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.927 [2024-07-24 18:22:18.769042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.927 qpair failed and we were unable to recover it. 00:27:25.927 [2024-07-24 18:22:18.769193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.927 [2024-07-24 18:22:18.769203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.927 qpair failed and we were unable to recover it. 00:27:25.927 [2024-07-24 18:22:18.769382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.927 [2024-07-24 18:22:18.769392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.927 qpair failed and we were unable to recover it. 00:27:25.927 [2024-07-24 18:22:18.769462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.927 [2024-07-24 18:22:18.769472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.927 qpair failed and we were unable to recover it. 00:27:25.927 [2024-07-24 18:22:18.769578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.927 [2024-07-24 18:22:18.769588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.927 qpair failed and we were unable to recover it. 00:27:25.927 [2024-07-24 18:22:18.769758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.927 [2024-07-24 18:22:18.769768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.927 qpair failed and we were unable to recover it. 00:27:25.927 [2024-07-24 18:22:18.769908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.927 [2024-07-24 18:22:18.769918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.927 qpair failed and we were unable to recover it. 
00:27:25.927 [2024-07-24 18:22:18.770055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.927 [2024-07-24 18:22:18.770065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.927 qpair failed and we were unable to recover it. 00:27:25.927 [2024-07-24 18:22:18.770134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.927 [2024-07-24 18:22:18.770144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.927 qpair failed and we were unable to recover it. 00:27:25.927 [2024-07-24 18:22:18.770243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.927 [2024-07-24 18:22:18.770252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.927 qpair failed and we were unable to recover it. 00:27:25.927 [2024-07-24 18:22:18.770418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.927 [2024-07-24 18:22:18.770448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.927 qpair failed and we were unable to recover it. 00:27:25.927 [2024-07-24 18:22:18.770659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.927 [2024-07-24 18:22:18.770690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.927 qpair failed and we were unable to recover it. 00:27:25.928 [2024-07-24 18:22:18.770823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.928 [2024-07-24 18:22:18.770853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.928 qpair failed and we were unable to recover it. 00:27:25.928 [2024-07-24 18:22:18.771131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.928 [2024-07-24 18:22:18.771161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.928 qpair failed and we were unable to recover it. 00:27:25.928 [2024-07-24 18:22:18.771299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.928 [2024-07-24 18:22:18.771329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.928 qpair failed and we were unable to recover it. 00:27:25.928 [2024-07-24 18:22:18.771575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.928 [2024-07-24 18:22:18.771644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:25.928 qpair failed and we were unable to recover it. 00:27:25.928 [2024-07-24 18:22:18.771790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.928 [2024-07-24 18:22:18.771823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:25.928 qpair failed and we were unable to recover it. 
00:27:25.928 [2024-07-24 18:22:18.771966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.928 [2024-07-24 18:22:18.771997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:25.928 qpair failed and we were unable to recover it. 00:27:25.928 [2024-07-24 18:22:18.772135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.928 [2024-07-24 18:22:18.772165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:25.928 qpair failed and we were unable to recover it. 00:27:25.928 [2024-07-24 18:22:18.772361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.928 [2024-07-24 18:22:18.772391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:25.928 qpair failed and we were unable to recover it. 00:27:25.928 [2024-07-24 18:22:18.772646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.928 [2024-07-24 18:22:18.772679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:25.928 qpair failed and we were unable to recover it. 00:27:25.928 [2024-07-24 18:22:18.772824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.928 [2024-07-24 18:22:18.772854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:25.928 qpair failed and we were unable to recover it. 00:27:25.928 [2024-07-24 18:22:18.773074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.928 [2024-07-24 18:22:18.773104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:25.928 qpair failed and we were unable to recover it. 00:27:25.928 [2024-07-24 18:22:18.773302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.928 [2024-07-24 18:22:18.773332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:25.928 qpair failed and we were unable to recover it. 00:27:25.928 [2024-07-24 18:22:18.773557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.928 [2024-07-24 18:22:18.773588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:25.928 qpair failed and we were unable to recover it. 00:27:25.928 [2024-07-24 18:22:18.773781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.928 [2024-07-24 18:22:18.773811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:25.928 qpair failed and we were unable to recover it. 00:27:25.928 [2024-07-24 18:22:18.773958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.928 [2024-07-24 18:22:18.773988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:25.928 qpair failed and we were unable to recover it. 
00:27:25.928 [2024-07-24 18:22:18.774183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.928 [2024-07-24 18:22:18.774213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:25.928 qpair failed and we were unable to recover it. 00:27:25.928 [2024-07-24 18:22:18.774342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.928 [2024-07-24 18:22:18.774372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:25.928 qpair failed and we were unable to recover it. 00:27:25.928 [2024-07-24 18:22:18.774656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.928 [2024-07-24 18:22:18.774689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:25.928 qpair failed and we were unable to recover it. 00:27:25.928 [2024-07-24 18:22:18.774837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.928 [2024-07-24 18:22:18.774866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:25.928 qpair failed and we were unable to recover it. 00:27:25.928 [2024-07-24 18:22:18.775052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.928 [2024-07-24 18:22:18.775082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:25.928 qpair failed and we were unable to recover it. 00:27:25.928 [2024-07-24 18:22:18.775280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.928 [2024-07-24 18:22:18.775309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:25.928 qpair failed and we were unable to recover it. 00:27:25.928 [2024-07-24 18:22:18.775529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.928 [2024-07-24 18:22:18.775560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:25.928 qpair failed and we were unable to recover it. 00:27:25.928 [2024-07-24 18:22:18.775707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.928 [2024-07-24 18:22:18.775742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:25.928 qpair failed and we were unable to recover it. 00:27:25.928 [2024-07-24 18:22:18.775906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.928 [2024-07-24 18:22:18.775920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:25.928 qpair failed and we were unable to recover it. 00:27:25.928 [2024-07-24 18:22:18.776085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.928 [2024-07-24 18:22:18.776114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:25.928 qpair failed and we were unable to recover it. 
00:27:25.928 [2024-07-24 18:22:18.776334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.928 [2024-07-24 18:22:18.776363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.928 qpair failed and we were unable to recover it.
00:27:25.928 [2024-07-24 18:22:18.776551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.928 [2024-07-24 18:22:18.776582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.928 qpair failed and we were unable to recover it.
00:27:25.928 [2024-07-24 18:22:18.776777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.928 [2024-07-24 18:22:18.776792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.928 qpair failed and we were unable to recover it.
00:27:25.928 [2024-07-24 18:22:18.776983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.928 [2024-07-24 18:22:18.777013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.928 qpair failed and we were unable to recover it.
00:27:25.928 [2024-07-24 18:22:18.777151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.928 [2024-07-24 18:22:18.777180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.928 qpair failed and we were unable to recover it.
00:27:25.928 [2024-07-24 18:22:18.777478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.928 [2024-07-24 18:22:18.777523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.929 qpair failed and we were unable to recover it.
00:27:25.929 [2024-07-24 18:22:18.777721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.929 [2024-07-24 18:22:18.777750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.929 qpair failed and we were unable to recover it.
00:27:25.929 [2024-07-24 18:22:18.777885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.929 [2024-07-24 18:22:18.777914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.929 qpair failed and we were unable to recover it.
00:27:25.929 [2024-07-24 18:22:18.778189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.929 [2024-07-24 18:22:18.778219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.929 qpair failed and we were unable to recover it.
00:27:25.929 [2024-07-24 18:22:18.778344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.929 [2024-07-24 18:22:18.778374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.929 qpair failed and we were unable to recover it.
00:27:25.929 [2024-07-24 18:22:18.778572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.929 [2024-07-24 18:22:18.778615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.929 qpair failed and we were unable to recover it.
00:27:25.929 [2024-07-24 18:22:18.778784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.929 [2024-07-24 18:22:18.778799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.929 qpair failed and we were unable to recover it.
00:27:25.929 [2024-07-24 18:22:18.778903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.929 [2024-07-24 18:22:18.778933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.929 qpair failed and we were unable to recover it.
00:27:25.929 [2024-07-24 18:22:18.779184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.929 [2024-07-24 18:22:18.779213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.929 qpair failed and we were unable to recover it.
00:27:25.929 [2024-07-24 18:22:18.779408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.929 [2024-07-24 18:22:18.779437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.929 qpair failed and we were unable to recover it.
00:27:25.929 [2024-07-24 18:22:18.779565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.929 [2024-07-24 18:22:18.779580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.929 qpair failed and we were unable to recover it.
00:27:25.929 [2024-07-24 18:22:18.779748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.929 [2024-07-24 18:22:18.779763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.929 qpair failed and we were unable to recover it.
00:27:25.929 [2024-07-24 18:22:18.779873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.929 [2024-07-24 18:22:18.779888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.929 qpair failed and we were unable to recover it.
00:27:25.929 [2024-07-24 18:22:18.779993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.929 [2024-07-24 18:22:18.780007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.929 qpair failed and we were unable to recover it.
00:27:25.929 [2024-07-24 18:22:18.780170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.929 [2024-07-24 18:22:18.780185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.929 qpair failed and we were unable to recover it.
00:27:25.929 [2024-07-24 18:22:18.780358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.929 [2024-07-24 18:22:18.780373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.929 qpair failed and we were unable to recover it.
00:27:25.929 [2024-07-24 18:22:18.780575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.929 [2024-07-24 18:22:18.780605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.929 qpair failed and we were unable to recover it.
00:27:25.929 [2024-07-24 18:22:18.780737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.929 [2024-07-24 18:22:18.780767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.929 qpair failed and we were unable to recover it.
00:27:25.929 [2024-07-24 18:22:18.780910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.929 [2024-07-24 18:22:18.780940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.929 qpair failed and we were unable to recover it.
00:27:25.929 [2024-07-24 18:22:18.781085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.929 [2024-07-24 18:22:18.781115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.929 qpair failed and we were unable to recover it.
00:27:25.929 [2024-07-24 18:22:18.781226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.929 [2024-07-24 18:22:18.781255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.929 qpair failed and we were unable to recover it.
00:27:25.929 [2024-07-24 18:22:18.781513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.929 [2024-07-24 18:22:18.781544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.929 qpair failed and we were unable to recover it.
00:27:25.929 [2024-07-24 18:22:18.781662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.929 [2024-07-24 18:22:18.781692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.929 qpair failed and we were unable to recover it.
00:27:25.929 [2024-07-24 18:22:18.781810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.929 [2024-07-24 18:22:18.781839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.929 qpair failed and we were unable to recover it.
00:27:25.929 [2024-07-24 18:22:18.782115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.929 [2024-07-24 18:22:18.782145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.929 qpair failed and we were unable to recover it.
00:27:25.929 [2024-07-24 18:22:18.782276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.929 [2024-07-24 18:22:18.782306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.929 qpair failed and we were unable to recover it.
00:27:25.929 [2024-07-24 18:22:18.782556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.929 [2024-07-24 18:22:18.782587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.929 qpair failed and we were unable to recover it.
00:27:25.929 [2024-07-24 18:22:18.782778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.929 [2024-07-24 18:22:18.782813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.929 qpair failed and we were unable to recover it.
00:27:25.929 [2024-07-24 18:22:18.783090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.929 [2024-07-24 18:22:18.783119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.929 qpair failed and we were unable to recover it.
00:27:25.929 [2024-07-24 18:22:18.783396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.929 [2024-07-24 18:22:18.783426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.929 qpair failed and we were unable to recover it.
00:27:25.929 [2024-07-24 18:22:18.783678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.929 [2024-07-24 18:22:18.783708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.929 qpair failed and we were unable to recover it.
00:27:25.929 [2024-07-24 18:22:18.783859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.929 [2024-07-24 18:22:18.783888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.929 qpair failed and we were unable to recover it.
00:27:25.929 [2024-07-24 18:22:18.784083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.929 [2024-07-24 18:22:18.784113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.929 qpair failed and we were unable to recover it.
00:27:25.929 [2024-07-24 18:22:18.784302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.929 [2024-07-24 18:22:18.784346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.929 qpair failed and we were unable to recover it.
00:27:25.929 [2024-07-24 18:22:18.784511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.929 [2024-07-24 18:22:18.784526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.929 qpair failed and we were unable to recover it.
00:27:25.929 [2024-07-24 18:22:18.784611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.929 [2024-07-24 18:22:18.784626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.929 qpair failed and we were unable to recover it.
00:27:25.929 [2024-07-24 18:22:18.784717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.929 [2024-07-24 18:22:18.784732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.929 qpair failed and we were unable to recover it.
00:27:25.929 [2024-07-24 18:22:18.784815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.929 [2024-07-24 18:22:18.784829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.929 qpair failed and we were unable to recover it.
00:27:25.929 [2024-07-24 18:22:18.784994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.930 [2024-07-24 18:22:18.785008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.930 qpair failed and we were unable to recover it.
00:27:25.930 [2024-07-24 18:22:18.785175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.930 [2024-07-24 18:22:18.785205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.930 qpair failed and we were unable to recover it.
00:27:25.930 [2024-07-24 18:22:18.785403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.930 [2024-07-24 18:22:18.785432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.930 qpair failed and we were unable to recover it.
00:27:25.930 [2024-07-24 18:22:18.785729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.930 [2024-07-24 18:22:18.785761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.930 qpair failed and we were unable to recover it.
00:27:25.930 [2024-07-24 18:22:18.785898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.930 [2024-07-24 18:22:18.785927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.930 qpair failed and we were unable to recover it.
00:27:25.930 [2024-07-24 18:22:18.786199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.930 [2024-07-24 18:22:18.786228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.930 qpair failed and we were unable to recover it.
00:27:25.930 [2024-07-24 18:22:18.786428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.930 [2024-07-24 18:22:18.786443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.930 qpair failed and we were unable to recover it.
00:27:25.930 [2024-07-24 18:22:18.786610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.930 [2024-07-24 18:22:18.786626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.930 qpair failed and we were unable to recover it.
00:27:25.930 [2024-07-24 18:22:18.786809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.930 [2024-07-24 18:22:18.786838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.930 qpair failed and we were unable to recover it.
00:27:25.930 [2024-07-24 18:22:18.787044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.930 [2024-07-24 18:22:18.787073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.930 qpair failed and we were unable to recover it.
00:27:25.930 [2024-07-24 18:22:18.787212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.930 [2024-07-24 18:22:18.787242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.930 qpair failed and we were unable to recover it.
00:27:25.930 [2024-07-24 18:22:18.787464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.930 [2024-07-24 18:22:18.787501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.930 qpair failed and we were unable to recover it.
00:27:25.930 [2024-07-24 18:22:18.787711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.930 [2024-07-24 18:22:18.787726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.930 qpair failed and we were unable to recover it.
00:27:25.930 [2024-07-24 18:22:18.787827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.930 [2024-07-24 18:22:18.787857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.930 qpair failed and we were unable to recover it.
00:27:25.930 [2024-07-24 18:22:18.788114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.930 [2024-07-24 18:22:18.788143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.930 qpair failed and we were unable to recover it.
00:27:25.930 [2024-07-24 18:22:18.788283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.930 [2024-07-24 18:22:18.788312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.930 qpair failed and we were unable to recover it.
00:27:25.930 [2024-07-24 18:22:18.788561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.930 [2024-07-24 18:22:18.788576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.930 qpair failed and we were unable to recover it.
00:27:25.930 [2024-07-24 18:22:18.788753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.930 [2024-07-24 18:22:18.788783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.930 qpair failed and we were unable to recover it.
00:27:25.930 [2024-07-24 18:22:18.788969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.930 [2024-07-24 18:22:18.788998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.930 qpair failed and we were unable to recover it.
00:27:25.930 [2024-07-24 18:22:18.789251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.930 [2024-07-24 18:22:18.789280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.930 qpair failed and we were unable to recover it.
00:27:25.930 [2024-07-24 18:22:18.789485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.930 [2024-07-24 18:22:18.789504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.930 qpair failed and we were unable to recover it.
00:27:25.930 [2024-07-24 18:22:18.789734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.930 [2024-07-24 18:22:18.789765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.930 qpair failed and we were unable to recover it.
00:27:25.930 [2024-07-24 18:22:18.789967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.930 [2024-07-24 18:22:18.789997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.930 qpair failed and we were unable to recover it.
00:27:25.930 [2024-07-24 18:22:18.790244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.930 [2024-07-24 18:22:18.790273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.930 qpair failed and we were unable to recover it.
00:27:25.930 [2024-07-24 18:22:18.790394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.930 [2024-07-24 18:22:18.790409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.930 qpair failed and we were unable to recover it.
00:27:25.930 [2024-07-24 18:22:18.790519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.930 [2024-07-24 18:22:18.790534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.930 qpair failed and we were unable to recover it.
00:27:25.930 [2024-07-24 18:22:18.790781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.930 [2024-07-24 18:22:18.790810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.930 qpair failed and we were unable to recover it.
00:27:25.930 [2024-07-24 18:22:18.791029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.930 [2024-07-24 18:22:18.791058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.930 qpair failed and we were unable to recover it.
00:27:25.930 [2024-07-24 18:22:18.791249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.930 [2024-07-24 18:22:18.791278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.930 qpair failed and we were unable to recover it.
00:27:25.930 [2024-07-24 18:22:18.791474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.930 [2024-07-24 18:22:18.791489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.930 qpair failed and we were unable to recover it.
00:27:25.930 [2024-07-24 18:22:18.791605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.930 [2024-07-24 18:22:18.791620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.930 qpair failed and we were unable to recover it.
00:27:25.930 [2024-07-24 18:22:18.791735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.930 [2024-07-24 18:22:18.791750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.930 qpair failed and we were unable to recover it.
00:27:25.930 [2024-07-24 18:22:18.791839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.930 [2024-07-24 18:22:18.791854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.930 qpair failed and we were unable to recover it.
00:27:25.930 [2024-07-24 18:22:18.791961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.930 [2024-07-24 18:22:18.791976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.930 qpair failed and we were unable to recover it.
00:27:25.930 [2024-07-24 18:22:18.792075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.930 [2024-07-24 18:22:18.792089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.930 qpair failed and we were unable to recover it.
00:27:25.930 [2024-07-24 18:22:18.792221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.930 [2024-07-24 18:22:18.792252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.930 qpair failed and we were unable to recover it.
00:27:25.930 [2024-07-24 18:22:18.792434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.930 [2024-07-24 18:22:18.792463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.930 qpair failed and we were unable to recover it.
00:27:25.930 [2024-07-24 18:22:18.792700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.930 [2024-07-24 18:22:18.792768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.930 qpair failed and we were unable to recover it.
00:27:25.930 [2024-07-24 18:22:18.792993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.931 [2024-07-24 18:22:18.793027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.931 qpair failed and we were unable to recover it.
00:27:25.931 [2024-07-24 18:22:18.793320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.931 [2024-07-24 18:22:18.793351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.931 qpair failed and we were unable to recover it.
00:27:25.931 [2024-07-24 18:22:18.793534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.931 [2024-07-24 18:22:18.793567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.931 qpair failed and we were unable to recover it.
00:27:25.931 [2024-07-24 18:22:18.793755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.931 [2024-07-24 18:22:18.793795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.931 qpair failed and we were unable to recover it.
00:27:25.931 [2024-07-24 18:22:18.793979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.931 [2024-07-24 18:22:18.793989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.931 qpair failed and we were unable to recover it.
00:27:25.931 [2024-07-24 18:22:18.794147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.931 [2024-07-24 18:22:18.794177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.931 qpair failed and we were unable to recover it.
00:27:25.931 [2024-07-24 18:22:18.794325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.931 [2024-07-24 18:22:18.794355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.931 qpair failed and we were unable to recover it.
00:27:25.931 [2024-07-24 18:22:18.794609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.931 [2024-07-24 18:22:18.794649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.931 qpair failed and we were unable to recover it.
00:27:25.931 [2024-07-24 18:22:18.794746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.931 [2024-07-24 18:22:18.794755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.931 qpair failed and we were unable to recover it.
00:27:25.931 [2024-07-24 18:22:18.794890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.931 [2024-07-24 18:22:18.794900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.931 qpair failed and we were unable to recover it.
00:27:25.931 [2024-07-24 18:22:18.795059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.931 [2024-07-24 18:22:18.795089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.931 qpair failed and we were unable to recover it.
00:27:25.931 [2024-07-24 18:22:18.795287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.931 [2024-07-24 18:22:18.795316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.931 qpair failed and we were unable to recover it.
00:27:25.931 [2024-07-24 18:22:18.795462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.931 [2024-07-24 18:22:18.795472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.931 qpair failed and we were unable to recover it.
00:27:25.931 [2024-07-24 18:22:18.795561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.931 [2024-07-24 18:22:18.795571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.931 qpair failed and we were unable to recover it.
00:27:25.931 [2024-07-24 18:22:18.795806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.931 [2024-07-24 18:22:18.795835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.931 qpair failed and we were unable to recover it.
00:27:25.931 [2024-07-24 18:22:18.796077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.931 [2024-07-24 18:22:18.796107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.931 qpair failed and we were unable to recover it.
00:27:25.931 [2024-07-24 18:22:18.796370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.931 [2024-07-24 18:22:18.796400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.931 qpair failed and we were unable to recover it.
00:27:25.931 [2024-07-24 18:22:18.796674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.931 [2024-07-24 18:22:18.796705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.931 qpair failed and we were unable to recover it.
00:27:25.931 [2024-07-24 18:22:18.796908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.931 [2024-07-24 18:22:18.796939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.931 qpair failed and we were unable to recover it.
00:27:25.931 [2024-07-24 18:22:18.797221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.931 [2024-07-24 18:22:18.797251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.931 qpair failed and we were unable to recover it.
00:27:25.931 [2024-07-24 18:22:18.797452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.931 [2024-07-24 18:22:18.797482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.931 qpair failed and we were unable to recover it.
00:27:25.931 [2024-07-24 18:22:18.797658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.931 [2024-07-24 18:22:18.797668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.931 qpair failed and we were unable to recover it.
00:27:25.931 [2024-07-24 18:22:18.797918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.931 [2024-07-24 18:22:18.797948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.931 qpair failed and we were unable to recover it.
00:27:25.931 [2024-07-24 18:22:18.798165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.931 [2024-07-24 18:22:18.798195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.931 qpair failed and we were unable to recover it.
00:27:25.931 [2024-07-24 18:22:18.798392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.931 [2024-07-24 18:22:18.798421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.931 qpair failed and we were unable to recover it.
00:27:25.931 [2024-07-24 18:22:18.798634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.931 [2024-07-24 18:22:18.798665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.931 qpair failed and we were unable to recover it.
00:27:25.931 [2024-07-24 18:22:18.798871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.931 [2024-07-24 18:22:18.798901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.931 qpair failed and we were unable to recover it.
00:27:25.931 [2024-07-24 18:22:18.799095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.931 [2024-07-24 18:22:18.799125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.931 qpair failed and we were unable to recover it.
00:27:25.931 [2024-07-24 18:22:18.799330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.931 [2024-07-24 18:22:18.799360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.931 qpair failed and we were unable to recover it.
00:27:25.931 [2024-07-24 18:22:18.799507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.931 [2024-07-24 18:22:18.799538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.931 qpair failed and we were unable to recover it.
00:27:25.931 [2024-07-24 18:22:18.799667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.931 [2024-07-24 18:22:18.799697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.931 qpair failed and we were unable to recover it.
00:27:25.931 [2024-07-24 18:22:18.799924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.931 [2024-07-24 18:22:18.799954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.931 qpair failed and we were unable to recover it.
00:27:25.931 [2024-07-24 18:22:18.800141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.931 [2024-07-24 18:22:18.800176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.931 qpair failed and we were unable to recover it.
00:27:25.931 [2024-07-24 18:22:18.800320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.931 [2024-07-24 18:22:18.800351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.931 qpair failed and we were unable to recover it.
00:27:25.931 [2024-07-24 18:22:18.800504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.931 [2024-07-24 18:22:18.800536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.931 qpair failed and we were unable to recover it.
00:27:25.931 [2024-07-24 18:22:18.800725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.931 [2024-07-24 18:22:18.800755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.931 qpair failed and we were unable to recover it.
00:27:25.931 [2024-07-24 18:22:18.801032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.931 [2024-07-24 18:22:18.801062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.931 qpair failed and we were unable to recover it.
00:27:25.931 [2024-07-24 18:22:18.801260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.931 [2024-07-24 18:22:18.801290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.931 qpair failed and we were unable to recover it.
00:27:25.932 [2024-07-24 18:22:18.801436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.932 [2024-07-24 18:22:18.801466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.932 qpair failed and we were unable to recover it.
00:27:25.932 [2024-07-24 18:22:18.801724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.932 [2024-07-24 18:22:18.801754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.932 qpair failed and we were unable to recover it.
00:27:25.932 [2024-07-24 18:22:18.801890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.932 [2024-07-24 18:22:18.801920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.932 qpair failed and we were unable to recover it.
00:27:25.932 [2024-07-24 18:22:18.802130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.932 [2024-07-24 18:22:18.802160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.932 qpair failed and we were unable to recover it.
00:27:25.932 [2024-07-24 18:22:18.802377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.932 [2024-07-24 18:22:18.802406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.932 qpair failed and we were unable to recover it.
00:27:25.932 [2024-07-24 18:22:18.802602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.932 [2024-07-24 18:22:18.802633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.932 qpair failed and we were unable to recover it.
00:27:25.932 [2024-07-24 18:22:18.802832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.932 [2024-07-24 18:22:18.802861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.932 qpair failed and we were unable to recover it.
00:27:25.932 [2024-07-24 18:22:18.803053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.932 [2024-07-24 18:22:18.803083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.932 qpair failed and we were unable to recover it.
00:27:25.932 [2024-07-24 18:22:18.803290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.932 [2024-07-24 18:22:18.803320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.932 qpair failed and we were unable to recover it.
00:27:25.932 [2024-07-24 18:22:18.803524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.932 [2024-07-24 18:22:18.803555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.932 qpair failed and we were unable to recover it.
00:27:25.932 [2024-07-24 18:22:18.803686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.932 [2024-07-24 18:22:18.803717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.932 qpair failed and we were unable to recover it.
00:27:25.932 [2024-07-24 18:22:18.803978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.932 [2024-07-24 18:22:18.804008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.932 qpair failed and we were unable to recover it.
00:27:25.932 [2024-07-24 18:22:18.804150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.932 [2024-07-24 18:22:18.804180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.932 qpair failed and we were unable to recover it.
00:27:25.932 [2024-07-24 18:22:18.804428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.932 [2024-07-24 18:22:18.804458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.932 qpair failed and we were unable to recover it.
00:27:25.932 [2024-07-24 18:22:18.804590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.932 [2024-07-24 18:22:18.804622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.932 qpair failed and we were unable to recover it.
00:27:25.932 [2024-07-24 18:22:18.804770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.932 [2024-07-24 18:22:18.804799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.932 qpair failed and we were unable to recover it.
00:27:25.932 [2024-07-24 18:22:18.804985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.932 [2024-07-24 18:22:18.805015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.932 qpair failed and we were unable to recover it.
00:27:25.932 [2024-07-24 18:22:18.805227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.932 [2024-07-24 18:22:18.805257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.932 qpair failed and we were unable to recover it.
00:27:25.932 [2024-07-24 18:22:18.805450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.932 [2024-07-24 18:22:18.805480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.932 qpair failed and we were unable to recover it.
00:27:25.932 [2024-07-24 18:22:18.805713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.932 [2024-07-24 18:22:18.805744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.932 qpair failed and we were unable to recover it.
00:27:25.932 [2024-07-24 18:22:18.805996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.932 [2024-07-24 18:22:18.806025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.932 qpair failed and we were unable to recover it.
00:27:25.932 [2024-07-24 18:22:18.806220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.932 [2024-07-24 18:22:18.806250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.932 qpair failed and we were unable to recover it.
00:27:25.932 [2024-07-24 18:22:18.806447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.932 [2024-07-24 18:22:18.806457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.932 qpair failed and we were unable to recover it.
00:27:25.932 [2024-07-24 18:22:18.806545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.932 [2024-07-24 18:22:18.806555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.932 qpair failed and we were unable to recover it.
00:27:25.932 [2024-07-24 18:22:18.806726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.932 [2024-07-24 18:22:18.806736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.932 qpair failed and we were unable to recover it.
00:27:25.932 [2024-07-24 18:22:18.806968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.932 [2024-07-24 18:22:18.806999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.932 qpair failed and we were unable to recover it.
00:27:25.932 [2024-07-24 18:22:18.807209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.932 [2024-07-24 18:22:18.807239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.932 qpair failed and we were unable to recover it.
00:27:25.932 [2024-07-24 18:22:18.807500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.932 [2024-07-24 18:22:18.807530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.932 qpair failed and we were unable to recover it.
00:27:25.932 [2024-07-24 18:22:18.807741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.932 [2024-07-24 18:22:18.807771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.932 qpair failed and we were unable to recover it.
00:27:25.932 [2024-07-24 18:22:18.808066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.932 [2024-07-24 18:22:18.808095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.932 qpair failed and we were unable to recover it.
00:27:25.932 [2024-07-24 18:22:18.808234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.932 [2024-07-24 18:22:18.808264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.932 qpair failed and we were unable to recover it.
00:27:25.932 [2024-07-24 18:22:18.808465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.932 [2024-07-24 18:22:18.808510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.932 qpair failed and we were unable to recover it.
00:27:25.932 [2024-07-24 18:22:18.808767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.932 [2024-07-24 18:22:18.808797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.932 qpair failed and we were unable to recover it.
00:27:25.932 [2024-07-24 18:22:18.808996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.932 [2024-07-24 18:22:18.809026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.932 qpair failed and we were unable to recover it.
00:27:25.932 [2024-07-24 18:22:18.809247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.932 [2024-07-24 18:22:18.809282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.932 qpair failed and we were unable to recover it.
00:27:25.932 [2024-07-24 18:22:18.809398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.932 [2024-07-24 18:22:18.809407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.932 qpair failed and we were unable to recover it.
00:27:25.932 [2024-07-24 18:22:18.809561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.932 [2024-07-24 18:22:18.809590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.932 qpair failed and we were unable to recover it.
00:27:25.932 [2024-07-24 18:22:18.809848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.933 [2024-07-24 18:22:18.809878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.933 qpair failed and we were unable to recover it.
00:27:25.933 [2024-07-24 18:22:18.810092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.933 [2024-07-24 18:22:18.810122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.933 qpair failed and we were unable to recover it.
00:27:25.933 [2024-07-24 18:22:18.810252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.933 [2024-07-24 18:22:18.810282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.933 qpair failed and we were unable to recover it.
00:27:25.933 [2024-07-24 18:22:18.810462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.933 [2024-07-24 18:22:18.810501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.933 qpair failed and we were unable to recover it.
00:27:25.933 [2024-07-24 18:22:18.810701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.933 [2024-07-24 18:22:18.810730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.933 qpair failed and we were unable to recover it.
00:27:25.933 [2024-07-24 18:22:18.810914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.933 [2024-07-24 18:22:18.810944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.933 qpair failed and we were unable to recover it.
00:27:25.933 [2024-07-24 18:22:18.811212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.933 [2024-07-24 18:22:18.811242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.933 qpair failed and we were unable to recover it.
00:27:25.933 [2024-07-24 18:22:18.811426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.933 [2024-07-24 18:22:18.811456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.933 qpair failed and we were unable to recover it.
00:27:25.933 [2024-07-24 18:22:18.811736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.933 [2024-07-24 18:22:18.811746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.933 qpair failed and we were unable to recover it.
00:27:25.933 [2024-07-24 18:22:18.811847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.933 [2024-07-24 18:22:18.811857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.933 qpair failed and we were unable to recover it.
00:27:25.933 [2024-07-24 18:22:18.812063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.933 [2024-07-24 18:22:18.812072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.933 qpair failed and we were unable to recover it.
00:27:25.933 [2024-07-24 18:22:18.812183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.933 [2024-07-24 18:22:18.812192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.933 qpair failed and we were unable to recover it.
00:27:25.933 [2024-07-24 18:22:18.812352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.933 [2024-07-24 18:22:18.812362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.933 qpair failed and we were unable to recover it.
00:27:25.933 [2024-07-24 18:22:18.812439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.933 [2024-07-24 18:22:18.812470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.933 qpair failed and we were unable to recover it.
00:27:25.933 [2024-07-24 18:22:18.812676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.933 [2024-07-24 18:22:18.812707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.933 qpair failed and we were unable to recover it. 00:27:25.933 [2024-07-24 18:22:18.812890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.933 [2024-07-24 18:22:18.812920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.933 qpair failed and we were unable to recover it. 00:27:25.933 [2024-07-24 18:22:18.813121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.933 [2024-07-24 18:22:18.813150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.933 qpair failed and we were unable to recover it. 00:27:25.933 [2024-07-24 18:22:18.813376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.933 [2024-07-24 18:22:18.813406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.933 qpair failed and we were unable to recover it. 00:27:25.933 [2024-07-24 18:22:18.813595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.933 [2024-07-24 18:22:18.813606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.933 qpair failed and we were unable to recover it. 00:27:25.933 [2024-07-24 18:22:18.813706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.933 [2024-07-24 18:22:18.813715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.933 qpair failed and we were unable to recover it. 00:27:25.933 [2024-07-24 18:22:18.813864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.933 [2024-07-24 18:22:18.813875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.933 qpair failed and we were unable to recover it. 00:27:25.933 [2024-07-24 18:22:18.813969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.933 [2024-07-24 18:22:18.813995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.933 qpair failed and we were unable to recover it. 00:27:25.933 [2024-07-24 18:22:18.814190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.933 [2024-07-24 18:22:18.814220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.933 qpair failed and we were unable to recover it. 00:27:25.933 [2024-07-24 18:22:18.814349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.933 [2024-07-24 18:22:18.814379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.933 qpair failed and we were unable to recover it. 
00:27:25.933 [2024-07-24 18:22:18.814517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.933 [2024-07-24 18:22:18.814551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.933 qpair failed and we were unable to recover it. 00:27:25.933 [2024-07-24 18:22:18.814651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.933 [2024-07-24 18:22:18.814662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.933 qpair failed and we were unable to recover it. 00:27:25.933 [2024-07-24 18:22:18.814748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.933 [2024-07-24 18:22:18.814757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.933 qpair failed and we were unable to recover it. 00:27:25.933 [2024-07-24 18:22:18.814855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.933 [2024-07-24 18:22:18.814865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.933 qpair failed and we were unable to recover it. 00:27:25.933 [2024-07-24 18:22:18.815035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.933 [2024-07-24 18:22:18.815045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.933 qpair failed and we were unable to recover it. 00:27:25.933 [2024-07-24 18:22:18.815189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.933 [2024-07-24 18:22:18.815199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.933 qpair failed and we were unable to recover it. 00:27:25.933 [2024-07-24 18:22:18.815296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.934 [2024-07-24 18:22:18.815306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.934 qpair failed and we were unable to recover it. 00:27:25.934 [2024-07-24 18:22:18.815520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.934 [2024-07-24 18:22:18.815530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.934 qpair failed and we were unable to recover it. 00:27:25.934 [2024-07-24 18:22:18.815670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.934 [2024-07-24 18:22:18.815680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.934 qpair failed and we were unable to recover it. 00:27:25.934 [2024-07-24 18:22:18.815863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.934 [2024-07-24 18:22:18.815872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.934 qpair failed and we were unable to recover it. 
00:27:25.934 [2024-07-24 18:22:18.816000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.934 [2024-07-24 18:22:18.816009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.934 qpair failed and we were unable to recover it. 00:27:25.934 [2024-07-24 18:22:18.816095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.934 [2024-07-24 18:22:18.816121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.934 qpair failed and we were unable to recover it. 00:27:25.934 [2024-07-24 18:22:18.816329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.934 [2024-07-24 18:22:18.816359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.934 qpair failed and we were unable to recover it. 00:27:25.934 [2024-07-24 18:22:18.816583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.934 [2024-07-24 18:22:18.816620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.934 qpair failed and we were unable to recover it. 00:27:25.934 [2024-07-24 18:22:18.816786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.934 [2024-07-24 18:22:18.816796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.934 qpair failed and we were unable to recover it. 00:27:25.934 [2024-07-24 18:22:18.816978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.934 [2024-07-24 18:22:18.817007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.934 qpair failed and we were unable to recover it. 00:27:25.934 [2024-07-24 18:22:18.817287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.934 [2024-07-24 18:22:18.817317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.934 qpair failed and we were unable to recover it. 00:27:25.934 [2024-07-24 18:22:18.817581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.934 [2024-07-24 18:22:18.817612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.934 qpair failed and we were unable to recover it. 00:27:25.934 [2024-07-24 18:22:18.817835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.934 [2024-07-24 18:22:18.817866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.934 qpair failed and we were unable to recover it. 00:27:25.934 [2024-07-24 18:22:18.818005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.934 [2024-07-24 18:22:18.818035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.934 qpair failed and we were unable to recover it. 
00:27:25.934 [2024-07-24 18:22:18.818285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.934 [2024-07-24 18:22:18.818315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.934 qpair failed and we were unable to recover it. 00:27:25.934 [2024-07-24 18:22:18.818506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.934 [2024-07-24 18:22:18.818516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.934 qpair failed and we were unable to recover it. 00:27:25.934 [2024-07-24 18:22:18.818601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.934 [2024-07-24 18:22:18.818611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.934 qpair failed and we were unable to recover it. 00:27:25.934 [2024-07-24 18:22:18.818862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.934 [2024-07-24 18:22:18.818892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.934 qpair failed and we were unable to recover it. 00:27:25.934 [2024-07-24 18:22:18.819026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.934 [2024-07-24 18:22:18.819056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.934 qpair failed and we were unable to recover it. 00:27:25.934 [2024-07-24 18:22:18.819184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.934 [2024-07-24 18:22:18.819213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.934 qpair failed and we were unable to recover it. 00:27:25.934 [2024-07-24 18:22:18.819395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.934 [2024-07-24 18:22:18.819425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.934 qpair failed and we were unable to recover it. 00:27:25.934 [2024-07-24 18:22:18.819649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.934 [2024-07-24 18:22:18.819659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.934 qpair failed and we were unable to recover it. 00:27:25.934 [2024-07-24 18:22:18.819798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.934 [2024-07-24 18:22:18.819808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.934 qpair failed and we were unable to recover it. 00:27:25.934 [2024-07-24 18:22:18.819969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.934 [2024-07-24 18:22:18.819999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.934 qpair failed and we were unable to recover it. 
00:27:25.934 [2024-07-24 18:22:18.820114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.934 [2024-07-24 18:22:18.820144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.934 qpair failed and we were unable to recover it. 00:27:25.934 [2024-07-24 18:22:18.820352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.934 [2024-07-24 18:22:18.820382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.934 qpair failed and we were unable to recover it. 00:27:25.934 [2024-07-24 18:22:18.820510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.934 [2024-07-24 18:22:18.820541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.934 qpair failed and we were unable to recover it. 00:27:25.934 [2024-07-24 18:22:18.820664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.934 [2024-07-24 18:22:18.820674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.934 qpair failed and we were unable to recover it. 00:27:25.934 [2024-07-24 18:22:18.820813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.934 [2024-07-24 18:22:18.820823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.934 qpair failed and we were unable to recover it. 00:27:25.934 [2024-07-24 18:22:18.820992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.934 [2024-07-24 18:22:18.821002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.934 qpair failed and we were unable to recover it. 00:27:25.934 [2024-07-24 18:22:18.821203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.934 [2024-07-24 18:22:18.821232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.934 qpair failed and we were unable to recover it. 00:27:25.934 [2024-07-24 18:22:18.821415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.934 [2024-07-24 18:22:18.821444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.934 qpair failed and we were unable to recover it. 00:27:25.934 [2024-07-24 18:22:18.821707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.934 [2024-07-24 18:22:18.821738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.934 qpair failed and we were unable to recover it. 00:27:25.934 [2024-07-24 18:22:18.821929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.934 [2024-07-24 18:22:18.821959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.934 qpair failed and we were unable to recover it. 
00:27:25.934 [2024-07-24 18:22:18.822169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.934 [2024-07-24 18:22:18.822199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.934 qpair failed and we were unable to recover it. 00:27:25.934 [2024-07-24 18:22:18.822471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.934 [2024-07-24 18:22:18.822511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.934 qpair failed and we were unable to recover it. 00:27:25.934 [2024-07-24 18:22:18.822709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.934 [2024-07-24 18:22:18.822749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.934 qpair failed and we were unable to recover it. 00:27:25.934 [2024-07-24 18:22:18.822963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.934 [2024-07-24 18:22:18.822973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.935 qpair failed and we were unable to recover it. 00:27:25.935 [2024-07-24 18:22:18.823180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.935 [2024-07-24 18:22:18.823190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.935 qpair failed and we were unable to recover it. 00:27:25.935 [2024-07-24 18:22:18.823329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.935 [2024-07-24 18:22:18.823339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.935 qpair failed and we were unable to recover it. 00:27:25.935 [2024-07-24 18:22:18.823477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.935 [2024-07-24 18:22:18.823487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.935 qpair failed and we were unable to recover it. 00:27:25.935 [2024-07-24 18:22:18.823640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.935 [2024-07-24 18:22:18.823650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.935 qpair failed and we were unable to recover it. 00:27:25.935 [2024-07-24 18:22:18.823812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.935 [2024-07-24 18:22:18.823841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.935 qpair failed and we were unable to recover it. 00:27:25.935 [2024-07-24 18:22:18.823981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.935 [2024-07-24 18:22:18.824011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.935 qpair failed and we were unable to recover it. 
00:27:25.935 [2024-07-24 18:22:18.824287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.935 [2024-07-24 18:22:18.824316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.935 qpair failed and we were unable to recover it. 00:27:25.935 [2024-07-24 18:22:18.824510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.935 [2024-07-24 18:22:18.824520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.935 qpair failed and we were unable to recover it. 00:27:25.935 [2024-07-24 18:22:18.824605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.935 [2024-07-24 18:22:18.824616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.935 qpair failed and we were unable to recover it. 00:27:25.935 [2024-07-24 18:22:18.824755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.935 [2024-07-24 18:22:18.824767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.935 qpair failed and we were unable to recover it. 00:27:25.935 [2024-07-24 18:22:18.824837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.935 [2024-07-24 18:22:18.824846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.935 qpair failed and we were unable to recover it. 00:27:25.935 [2024-07-24 18:22:18.824947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.935 [2024-07-24 18:22:18.824956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.935 qpair failed and we were unable to recover it. 00:27:25.935 [2024-07-24 18:22:18.825043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.935 [2024-07-24 18:22:18.825053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.935 qpair failed and we were unable to recover it. 00:27:25.935 [2024-07-24 18:22:18.825212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.935 [2024-07-24 18:22:18.825222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.935 qpair failed and we were unable to recover it. 00:27:25.935 [2024-07-24 18:22:18.825376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.935 [2024-07-24 18:22:18.825406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.935 qpair failed and we were unable to recover it. 00:27:25.935 [2024-07-24 18:22:18.825605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.935 [2024-07-24 18:22:18.825635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.935 qpair failed and we were unable to recover it. 
00:27:25.935 [2024-07-24 18:22:18.825818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.935 [2024-07-24 18:22:18.825848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.935 qpair failed and we were unable to recover it. 00:27:25.935 [2024-07-24 18:22:18.825972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.935 [2024-07-24 18:22:18.826002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.935 qpair failed and we were unable to recover it. 00:27:25.935 [2024-07-24 18:22:18.826184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.935 [2024-07-24 18:22:18.826214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.935 qpair failed and we were unable to recover it. 00:27:25.935 [2024-07-24 18:22:18.826397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.935 [2024-07-24 18:22:18.826426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.935 qpair failed and we were unable to recover it. 00:27:25.935 [2024-07-24 18:22:18.826638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.935 [2024-07-24 18:22:18.826669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.935 qpair failed and we were unable to recover it. 00:27:25.935 [2024-07-24 18:22:18.826946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.935 [2024-07-24 18:22:18.826976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.935 qpair failed and we were unable to recover it. 00:27:25.935 [2024-07-24 18:22:18.827171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.935 [2024-07-24 18:22:18.827200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.935 qpair failed and we were unable to recover it. 00:27:25.935 [2024-07-24 18:22:18.827335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.935 [2024-07-24 18:22:18.827365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.935 qpair failed and we were unable to recover it. 00:27:25.935 [2024-07-24 18:22:18.827513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.935 [2024-07-24 18:22:18.827523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.935 qpair failed and we were unable to recover it. 00:27:25.935 [2024-07-24 18:22:18.827749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.935 [2024-07-24 18:22:18.827759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.935 qpair failed and we were unable to recover it. 
00:27:25.935 [2024-07-24 18:22:18.827967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.935 [2024-07-24 18:22:18.827976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.935 qpair failed and we were unable to recover it. 00:27:25.935 [2024-07-24 18:22:18.828210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.935 [2024-07-24 18:22:18.828219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.935 qpair failed and we were unable to recover it. 00:27:25.935 [2024-07-24 18:22:18.828300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.935 [2024-07-24 18:22:18.828320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.935 qpair failed and we were unable to recover it. 00:27:25.935 [2024-07-24 18:22:18.828419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.935 [2024-07-24 18:22:18.828429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.935 qpair failed and we were unable to recover it. 00:27:25.935 [2024-07-24 18:22:18.828528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.935 [2024-07-24 18:22:18.828538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.935 qpair failed and we were unable to recover it. 00:27:25.935 [2024-07-24 18:22:18.828632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.935 [2024-07-24 18:22:18.828642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.935 qpair failed and we were unable to recover it. 00:27:25.935 [2024-07-24 18:22:18.828712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.935 [2024-07-24 18:22:18.828722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.935 qpair failed and we were unable to recover it. 00:27:25.935 [2024-07-24 18:22:18.828876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.935 [2024-07-24 18:22:18.828885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.935 qpair failed and we were unable to recover it. 00:27:25.935 [2024-07-24 18:22:18.829121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.935 [2024-07-24 18:22:18.829130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.935 qpair failed and we were unable to recover it. 00:27:25.935 [2024-07-24 18:22:18.829227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.935 [2024-07-24 18:22:18.829237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.935 qpair failed and we were unable to recover it. 
00:27:25.935 [2024-07-24 18:22:18.829374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.935 [2024-07-24 18:22:18.829400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.935 qpair failed and we were unable to recover it. 00:27:25.935 [2024-07-24 18:22:18.829651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.936 [2024-07-24 18:22:18.829683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.936 qpair failed and we were unable to recover it. 00:27:25.936 [2024-07-24 18:22:18.829825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.936 [2024-07-24 18:22:18.829855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.936 qpair failed and we were unable to recover it. 00:27:25.936 [2024-07-24 18:22:18.829974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.936 [2024-07-24 18:22:18.829984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.936 qpair failed and we were unable to recover it. 00:27:25.936 [2024-07-24 18:22:18.830158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.936 [2024-07-24 18:22:18.830198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.936 qpair failed and we were unable to recover it. 00:27:25.936 [2024-07-24 18:22:18.830386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.936 [2024-07-24 18:22:18.830416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.936 qpair failed and we were unable to recover it. 00:27:25.936 [2024-07-24 18:22:18.830609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.936 [2024-07-24 18:22:18.830640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.936 qpair failed and we were unable to recover it. 00:27:25.936 [2024-07-24 18:22:18.830827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.936 [2024-07-24 18:22:18.830837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.936 qpair failed and we were unable to recover it. 00:27:25.936 [2024-07-24 18:22:18.831073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.936 [2024-07-24 18:22:18.831103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.936 qpair failed and we were unable to recover it. 00:27:25.936 [2024-07-24 18:22:18.831350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.936 [2024-07-24 18:22:18.831380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.936 qpair failed and we were unable to recover it. 
00:27:25.936 [2024-07-24 18:22:18.831536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.936 [2024-07-24 18:22:18.831546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.936 qpair failed and we were unable to recover it. 00:27:25.936 [2024-07-24 18:22:18.831721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.936 [2024-07-24 18:22:18.831730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.936 qpair failed and we were unable to recover it. 00:27:25.936 [2024-07-24 18:22:18.831908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.936 [2024-07-24 18:22:18.831937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.936 qpair failed and we were unable to recover it. 00:27:25.936 [2024-07-24 18:22:18.832109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.936 [2024-07-24 18:22:18.832145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.936 qpair failed and we were unable to recover it. 00:27:25.936 [2024-07-24 18:22:18.832284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.936 [2024-07-24 18:22:18.832314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.936 qpair failed and we were unable to recover it. 00:27:25.936 [2024-07-24 18:22:18.832519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.936 [2024-07-24 18:22:18.832529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.936 qpair failed and we were unable to recover it. 00:27:25.936 [2024-07-24 18:22:18.832705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.936 [2024-07-24 18:22:18.832734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.936 qpair failed and we were unable to recover it. 00:27:25.936 [2024-07-24 18:22:18.832955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.936 [2024-07-24 18:22:18.832985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.936 qpair failed and we were unable to recover it. 00:27:25.936 [2024-07-24 18:22:18.833235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.936 [2024-07-24 18:22:18.833264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.936 qpair failed and we were unable to recover it. 00:27:25.936 [2024-07-24 18:22:18.833408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.936 [2024-07-24 18:22:18.833437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.936 qpair failed and we were unable to recover it. 
00:27:25.936 [2024-07-24 18:22:18.833585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.936 [2024-07-24 18:22:18.833616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.936 qpair failed and we were unable to recover it. 00:27:25.936 [2024-07-24 18:22:18.833801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.936 [2024-07-24 18:22:18.833830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.936 qpair failed and we were unable to recover it. 00:27:25.936 [2024-07-24 18:22:18.834085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.936 [2024-07-24 18:22:18.834115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.936 qpair failed and we were unable to recover it. 00:27:25.936 [2024-07-24 18:22:18.834236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.936 [2024-07-24 18:22:18.834266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.936 qpair failed and we were unable to recover it. 00:27:25.936 [2024-07-24 18:22:18.834451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.936 [2024-07-24 18:22:18.834481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.936 qpair failed and we were unable to recover it. 00:27:25.936 [2024-07-24 18:22:18.834721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.936 [2024-07-24 18:22:18.834751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.936 qpair failed and we were unable to recover it. 00:27:25.936 [2024-07-24 18:22:18.834943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.936 [2024-07-24 18:22:18.834953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.936 qpair failed and we were unable to recover it. 00:27:25.936 [2024-07-24 18:22:18.835039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.936 [2024-07-24 18:22:18.835049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.936 qpair failed and we were unable to recover it. 00:27:25.936 [2024-07-24 18:22:18.835262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.936 [2024-07-24 18:22:18.835291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.936 qpair failed and we were unable to recover it. 00:27:25.936 [2024-07-24 18:22:18.835437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.936 [2024-07-24 18:22:18.835468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.936 qpair failed and we were unable to recover it. 
00:27:25.936 [2024-07-24 18:22:18.835671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.936 [2024-07-24 18:22:18.835713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.936 qpair failed and we were unable to recover it. 00:27:25.936 [2024-07-24 18:22:18.835793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.936 [2024-07-24 18:22:18.835803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.936 qpair failed and we were unable to recover it. 00:27:25.936 [2024-07-24 18:22:18.836008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.936 [2024-07-24 18:22:18.836018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.936 qpair failed and we were unable to recover it. 00:27:25.936 [2024-07-24 18:22:18.836169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.936 [2024-07-24 18:22:18.836179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.936 qpair failed and we were unable to recover it. 00:27:25.936 [2024-07-24 18:22:18.836320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.936 [2024-07-24 18:22:18.836330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.936 qpair failed and we were unable to recover it. 00:27:25.936 [2024-07-24 18:22:18.836480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.936 [2024-07-24 18:22:18.836496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.936 qpair failed and we were unable to recover it. 00:27:25.936 [2024-07-24 18:22:18.836651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.936 [2024-07-24 18:22:18.836661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.936 qpair failed and we were unable to recover it. 00:27:25.936 [2024-07-24 18:22:18.836813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.936 [2024-07-24 18:22:18.836822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.936 qpair failed and we were unable to recover it. 00:27:25.936 [2024-07-24 18:22:18.836969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.936 [2024-07-24 18:22:18.836979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.936 qpair failed and we were unable to recover it. 00:27:25.937 [2024-07-24 18:22:18.837163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.937 [2024-07-24 18:22:18.837194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.937 qpair failed and we were unable to recover it. 
00:27:25.937 [2024-07-24 18:22:18.837332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.937 [2024-07-24 18:22:18.837363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.937 qpair failed and we were unable to recover it. 00:27:25.937 [2024-07-24 18:22:18.837607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.937 [2024-07-24 18:22:18.837648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.937 qpair failed and we were unable to recover it. 00:27:25.937 [2024-07-24 18:22:18.837814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.937 [2024-07-24 18:22:18.837824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.937 qpair failed and we were unable to recover it. 00:27:25.937 [2024-07-24 18:22:18.837897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.937 [2024-07-24 18:22:18.837930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.937 qpair failed and we were unable to recover it. 00:27:25.937 [2024-07-24 18:22:18.838177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.937 [2024-07-24 18:22:18.838207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.937 qpair failed and we were unable to recover it. 00:27:25.937 [2024-07-24 18:22:18.838395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.937 [2024-07-24 18:22:18.838425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.937 qpair failed and we were unable to recover it. 00:27:25.937 [2024-07-24 18:22:18.838572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.937 [2024-07-24 18:22:18.838604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.937 qpair failed and we were unable to recover it. 00:27:25.937 [2024-07-24 18:22:18.838790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.937 [2024-07-24 18:22:18.838820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.937 qpair failed and we were unable to recover it. 00:27:25.937 [2024-07-24 18:22:18.838966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.937 [2024-07-24 18:22:18.838997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.937 qpair failed and we were unable to recover it. 00:27:25.937 [2024-07-24 18:22:18.839270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.937 [2024-07-24 18:22:18.839300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.937 qpair failed and we were unable to recover it. 
00:27:25.937 [2024-07-24 18:22:18.839484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.937 [2024-07-24 18:22:18.839521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.937 qpair failed and we were unable to recover it. 00:27:25.937 [2024-07-24 18:22:18.839674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.937 [2024-07-24 18:22:18.839705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.937 qpair failed and we were unable to recover it. 00:27:25.937 [2024-07-24 18:22:18.839891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.937 [2024-07-24 18:22:18.839921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.937 qpair failed and we were unable to recover it. 00:27:25.937 [2024-07-24 18:22:18.840131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.937 [2024-07-24 18:22:18.840166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.937 qpair failed and we were unable to recover it. 00:27:25.937 [2024-07-24 18:22:18.840366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.937 [2024-07-24 18:22:18.840395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.937 qpair failed and we were unable to recover it. 00:27:25.937 [2024-07-24 18:22:18.840641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.937 [2024-07-24 18:22:18.840671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.937 qpair failed and we were unable to recover it. 00:27:25.937 [2024-07-24 18:22:18.840809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.937 [2024-07-24 18:22:18.840838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.937 qpair failed and we were unable to recover it. 00:27:25.937 [2024-07-24 18:22:18.841034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.937 [2024-07-24 18:22:18.841064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.937 qpair failed and we were unable to recover it. 00:27:25.937 [2024-07-24 18:22:18.841196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.937 [2024-07-24 18:22:18.841225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.937 qpair failed and we were unable to recover it. 00:27:25.937 [2024-07-24 18:22:18.841428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.937 [2024-07-24 18:22:18.841459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.937 qpair failed and we were unable to recover it. 
00:27:25.937 [2024-07-24 18:22:18.841606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.937 [2024-07-24 18:22:18.841616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.937 qpair failed and we were unable to recover it. 00:27:25.937 [2024-07-24 18:22:18.841855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.937 [2024-07-24 18:22:18.841885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.937 qpair failed and we were unable to recover it. 00:27:25.937 [2024-07-24 18:22:18.842004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.937 [2024-07-24 18:22:18.842034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.937 qpair failed and we were unable to recover it. 00:27:25.937 [2024-07-24 18:22:18.842164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.937 [2024-07-24 18:22:18.842194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.937 qpair failed and we were unable to recover it. 00:27:25.937 [2024-07-24 18:22:18.842468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.937 [2024-07-24 18:22:18.842507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.937 qpair failed and we were unable to recover it. 00:27:25.937 [2024-07-24 18:22:18.842672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.937 [2024-07-24 18:22:18.842682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.937 qpair failed and we were unable to recover it. 00:27:25.937 [2024-07-24 18:22:18.842763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.937 [2024-07-24 18:22:18.842806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.937 qpair failed and we were unable to recover it. 00:27:25.937 [2024-07-24 18:22:18.843087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.937 [2024-07-24 18:22:18.843116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.937 qpair failed and we were unable to recover it. 00:27:25.937 [2024-07-24 18:22:18.843360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.937 [2024-07-24 18:22:18.843390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.937 qpair failed and we were unable to recover it. 00:27:25.937 [2024-07-24 18:22:18.843576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.937 [2024-07-24 18:22:18.843607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.937 qpair failed and we were unable to recover it. 
00:27:25.938 [2024-07-24 18:22:18.843827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.938 [2024-07-24 18:22:18.843837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.938 qpair failed and we were unable to recover it. 00:27:25.938 [2024-07-24 18:22:18.844034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.938 [2024-07-24 18:22:18.844064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.938 qpair failed and we were unable to recover it. 00:27:25.938 [2024-07-24 18:22:18.844244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.938 [2024-07-24 18:22:18.844273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.938 qpair failed and we were unable to recover it. 00:27:25.938 [2024-07-24 18:22:18.844465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.938 [2024-07-24 18:22:18.844523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.938 qpair failed and we were unable to recover it. 00:27:25.938 [2024-07-24 18:22:18.844794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.938 [2024-07-24 18:22:18.844824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.938 qpair failed and we were unable to recover it. 00:27:25.938 [2024-07-24 18:22:18.844998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.938 [2024-07-24 18:22:18.845008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.938 qpair failed and we were unable to recover it. 00:27:25.938 [2024-07-24 18:22:18.845164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.938 [2024-07-24 18:22:18.845195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.938 qpair failed and we were unable to recover it. 00:27:25.938 [2024-07-24 18:22:18.845429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.938 [2024-07-24 18:22:18.845458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.938 qpair failed and we were unable to recover it. 00:27:25.938 [2024-07-24 18:22:18.845708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.938 [2024-07-24 18:22:18.845719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.938 qpair failed and we were unable to recover it. 00:27:25.938 [2024-07-24 18:22:18.845822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.938 [2024-07-24 18:22:18.845833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.938 qpair failed and we were unable to recover it. 
00:27:25.938 [2024-07-24 18:22:18.845971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.938 [2024-07-24 18:22:18.846004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.938 qpair failed and we were unable to recover it. 00:27:25.938 [2024-07-24 18:22:18.846207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.938 [2024-07-24 18:22:18.846237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.938 qpair failed and we were unable to recover it. 00:27:25.938 [2024-07-24 18:22:18.846395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.938 [2024-07-24 18:22:18.846425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.938 qpair failed and we were unable to recover it. 00:27:25.938 [2024-07-24 18:22:18.846672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.938 [2024-07-24 18:22:18.846683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.938 qpair failed and we were unable to recover it. 00:27:25.938 [2024-07-24 18:22:18.846766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.938 [2024-07-24 18:22:18.846797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.938 qpair failed and we were unable to recover it. 00:27:25.938 [2024-07-24 18:22:18.847075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.938 [2024-07-24 18:22:18.847105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.938 qpair failed and we were unable to recover it. 00:27:25.938 [2024-07-24 18:22:18.847318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.938 [2024-07-24 18:22:18.847349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.938 qpair failed and we were unable to recover it. 00:27:25.938 [2024-07-24 18:22:18.847543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.938 [2024-07-24 18:22:18.847581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.938 qpair failed and we were unable to recover it. 00:27:25.938 [2024-07-24 18:22:18.847715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.938 [2024-07-24 18:22:18.847745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.938 qpair failed and we were unable to recover it. 00:27:25.938 [2024-07-24 18:22:18.847897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.938 [2024-07-24 18:22:18.847907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.938 qpair failed and we were unable to recover it. 
00:27:25.938 [2024-07-24 18:22:18.848005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.938 [2024-07-24 18:22:18.848015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.938 qpair failed and we were unable to recover it. 00:27:25.938 [2024-07-24 18:22:18.848228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.938 [2024-07-24 18:22:18.848258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.938 qpair failed and we were unable to recover it. 00:27:25.938 [2024-07-24 18:22:18.848401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.938 [2024-07-24 18:22:18.848431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.938 qpair failed and we were unable to recover it. 00:27:25.938 [2024-07-24 18:22:18.848572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.938 [2024-07-24 18:22:18.848596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.938 qpair failed and we were unable to recover it. 00:27:25.938 [2024-07-24 18:22:18.848826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.938 [2024-07-24 18:22:18.848837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.938 qpair failed and we were unable to recover it. 00:27:25.938 [2024-07-24 18:22:18.848921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.938 [2024-07-24 18:22:18.848931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.938 qpair failed and we were unable to recover it. 00:27:25.938 [2024-07-24 18:22:18.849092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.938 [2024-07-24 18:22:18.849135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.938 qpair failed and we were unable to recover it. 00:27:25.938 [2024-07-24 18:22:18.849342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.938 [2024-07-24 18:22:18.849372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.938 qpair failed and we were unable to recover it. 00:27:25.938 [2024-07-24 18:22:18.849484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.938 [2024-07-24 18:22:18.849523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.938 qpair failed and we were unable to recover it. 00:27:25.938 [2024-07-24 18:22:18.849631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.938 [2024-07-24 18:22:18.849660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.938 qpair failed and we were unable to recover it. 
00:27:25.938 [2024-07-24 18:22:18.849828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.938 [2024-07-24 18:22:18.849838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.938 qpair failed and we were unable to recover it. 00:27:25.938 [2024-07-24 18:22:18.849985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.938 [2024-07-24 18:22:18.850015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.938 qpair failed and we were unable to recover it. 00:27:25.938 [2024-07-24 18:22:18.850298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.938 [2024-07-24 18:22:18.850328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.938 qpair failed and we were unable to recover it. 00:27:25.938 [2024-07-24 18:22:18.850514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.938 [2024-07-24 18:22:18.850546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.938 qpair failed and we were unable to recover it. 00:27:25.938 [2024-07-24 18:22:18.850753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.938 [2024-07-24 18:22:18.850783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.938 qpair failed and we were unable to recover it. 00:27:25.938 [2024-07-24 18:22:18.850982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.938 [2024-07-24 18:22:18.851012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.938 qpair failed and we were unable to recover it. 00:27:25.938 [2024-07-24 18:22:18.851161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.938 [2024-07-24 18:22:18.851192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.938 qpair failed and we were unable to recover it. 00:27:25.938 [2024-07-24 18:22:18.851336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.939 [2024-07-24 18:22:18.851366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.939 qpair failed and we were unable to recover it. 00:27:25.939 [2024-07-24 18:22:18.851661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.939 [2024-07-24 18:22:18.851671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.939 qpair failed and we were unable to recover it. 00:27:25.939 [2024-07-24 18:22:18.851829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.939 [2024-07-24 18:22:18.851840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.939 qpair failed and we were unable to recover it. 
00:27:25.939 [2024-07-24 18:22:18.852050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.939 [2024-07-24 18:22:18.852079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.939 qpair failed and we were unable to recover it. 00:27:25.939 [2024-07-24 18:22:18.852333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.939 [2024-07-24 18:22:18.852363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.939 qpair failed and we were unable to recover it. 00:27:25.939 [2024-07-24 18:22:18.852660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.939 [2024-07-24 18:22:18.852671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.939 qpair failed and we were unable to recover it. 00:27:25.939 [2024-07-24 18:22:18.852813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.939 [2024-07-24 18:22:18.852823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.939 qpair failed and we were unable to recover it. 00:27:25.939 [2024-07-24 18:22:18.852973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.939 [2024-07-24 18:22:18.853004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.939 qpair failed and we were unable to recover it. 00:27:25.939 [2024-07-24 18:22:18.853121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.939 [2024-07-24 18:22:18.853151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.939 qpair failed and we were unable to recover it. 00:27:25.939 [2024-07-24 18:22:18.853300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.939 [2024-07-24 18:22:18.853330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.939 qpair failed and we were unable to recover it. 00:27:25.939 [2024-07-24 18:22:18.853572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.939 [2024-07-24 18:22:18.853583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.939 qpair failed and we were unable to recover it. 00:27:25.939 [2024-07-24 18:22:18.853793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.939 [2024-07-24 18:22:18.853824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.939 qpair failed and we were unable to recover it. 00:27:25.939 [2024-07-24 18:22:18.854010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.939 [2024-07-24 18:22:18.854040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.939 qpair failed and we were unable to recover it. 
00:27:25.939 [2024-07-24 18:22:18.854227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.939 [2024-07-24 18:22:18.854257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.939 qpair failed and we were unable to recover it. 00:27:25.939 [2024-07-24 18:22:18.854456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.939 [2024-07-24 18:22:18.854486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.939 qpair failed and we were unable to recover it. 00:27:25.939 [2024-07-24 18:22:18.854732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.939 [2024-07-24 18:22:18.854762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.939 qpair failed and we were unable to recover it. 00:27:25.939 [2024-07-24 18:22:18.854975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.939 [2024-07-24 18:22:18.855005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.939 qpair failed and we were unable to recover it. 00:27:25.939 [2024-07-24 18:22:18.855149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.939 [2024-07-24 18:22:18.855158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.939 qpair failed and we were unable to recover it. 00:27:25.939 [2024-07-24 18:22:18.855314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.939 [2024-07-24 18:22:18.855324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.939 qpair failed and we were unable to recover it. 00:27:25.939 [2024-07-24 18:22:18.855407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.939 [2024-07-24 18:22:18.855417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.939 qpair failed and we were unable to recover it. 00:27:25.939 [2024-07-24 18:22:18.855626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.939 [2024-07-24 18:22:18.855637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.939 qpair failed and we were unable to recover it. 00:27:25.939 [2024-07-24 18:22:18.855755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.939 [2024-07-24 18:22:18.855764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.939 qpair failed and we were unable to recover it. 00:27:25.939 [2024-07-24 18:22:18.855863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.939 [2024-07-24 18:22:18.855872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.939 qpair failed and we were unable to recover it. 
00:27:25.939 [2024-07-24 18:22:18.856009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.939 [2024-07-24 18:22:18.856018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.939 qpair failed and we were unable to recover it. 00:27:25.939 [2024-07-24 18:22:18.856089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.939 [2024-07-24 18:22:18.856099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.939 qpair failed and we were unable to recover it. 00:27:25.939 [2024-07-24 18:22:18.856252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.939 [2024-07-24 18:22:18.856262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.939 qpair failed and we were unable to recover it. 00:27:25.939 [2024-07-24 18:22:18.856421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.939 [2024-07-24 18:22:18.856457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.939 qpair failed and we were unable to recover it. 00:27:25.939 [2024-07-24 18:22:18.856646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.939 [2024-07-24 18:22:18.856677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.939 qpair failed and we were unable to recover it. 00:27:25.939 [2024-07-24 18:22:18.856876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.939 [2024-07-24 18:22:18.856906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.939 qpair failed and we were unable to recover it. 00:27:25.939 [2024-07-24 18:22:18.857046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.939 [2024-07-24 18:22:18.857056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.939 qpair failed and we were unable to recover it. 00:27:25.939 [2024-07-24 18:22:18.857135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.939 [2024-07-24 18:22:18.857144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.939 qpair failed and we were unable to recover it. 00:27:25.939 [2024-07-24 18:22:18.857378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.939 [2024-07-24 18:22:18.857388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.939 qpair failed and we were unable to recover it. 00:27:25.939 [2024-07-24 18:22:18.857487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.939 [2024-07-24 18:22:18.857505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.939 qpair failed and we were unable to recover it. 
00:27:25.939 [2024-07-24 18:22:18.857745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.939 [2024-07-24 18:22:18.857774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.939 qpair failed and we were unable to recover it. 00:27:25.939 [2024-07-24 18:22:18.857888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.939 [2024-07-24 18:22:18.857918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.939 qpair failed and we were unable to recover it. 00:27:25.939 [2024-07-24 18:22:18.858169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.939 [2024-07-24 18:22:18.858198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.939 qpair failed and we were unable to recover it. 00:27:25.939 [2024-07-24 18:22:18.858501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.939 [2024-07-24 18:22:18.858532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.939 qpair failed and we were unable to recover it. 00:27:25.939 [2024-07-24 18:22:18.858763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.939 [2024-07-24 18:22:18.858773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.939 qpair failed and we were unable to recover it. 00:27:25.940 [2024-07-24 18:22:18.858927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.940 [2024-07-24 18:22:18.858937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.940 qpair failed and we were unable to recover it. 00:27:25.940 [2024-07-24 18:22:18.859032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.940 [2024-07-24 18:22:18.859042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.940 qpair failed and we were unable to recover it. 00:27:25.940 [2024-07-24 18:22:18.859145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.940 [2024-07-24 18:22:18.859155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.940 qpair failed and we were unable to recover it. 00:27:25.940 [2024-07-24 18:22:18.859266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.940 [2024-07-24 18:22:18.859296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.940 qpair failed and we were unable to recover it. 00:27:25.940 [2024-07-24 18:22:18.859428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.940 [2024-07-24 18:22:18.859457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.940 qpair failed and we were unable to recover it. 
00:27:25.940 [2024-07-24 18:22:18.859745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.940 [2024-07-24 18:22:18.859776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.940 qpair failed and we were unable to recover it. 00:27:25.940 [2024-07-24 18:22:18.859928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.940 [2024-07-24 18:22:18.859959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.940 qpair failed and we were unable to recover it. 00:27:25.940 [2024-07-24 18:22:18.860215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.940 [2024-07-24 18:22:18.860245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.940 qpair failed and we were unable to recover it. 00:27:25.940 [2024-07-24 18:22:18.860379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.940 [2024-07-24 18:22:18.860409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.940 qpair failed and we were unable to recover it. 00:27:25.940 [2024-07-24 18:22:18.860610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.940 [2024-07-24 18:22:18.860639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.940 qpair failed and we were unable to recover it. 00:27:25.940 [2024-07-24 18:22:18.860797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.940 [2024-07-24 18:22:18.860806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.940 qpair failed and we were unable to recover it. 00:27:25.940 [2024-07-24 18:22:18.861018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.940 [2024-07-24 18:22:18.861048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.940 qpair failed and we were unable to recover it. 00:27:25.940 [2024-07-24 18:22:18.861191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.940 [2024-07-24 18:22:18.861221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.940 qpair failed and we were unable to recover it. 00:27:25.940 [2024-07-24 18:22:18.861352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.940 [2024-07-24 18:22:18.861382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.940 qpair failed and we were unable to recover it. 00:27:25.940 [2024-07-24 18:22:18.861510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.940 [2024-07-24 18:22:18.861541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.940 qpair failed and we were unable to recover it. 
00:27:25.940 [2024-07-24 18:22:18.861719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.940 [2024-07-24 18:22:18.861729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.940 qpair failed and we were unable to recover it. 00:27:25.940 [2024-07-24 18:22:18.861857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.940 [2024-07-24 18:22:18.861867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.940 qpair failed and we were unable to recover it. 00:27:25.940 [2024-07-24 18:22:18.862093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.940 [2024-07-24 18:22:18.862103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.940 qpair failed and we were unable to recover it. 00:27:25.940 [2024-07-24 18:22:18.862355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.940 [2024-07-24 18:22:18.862385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.940 qpair failed and we were unable to recover it. 00:27:25.940 [2024-07-24 18:22:18.862596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.940 [2024-07-24 18:22:18.862626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.940 qpair failed and we were unable to recover it. 00:27:25.940 [2024-07-24 18:22:18.862743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.940 [2024-07-24 18:22:18.862773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.940 qpair failed and we were unable to recover it. 00:27:25.940 [2024-07-24 18:22:18.862971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.940 [2024-07-24 18:22:18.863001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.940 qpair failed and we were unable to recover it. 00:27:25.940 [2024-07-24 18:22:18.863202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.940 [2024-07-24 18:22:18.863232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.940 qpair failed and we were unable to recover it. 00:27:25.940 [2024-07-24 18:22:18.863486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.940 [2024-07-24 18:22:18.863524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.940 qpair failed and we were unable to recover it. 00:27:25.940 [2024-07-24 18:22:18.863642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.940 [2024-07-24 18:22:18.863652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.940 qpair failed and we were unable to recover it. 
00:27:25.940 [2024-07-24 18:22:18.863874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.940 [2024-07-24 18:22:18.863904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.940 qpair failed and we were unable to recover it. 00:27:25.940 [2024-07-24 18:22:18.864033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.940 [2024-07-24 18:22:18.864063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.940 qpair failed and we were unable to recover it. 00:27:25.940 [2024-07-24 18:22:18.864314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.940 [2024-07-24 18:22:18.864344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.940 qpair failed and we were unable to recover it. 00:27:25.940 [2024-07-24 18:22:18.864480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.940 [2024-07-24 18:22:18.864501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.940 qpair failed and we were unable to recover it. 00:27:25.940 [2024-07-24 18:22:18.864639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.940 [2024-07-24 18:22:18.864649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.940 qpair failed and we were unable to recover it. 00:27:25.940 [2024-07-24 18:22:18.864735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.940 [2024-07-24 18:22:18.864744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.940 qpair failed and we were unable to recover it. 00:27:25.940 [2024-07-24 18:22:18.864816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.940 [2024-07-24 18:22:18.864826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.940 qpair failed and we were unable to recover it. 00:27:25.940 [2024-07-24 18:22:18.865064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.940 [2024-07-24 18:22:18.865094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.940 qpair failed and we were unable to recover it. 00:27:25.940 [2024-07-24 18:22:18.865231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.940 [2024-07-24 18:22:18.865261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.940 qpair failed and we were unable to recover it. 00:27:25.940 [2024-07-24 18:22:18.865387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.940 [2024-07-24 18:22:18.865417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.940 qpair failed and we were unable to recover it. 
00:27:25.940 [2024-07-24 18:22:18.865558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.940 [2024-07-24 18:22:18.865568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.940 qpair failed and we were unable to recover it. 00:27:25.940 [2024-07-24 18:22:18.865648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.940 [2024-07-24 18:22:18.865658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.940 qpair failed and we were unable to recover it. 00:27:25.940 [2024-07-24 18:22:18.865811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.941 [2024-07-24 18:22:18.865840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.941 qpair failed and we were unable to recover it. 00:27:25.941 [2024-07-24 18:22:18.866108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.941 [2024-07-24 18:22:18.866138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.941 qpair failed and we were unable to recover it. 00:27:25.941 [2024-07-24 18:22:18.866331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.941 [2024-07-24 18:22:18.866361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.941 qpair failed and we were unable to recover it. 00:27:25.941 [2024-07-24 18:22:18.866488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.941 [2024-07-24 18:22:18.866502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.941 qpair failed and we were unable to recover it. 00:27:25.941 [2024-07-24 18:22:18.866593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.941 [2024-07-24 18:22:18.866603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.941 qpair failed and we were unable to recover it. 00:27:25.941 [2024-07-24 18:22:18.866754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.941 [2024-07-24 18:22:18.866764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.941 qpair failed and we were unable to recover it. 00:27:25.941 [2024-07-24 18:22:18.866836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.941 [2024-07-24 18:22:18.866846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.941 qpair failed and we were unable to recover it. 00:27:25.941 [2024-07-24 18:22:18.866940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.941 [2024-07-24 18:22:18.866950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.941 qpair failed and we were unable to recover it. 
00:27:25.941 [2024-07-24 18:22:18.867095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.941 [2024-07-24 18:22:18.867121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.941 qpair failed and we were unable to recover it. 00:27:25.941 [2024-07-24 18:22:18.867332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.941 [2024-07-24 18:22:18.867362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.941 qpair failed and we were unable to recover it. 00:27:25.941 [2024-07-24 18:22:18.867556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.941 [2024-07-24 18:22:18.867566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.941 qpair failed and we were unable to recover it. 00:27:25.941 [2024-07-24 18:22:18.867863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.941 [2024-07-24 18:22:18.867893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.941 qpair failed and we were unable to recover it. 00:27:25.941 [2024-07-24 18:22:18.868101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.941 [2024-07-24 18:22:18.868131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.941 qpair failed and we were unable to recover it. 00:27:25.941 [2024-07-24 18:22:18.868349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.941 [2024-07-24 18:22:18.868379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.941 qpair failed and we were unable to recover it. 00:27:25.941 [2024-07-24 18:22:18.868566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.941 [2024-07-24 18:22:18.868597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.941 qpair failed and we were unable to recover it. 00:27:25.941 [2024-07-24 18:22:18.868803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.941 [2024-07-24 18:22:18.868833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.941 qpair failed and we were unable to recover it. 00:27:25.941 [2024-07-24 18:22:18.869029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.941 [2024-07-24 18:22:18.869058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.941 qpair failed and we were unable to recover it. 00:27:25.941 [2024-07-24 18:22:18.869282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.941 [2024-07-24 18:22:18.869311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.941 qpair failed and we were unable to recover it. 
00:27:25.941 [2024-07-24 18:22:18.869562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.941 [2024-07-24 18:22:18.869631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.941 qpair failed and we were unable to recover it. 00:27:25.941 [2024-07-24 18:22:18.869860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.941 [2024-07-24 18:22:18.869893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.941 qpair failed and we were unable to recover it. 00:27:25.941 [2024-07-24 18:22:18.870185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.941 [2024-07-24 18:22:18.870201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.941 qpair failed and we were unable to recover it. 00:27:25.941 [2024-07-24 18:22:18.870419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.941 [2024-07-24 18:22:18.870434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.941 qpair failed and we were unable to recover it. 00:27:25.941 [2024-07-24 18:22:18.870620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.941 [2024-07-24 18:22:18.870636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.941 qpair failed and we were unable to recover it. 00:27:25.941 [2024-07-24 18:22:18.870893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.941 [2024-07-24 18:22:18.870923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.941 qpair failed and we were unable to recover it. 00:27:25.941 [2024-07-24 18:22:18.871120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.941 [2024-07-24 18:22:18.871150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.941 qpair failed and we were unable to recover it. 00:27:25.941 [2024-07-24 18:22:18.871281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.941 [2024-07-24 18:22:18.871312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.941 qpair failed and we were unable to recover it. 00:27:25.941 [2024-07-24 18:22:18.871526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.941 [2024-07-24 18:22:18.871557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.941 qpair failed and we were unable to recover it. 00:27:25.941 [2024-07-24 18:22:18.871776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.941 [2024-07-24 18:22:18.871806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.941 qpair failed and we were unable to recover it. 
00:27:25.941 [2024-07-24 18:22:18.872057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.941 [2024-07-24 18:22:18.872086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.941 qpair failed and we were unable to recover it. 00:27:25.941 [2024-07-24 18:22:18.872230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.941 [2024-07-24 18:22:18.872260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.941 qpair failed and we were unable to recover it. 00:27:25.941 [2024-07-24 18:22:18.872562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.941 [2024-07-24 18:22:18.872593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.941 qpair failed and we were unable to recover it. 00:27:25.941 [2024-07-24 18:22:18.872812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.941 [2024-07-24 18:22:18.872832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.941 qpair failed and we were unable to recover it. 00:27:25.941 [2024-07-24 18:22:18.873078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.941 [2024-07-24 18:22:18.873108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.941 qpair failed and we were unable to recover it. 00:27:25.941 [2024-07-24 18:22:18.873240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.941 [2024-07-24 18:22:18.873270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.941 qpair failed and we were unable to recover it. 00:27:25.941 [2024-07-24 18:22:18.873400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.941 [2024-07-24 18:22:18.873431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.941 qpair failed and we were unable to recover it. 00:27:25.941 [2024-07-24 18:22:18.873690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.941 [2024-07-24 18:22:18.873705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.941 qpair failed and we were unable to recover it. 00:27:25.941 [2024-07-24 18:22:18.873879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.941 [2024-07-24 18:22:18.873910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.941 qpair failed and we were unable to recover it. 00:27:25.941 [2024-07-24 18:22:18.874037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.942 [2024-07-24 18:22:18.874067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.942 qpair failed and we were unable to recover it. 
00:27:25.942 [2024-07-24 18:22:18.874331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.942 [2024-07-24 18:22:18.874361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.942 qpair failed and we were unable to recover it. 00:27:25.942 [2024-07-24 18:22:18.874563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.942 [2024-07-24 18:22:18.874594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.942 qpair failed and we were unable to recover it. 00:27:25.942 [2024-07-24 18:22:18.874729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.942 [2024-07-24 18:22:18.874760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.942 qpair failed and we were unable to recover it. 00:27:25.942 [2024-07-24 18:22:18.874894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.942 [2024-07-24 18:22:18.874933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:25.942 qpair failed and we were unable to recover it. 00:27:25.942 [2024-07-24 18:22:18.875032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.942 [2024-07-24 18:22:18.875044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.942 qpair failed and we were unable to recover it. 00:27:25.942 [2024-07-24 18:22:18.875264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.942 [2024-07-24 18:22:18.875294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.942 qpair failed and we were unable to recover it. 00:27:25.942 [2024-07-24 18:22:18.875584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.942 [2024-07-24 18:22:18.875614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.942 qpair failed and we were unable to recover it. 00:27:25.942 [2024-07-24 18:22:18.875773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.942 [2024-07-24 18:22:18.875804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.942 qpair failed and we were unable to recover it. 00:27:25.942 [2024-07-24 18:22:18.876005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.942 [2024-07-24 18:22:18.876035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.942 qpair failed and we were unable to recover it. 00:27:25.942 [2024-07-24 18:22:18.876244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.942 [2024-07-24 18:22:18.876274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.942 qpair failed and we were unable to recover it. 
00:27:25.942 [2024-07-24 18:22:18.876484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.942 [2024-07-24 18:22:18.876522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.942 qpair failed and we were unable to recover it. 00:27:25.942 [2024-07-24 18:22:18.876769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.942 [2024-07-24 18:22:18.876779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.942 qpair failed and we were unable to recover it. 00:27:25.942 [2024-07-24 18:22:18.876963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.942 [2024-07-24 18:22:18.876973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.942 qpair failed and we were unable to recover it. 00:27:25.942 [2024-07-24 18:22:18.877128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.942 [2024-07-24 18:22:18.877158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.942 qpair failed and we were unable to recover it. 00:27:25.942 [2024-07-24 18:22:18.877374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.942 [2024-07-24 18:22:18.877403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.942 qpair failed and we were unable to recover it. 00:27:25.942 [2024-07-24 18:22:18.877605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.942 [2024-07-24 18:22:18.877636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.942 qpair failed and we were unable to recover it. 00:27:25.942 [2024-07-24 18:22:18.877843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.942 [2024-07-24 18:22:18.877852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.942 qpair failed and we were unable to recover it. 00:27:25.942 [2024-07-24 18:22:18.877928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.942 [2024-07-24 18:22:18.877938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.942 qpair failed and we were unable to recover it. 00:27:25.942 [2024-07-24 18:22:18.878141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.942 [2024-07-24 18:22:18.878171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.942 qpair failed and we were unable to recover it. 00:27:25.942 [2024-07-24 18:22:18.878451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.942 [2024-07-24 18:22:18.878480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.942 qpair failed and we were unable to recover it. 
00:27:25.942 [2024-07-24 18:22:18.878733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.942 [2024-07-24 18:22:18.878770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:25.942 qpair failed and we were unable to recover it. 00:27:25.942 [2024-07-24 18:22:18.878931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.942 [2024-07-24 18:22:18.878947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:25.942 qpair failed and we were unable to recover it. 00:27:25.942 [2024-07-24 18:22:18.879180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.942 [2024-07-24 18:22:18.879196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:25.942 qpair failed and we were unable to recover it. 00:27:25.942 [2024-07-24 18:22:18.879415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.942 [2024-07-24 18:22:18.879446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:25.942 qpair failed and we were unable to recover it. 00:27:25.942 [2024-07-24 18:22:18.879726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.942 [2024-07-24 18:22:18.879758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:25.942 qpair failed and we were unable to recover it. 00:27:25.942 [2024-07-24 18:22:18.879912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.942 [2024-07-24 18:22:18.879943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:25.942 qpair failed and we were unable to recover it. 00:27:25.942 [2024-07-24 18:22:18.880076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.942 [2024-07-24 18:22:18.880106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:25.942 qpair failed and we were unable to recover it. 00:27:25.942 [2024-07-24 18:22:18.880367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.942 [2024-07-24 18:22:18.880397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:25.942 qpair failed and we were unable to recover it. 00:27:25.942 [2024-07-24 18:22:18.880621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.942 [2024-07-24 18:22:18.880653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:25.942 qpair failed and we were unable to recover it. 00:27:25.942 [2024-07-24 18:22:18.880770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.942 [2024-07-24 18:22:18.880799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420 00:27:25.942 qpair failed and we were unable to recover it. 
00:27:25.942 [2024-07-24 18:22:18.881066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.942 [2024-07-24 18:22:18.881096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.943 qpair failed and we were unable to recover it.
00:27:25.943 [2024-07-24 18:22:18.881293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.943 [2024-07-24 18:22:18.881323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.943 qpair failed and we were unable to recover it.
00:27:25.943 [2024-07-24 18:22:18.881469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.943 [2024-07-24 18:22:18.881507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.943 qpair failed and we were unable to recover it.
00:27:25.943 [2024-07-24 18:22:18.881706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.943 [2024-07-24 18:22:18.881736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.943 qpair failed and we were unable to recover it.
00:27:25.943 [2024-07-24 18:22:18.881883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.943 [2024-07-24 18:22:18.881914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.943 qpair failed and we were unable to recover it.
00:27:25.943 [2024-07-24 18:22:18.882211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.943 [2024-07-24 18:22:18.882241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.943 qpair failed and we were unable to recover it.
00:27:25.943 [2024-07-24 18:22:18.882503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.943 [2024-07-24 18:22:18.882547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.943 qpair failed and we were unable to recover it.
00:27:25.943 [2024-07-24 18:22:18.882716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.943 [2024-07-24 18:22:18.882731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.943 qpair failed and we were unable to recover it.
00:27:25.943 [2024-07-24 18:22:18.882903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.943 [2024-07-24 18:22:18.882933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.943 qpair failed and we were unable to recover it.
00:27:25.943 [2024-07-24 18:22:18.883087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.943 [2024-07-24 18:22:18.883117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.943 qpair failed and we were unable to recover it.
00:27:25.943 [2024-07-24 18:22:18.883391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.943 [2024-07-24 18:22:18.883430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.943 qpair failed and we were unable to recover it.
00:27:25.943 [2024-07-24 18:22:18.883636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.943 [2024-07-24 18:22:18.883648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.943 qpair failed and we were unable to recover it.
00:27:25.943 [2024-07-24 18:22:18.883810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.943 [2024-07-24 18:22:18.883819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.943 qpair failed and we were unable to recover it.
00:27:25.943 [2024-07-24 18:22:18.884001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.943 [2024-07-24 18:22:18.884031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.943 qpair failed and we were unable to recover it.
00:27:25.943 [2024-07-24 18:22:18.884153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.943 [2024-07-24 18:22:18.884183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.943 qpair failed and we were unable to recover it.
00:27:25.943 [2024-07-24 18:22:18.884375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.943 [2024-07-24 18:22:18.884405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.943 qpair failed and we were unable to recover it.
00:27:25.943 [2024-07-24 18:22:18.884534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.943 [2024-07-24 18:22:18.884564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.943 qpair failed and we were unable to recover it.
00:27:25.943 [2024-07-24 18:22:18.884821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.943 [2024-07-24 18:22:18.884856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.943 qpair failed and we were unable to recover it.
00:27:25.943 [2024-07-24 18:22:18.885078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.943 [2024-07-24 18:22:18.885088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.943 qpair failed and we were unable to recover it.
00:27:25.943 [2024-07-24 18:22:18.885250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.943 [2024-07-24 18:22:18.885280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.943 qpair failed and we were unable to recover it.
00:27:25.943 [2024-07-24 18:22:18.885528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.943 [2024-07-24 18:22:18.885559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.943 qpair failed and we were unable to recover it.
00:27:25.943 [2024-07-24 18:22:18.885771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.943 [2024-07-24 18:22:18.885801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.943 qpair failed and we were unable to recover it.
00:27:25.943 [2024-07-24 18:22:18.885938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.943 [2024-07-24 18:22:18.885968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.943 qpair failed and we were unable to recover it.
00:27:25.943 [2024-07-24 18:22:18.886109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.943 [2024-07-24 18:22:18.886138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.943 qpair failed and we were unable to recover it.
00:27:25.943 [2024-07-24 18:22:18.886334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.943 [2024-07-24 18:22:18.886363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.943 qpair failed and we were unable to recover it.
00:27:25.943 [2024-07-24 18:22:18.886547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.943 [2024-07-24 18:22:18.886577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.943 qpair failed and we were unable to recover it.
00:27:25.943 [2024-07-24 18:22:18.886733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.943 [2024-07-24 18:22:18.886763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.943 qpair failed and we were unable to recover it.
00:27:25.943 [2024-07-24 18:22:18.887037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.943 [2024-07-24 18:22:18.887066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.943 qpair failed and we were unable to recover it.
00:27:25.943 [2024-07-24 18:22:18.887293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.943 [2024-07-24 18:22:18.887322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.943 qpair failed and we were unable to recover it.
00:27:25.943 [2024-07-24 18:22:18.887453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.943 [2024-07-24 18:22:18.887482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.943 qpair failed and we were unable to recover it.
00:27:25.943 [2024-07-24 18:22:18.887738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.943 [2024-07-24 18:22:18.887768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.943 qpair failed and we were unable to recover it.
00:27:25.943 [2024-07-24 18:22:18.887991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.943 [2024-07-24 18:22:18.888021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.943 qpair failed and we were unable to recover it.
00:27:25.943 [2024-07-24 18:22:18.888220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.943 [2024-07-24 18:22:18.888249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.943 qpair failed and we were unable to recover it.
00:27:25.943 [2024-07-24 18:22:18.888390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.943 [2024-07-24 18:22:18.888419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.943 qpair failed and we were unable to recover it.
00:27:25.943 [2024-07-24 18:22:18.888654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.943 [2024-07-24 18:22:18.888684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.943 qpair failed and we were unable to recover it.
00:27:25.943 [2024-07-24 18:22:18.888822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.943 [2024-07-24 18:22:18.888851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.943 qpair failed and we were unable to recover it.
00:27:25.943 [2024-07-24 18:22:18.888994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.943 [2024-07-24 18:22:18.889023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.943 qpair failed and we were unable to recover it.
00:27:25.943 [2024-07-24 18:22:18.889299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.943 [2024-07-24 18:22:18.889327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.943 qpair failed and we were unable to recover it.
00:27:25.943 [2024-07-24 18:22:18.889455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.944 [2024-07-24 18:22:18.889484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.944 qpair failed and we were unable to recover it.
00:27:25.944 [2024-07-24 18:22:18.889615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.944 [2024-07-24 18:22:18.889645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.944 qpair failed and we were unable to recover it.
00:27:25.944 [2024-07-24 18:22:18.889896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.944 [2024-07-24 18:22:18.889906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.944 qpair failed and we were unable to recover it.
00:27:25.944 [2024-07-24 18:22:18.890137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.944 [2024-07-24 18:22:18.890147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.944 qpair failed and we were unable to recover it.
00:27:25.944 [2024-07-24 18:22:18.890351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.944 [2024-07-24 18:22:18.890361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.944 qpair failed and we were unable to recover it.
00:27:25.944 [2024-07-24 18:22:18.890510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.944 [2024-07-24 18:22:18.890520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.944 qpair failed and we were unable to recover it.
00:27:25.944 [2024-07-24 18:22:18.890636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.944 [2024-07-24 18:22:18.890666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.944 qpair failed and we were unable to recover it.
00:27:25.944 [2024-07-24 18:22:18.890885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.944 [2024-07-24 18:22:18.890915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.944 qpair failed and we were unable to recover it.
00:27:25.944 [2024-07-24 18:22:18.891113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.944 [2024-07-24 18:22:18.891142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.944 qpair failed and we were unable to recover it.
00:27:25.944 [2024-07-24 18:22:18.891336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.944 [2024-07-24 18:22:18.891365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.944 qpair failed and we were unable to recover it.
00:27:25.944 [2024-07-24 18:22:18.891485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.944 [2024-07-24 18:22:18.891524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.944 qpair failed and we were unable to recover it.
00:27:25.944 [2024-07-24 18:22:18.891654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.944 [2024-07-24 18:22:18.891684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.944 qpair failed and we were unable to recover it.
00:27:25.944 [2024-07-24 18:22:18.891877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.944 [2024-07-24 18:22:18.891907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.944 qpair failed and we were unable to recover it.
00:27:25.944 [2024-07-24 18:22:18.892075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.944 [2024-07-24 18:22:18.892084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.944 qpair failed and we were unable to recover it.
00:27:25.944 [2024-07-24 18:22:18.892234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.944 [2024-07-24 18:22:18.892244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.944 qpair failed and we were unable to recover it.
00:27:25.944 [2024-07-24 18:22:18.892435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.944 [2024-07-24 18:22:18.892465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.944 qpair failed and we were unable to recover it.
00:27:25.944 [2024-07-24 18:22:18.892604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.944 [2024-07-24 18:22:18.892634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.944 qpair failed and we were unable to recover it.
00:27:25.944 [2024-07-24 18:22:18.892775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.944 [2024-07-24 18:22:18.892805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.944 qpair failed and we were unable to recover it.
00:27:25.944 [2024-07-24 18:22:18.892928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.944 [2024-07-24 18:22:18.892958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.944 qpair failed and we were unable to recover it.
00:27:25.944 [2024-07-24 18:22:18.893129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.944 [2024-07-24 18:22:18.893140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.944 qpair failed and we were unable to recover it.
00:27:25.944 [2024-07-24 18:22:18.893349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.944 [2024-07-24 18:22:18.893380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.944 qpair failed and we were unable to recover it.
00:27:25.944 [2024-07-24 18:22:18.893590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.944 [2024-07-24 18:22:18.893620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.944 qpair failed and we were unable to recover it.
00:27:25.944 [2024-07-24 18:22:18.893827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.944 [2024-07-24 18:22:18.893857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.944 qpair failed and we were unable to recover it.
00:27:25.944 [2024-07-24 18:22:18.893995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.944 [2024-07-24 18:22:18.894025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.944 qpair failed and we were unable to recover it.
00:27:25.944 [2024-07-24 18:22:18.894174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.944 [2024-07-24 18:22:18.894204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.944 qpair failed and we were unable to recover it.
00:27:25.944 [2024-07-24 18:22:18.894418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.944 [2024-07-24 18:22:18.894448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.944 qpair failed and we were unable to recover it.
00:27:25.944 [2024-07-24 18:22:18.894631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.944 [2024-07-24 18:22:18.894662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.944 qpair failed and we were unable to recover it.
00:27:25.944 [2024-07-24 18:22:18.894792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.944 [2024-07-24 18:22:18.894823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.944 qpair failed and we were unable to recover it.
00:27:25.944 [2024-07-24 18:22:18.895004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.944 [2024-07-24 18:22:18.895035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.944 qpair failed and we were unable to recover it.
00:27:25.944 [2024-07-24 18:22:18.895231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.944 [2024-07-24 18:22:18.895261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.944 qpair failed and we were unable to recover it.
00:27:25.944 [2024-07-24 18:22:18.895501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.944 [2024-07-24 18:22:18.895532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.944 qpair failed and we were unable to recover it.
00:27:25.944 [2024-07-24 18:22:18.895664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.944 [2024-07-24 18:22:18.895674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.944 qpair failed and we were unable to recover it.
00:27:25.944 [2024-07-24 18:22:18.895837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.944 [2024-07-24 18:22:18.895847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.944 qpair failed and we were unable to recover it.
00:27:25.944 [2024-07-24 18:22:18.895921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.944 [2024-07-24 18:22:18.895945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.944 qpair failed and we were unable to recover it.
00:27:25.944 [2024-07-24 18:22:18.896084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.944 [2024-07-24 18:22:18.896113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.944 qpair failed and we were unable to recover it.
00:27:25.944 [2024-07-24 18:22:18.896371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.944 [2024-07-24 18:22:18.896400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.944 qpair failed and we were unable to recover it.
00:27:25.944 [2024-07-24 18:22:18.896596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.944 [2024-07-24 18:22:18.896627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.944 qpair failed and we were unable to recover it.
00:27:25.944 [2024-07-24 18:22:18.896772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.944 [2024-07-24 18:22:18.896802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.944 qpair failed and we were unable to recover it.
00:27:25.945 [2024-07-24 18:22:18.896959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.945 [2024-07-24 18:22:18.896989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.945 qpair failed and we were unable to recover it.
00:27:25.945 [2024-07-24 18:22:18.897181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.945 [2024-07-24 18:22:18.897210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.945 qpair failed and we were unable to recover it.
00:27:25.945 [2024-07-24 18:22:18.897340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.945 [2024-07-24 18:22:18.897370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.945 qpair failed and we were unable to recover it.
00:27:25.945 [2024-07-24 18:22:18.897509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.945 [2024-07-24 18:22:18.897540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.945 qpair failed and we were unable to recover it.
00:27:25.945 [2024-07-24 18:22:18.897812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.945 [2024-07-24 18:22:18.897842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.945 qpair failed and we were unable to recover it.
00:27:25.945 [2024-07-24 18:22:18.898038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.945 [2024-07-24 18:22:18.898068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.945 qpair failed and we were unable to recover it.
00:27:25.945 [2024-07-24 18:22:18.898199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.945 [2024-07-24 18:22:18.898246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.945 qpair failed and we were unable to recover it.
00:27:25.945 [2024-07-24 18:22:18.898531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.945 [2024-07-24 18:22:18.898562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.945 qpair failed and we were unable to recover it.
00:27:25.945 [2024-07-24 18:22:18.898715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.945 [2024-07-24 18:22:18.898745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.945 qpair failed and we were unable to recover it.
00:27:25.945 [2024-07-24 18:22:18.898942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.945 [2024-07-24 18:22:18.898952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.945 qpair failed and we were unable to recover it.
00:27:25.945 [2024-07-24 18:22:18.899025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.945 [2024-07-24 18:22:18.899035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.945 qpair failed and we were unable to recover it.
00:27:25.945 [2024-07-24 18:22:18.899317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.945 [2024-07-24 18:22:18.899347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.945 qpair failed and we were unable to recover it.
00:27:25.945 [2024-07-24 18:22:18.899469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.945 [2024-07-24 18:22:18.899508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.945 qpair failed and we were unable to recover it.
00:27:25.945 [2024-07-24 18:22:18.899640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.945 [2024-07-24 18:22:18.899671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.945 qpair failed and we were unable to recover it.
00:27:25.945 [2024-07-24 18:22:18.899853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.945 [2024-07-24 18:22:18.899884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.945 qpair failed and we were unable to recover it.
00:27:25.945 [2024-07-24 18:22:18.900190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.945 [2024-07-24 18:22:18.900220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.945 qpair failed and we were unable to recover it.
00:27:25.945 [2024-07-24 18:22:18.900487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.945 [2024-07-24 18:22:18.900528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.945 qpair failed and we were unable to recover it.
00:27:25.945 [2024-07-24 18:22:18.900772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.945 [2024-07-24 18:22:18.900782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.945 qpair failed and we were unable to recover it.
00:27:25.945 [2024-07-24 18:22:18.900988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.945 [2024-07-24 18:22:18.900998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.945 qpair failed and we were unable to recover it.
00:27:25.945 [2024-07-24 18:22:18.901153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.945 [2024-07-24 18:22:18.901163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.945 qpair failed and we were unable to recover it.
00:27:25.945 [2024-07-24 18:22:18.901338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.945 [2024-07-24 18:22:18.901349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.945 qpair failed and we were unable to recover it.
00:27:25.945 [2024-07-24 18:22:18.901506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.945 [2024-07-24 18:22:18.901527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.945 qpair failed and we were unable to recover it.
00:27:25.945 [2024-07-24 18:22:18.901699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.945 [2024-07-24 18:22:18.901729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.945 qpair failed and we were unable to recover it.
00:27:25.945 [2024-07-24 18:22:18.901879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.945 [2024-07-24 18:22:18.901908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.945 qpair failed and we were unable to recover it.
00:27:25.945 [2024-07-24 18:22:18.902117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.945 [2024-07-24 18:22:18.902147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.945 qpair failed and we were unable to recover it.
00:27:25.945 [2024-07-24 18:22:18.902292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.945 [2024-07-24 18:22:18.902322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.945 qpair failed and we were unable to recover it.
00:27:25.945 [2024-07-24 18:22:18.902522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.945 [2024-07-24 18:22:18.902553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.945 qpair failed and we were unable to recover it.
00:27:25.945 [2024-07-24 18:22:18.902676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.945 [2024-07-24 18:22:18.902686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.945 qpair failed and we were unable to recover it.
00:27:25.945 [2024-07-24 18:22:18.902752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.945 [2024-07-24 18:22:18.902762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.945 qpair failed and we were unable to recover it.
00:27:25.945 [2024-07-24 18:22:18.902943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.945 [2024-07-24 18:22:18.902973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.945 qpair failed and we were unable to recover it.
00:27:25.945 [2024-07-24 18:22:18.903160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.945 [2024-07-24 18:22:18.903189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.945 qpair failed and we were unable to recover it.
00:27:25.945 [2024-07-24 18:22:18.903407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.945 [2024-07-24 18:22:18.903437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.945 qpair failed and we were unable to recover it.
00:27:25.945 [2024-07-24 18:22:18.903702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.945 [2024-07-24 18:22:18.903733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.945 qpair failed and we were unable to recover it.
00:27:25.945 [2024-07-24 18:22:18.903930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.945 [2024-07-24 18:22:18.903960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.945 qpair failed and we were unable to recover it.
00:27:25.945 [2024-07-24 18:22:18.904155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.945 [2024-07-24 18:22:18.904164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.945 qpair failed and we were unable to recover it.
00:27:25.945 [2024-07-24 18:22:18.904295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.945 [2024-07-24 18:22:18.904325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.945 qpair failed and we were unable to recover it.
00:27:25.945 [2024-07-24 18:22:18.904530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.945 [2024-07-24 18:22:18.904560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.945 qpair failed and we were unable to recover it.
00:27:25.945 [2024-07-24 18:22:18.904751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.945 [2024-07-24 18:22:18.904781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.946 qpair failed and we were unable to recover it.
00:27:25.946 [2024-07-24 18:22:18.904968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.946 [2024-07-24 18:22:18.904977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.946 qpair failed and we were unable to recover it.
00:27:25.946 [2024-07-24 18:22:18.905046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.946 [2024-07-24 18:22:18.905056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.946 qpair failed and we were unable to recover it.
00:27:25.946 [2024-07-24 18:22:18.905139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.946 [2024-07-24 18:22:18.905182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.946 qpair failed and we were unable to recover it.
00:27:25.946 [2024-07-24 18:22:18.905349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.946 [2024-07-24 18:22:18.905379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.946 qpair failed and we were unable to recover it.
00:27:25.946 [2024-07-24 18:22:18.905600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.946 [2024-07-24 18:22:18.905631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.946 qpair failed and we were unable to recover it.
00:27:25.946 [2024-07-24 18:22:18.905849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.946 [2024-07-24 18:22:18.905858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.946 qpair failed and we were unable to recover it.
00:27:25.946 [2024-07-24 18:22:18.906010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.946 [2024-07-24 18:22:18.906020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.946 qpair failed and we were unable to recover it.
00:27:25.946 [2024-07-24 18:22:18.906113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.946 [2024-07-24 18:22:18.906123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.946 qpair failed and we were unable to recover it.
00:27:25.946 [2024-07-24 18:22:18.906280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.946 [2024-07-24 18:22:18.906290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.946 qpair failed and we were unable to recover it.
00:27:25.946 [2024-07-24 18:22:18.906535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.946 [2024-07-24 18:22:18.906566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.946 qpair failed and we were unable to recover it.
00:27:25.946 [2024-07-24 18:22:18.906710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.946 [2024-07-24 18:22:18.906741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.946 qpair failed and we were unable to recover it.
00:27:25.946 [2024-07-24 18:22:18.906951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.946 [2024-07-24 18:22:18.906981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.946 qpair failed and we were unable to recover it.
00:27:25.946 [2024-07-24 18:22:18.907242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.946 [2024-07-24 18:22:18.907252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.946 qpair failed and we were unable to recover it.
00:27:25.946 [2024-07-24 18:22:18.907473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.946 [2024-07-24 18:22:18.907515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.946 qpair failed and we were unable to recover it.
00:27:25.946 [2024-07-24 18:22:18.907767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.946 [2024-07-24 18:22:18.907797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.946 qpair failed and we were unable to recover it.
00:27:25.946 [2024-07-24 18:22:18.907927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.946 [2024-07-24 18:22:18.907937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.946 qpair failed and we were unable to recover it.
00:27:25.946 [2024-07-24 18:22:18.908157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.946 [2024-07-24 18:22:18.908166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.946 qpair failed and we were unable to recover it.
00:27:25.946 [2024-07-24 18:22:18.908271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.946 [2024-07-24 18:22:18.908281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.946 qpair failed and we were unable to recover it.
00:27:25.946 [2024-07-24 18:22:18.908366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.946 [2024-07-24 18:22:18.908376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.946 qpair failed and we were unable to recover it.
00:27:25.946 [2024-07-24 18:22:18.908579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.946 [2024-07-24 18:22:18.908589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.946 qpair failed and we were unable to recover it.
00:27:25.946 [2024-07-24 18:22:18.908675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.946 [2024-07-24 18:22:18.908685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.946 qpair failed and we were unable to recover it.
00:27:25.946 [2024-07-24 18:22:18.908898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.946 [2024-07-24 18:22:18.908927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.946 qpair failed and we were unable to recover it.
00:27:25.946 [2024-07-24 18:22:18.909121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.946 [2024-07-24 18:22:18.909150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.946 qpair failed and we were unable to recover it.
00:27:25.946 [2024-07-24 18:22:18.909332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.946 [2024-07-24 18:22:18.909368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.946 qpair failed and we were unable to recover it.
00:27:25.946 [2024-07-24 18:22:18.909645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.946 [2024-07-24 18:22:18.909675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.946 qpair failed and we were unable to recover it.
00:27:25.946 [2024-07-24 18:22:18.909819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.946 [2024-07-24 18:22:18.909849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.946 qpair failed and we were unable to recover it.
00:27:25.946 [2024-07-24 18:22:18.910054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.946 [2024-07-24 18:22:18.910084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.946 qpair failed and we were unable to recover it.
00:27:25.946 [2024-07-24 18:22:18.910300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.946 [2024-07-24 18:22:18.910329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.946 qpair failed and we were unable to recover it.
00:27:25.946 [2024-07-24 18:22:18.910527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.946 [2024-07-24 18:22:18.910557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.946 qpair failed and we were unable to recover it.
00:27:25.946 [2024-07-24 18:22:18.910690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.946 [2024-07-24 18:22:18.910728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.946 qpair failed and we were unable to recover it.
00:27:25.946 [2024-07-24 18:22:18.910803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.946 [2024-07-24 18:22:18.910813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.946 qpair failed and we were unable to recover it.
00:27:25.946 [2024-07-24 18:22:18.910909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.946 [2024-07-24 18:22:18.910919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.946 qpair failed and we were unable to recover it.
00:27:25.946 [2024-07-24 18:22:18.911040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.946 [2024-07-24 18:22:18.911069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.946 qpair failed and we were unable to recover it.
00:27:25.946 [2024-07-24 18:22:18.911338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.946 [2024-07-24 18:22:18.911367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.946 qpair failed and we were unable to recover it.
00:27:25.946 [2024-07-24 18:22:18.911554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.946 [2024-07-24 18:22:18.911585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.946 qpair failed and we were unable to recover it.
00:27:25.946 [2024-07-24 18:22:18.911799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.946 [2024-07-24 18:22:18.911809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.946 qpair failed and we were unable to recover it.
00:27:25.946 [2024-07-24 18:22:18.911879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.946 [2024-07-24 18:22:18.911888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.946 qpair failed and we were unable to recover it.
00:27:25.946 [2024-07-24 18:22:18.912116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.947 [2024-07-24 18:22:18.912126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.947 qpair failed and we were unable to recover it.
00:27:25.947 [2024-07-24 18:22:18.912276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.947 [2024-07-24 18:22:18.912286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.947 qpair failed and we were unable to recover it.
00:27:25.947 [2024-07-24 18:22:18.912506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.947 [2024-07-24 18:22:18.912537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.947 qpair failed and we were unable to recover it.
00:27:25.947 [2024-07-24 18:22:18.912667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.947 [2024-07-24 18:22:18.912697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.947 qpair failed and we were unable to recover it.
00:27:25.947 [2024-07-24 18:22:18.912893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.947 [2024-07-24 18:22:18.912933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.947 qpair failed and we were unable to recover it.
00:27:25.947 [2024-07-24 18:22:18.913082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.947 [2024-07-24 18:22:18.913092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.947 qpair failed and we were unable to recover it.
00:27:25.947 [2024-07-24 18:22:18.913180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.947 [2024-07-24 18:22:18.913189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.947 qpair failed and we were unable to recover it.
00:27:25.947 [2024-07-24 18:22:18.913338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.947 [2024-07-24 18:22:18.913347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.947 qpair failed and we were unable to recover it.
00:27:25.947 [2024-07-24 18:22:18.913519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.947 [2024-07-24 18:22:18.913529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.947 qpair failed and we were unable to recover it.
00:27:25.947 [2024-07-24 18:22:18.913618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.947 [2024-07-24 18:22:18.913628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.947 qpair failed and we were unable to recover it.
00:27:25.947 [2024-07-24 18:22:18.913769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.947 [2024-07-24 18:22:18.913778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.947 qpair failed and we were unable to recover it.
00:27:25.947 [2024-07-24 18:22:18.913999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.947 [2024-07-24 18:22:18.914009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.947 qpair failed and we were unable to recover it.
00:27:25.947 [2024-07-24 18:22:18.914078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.947 [2024-07-24 18:22:18.914088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.947 qpair failed and we were unable to recover it.
00:27:25.947 [2024-07-24 18:22:18.914249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.947 [2024-07-24 18:22:18.914259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.947 qpair failed and we were unable to recover it.
00:27:25.947 [2024-07-24 18:22:18.914400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.947 [2024-07-24 18:22:18.914409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.947 qpair failed and we were unable to recover it.
00:27:25.947 [2024-07-24 18:22:18.914511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.947 [2024-07-24 18:22:18.914521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.947 qpair failed and we were unable to recover it.
00:27:25.947 [2024-07-24 18:22:18.914669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.947 [2024-07-24 18:22:18.914679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.947 qpair failed and we were unable to recover it.
00:27:25.947 [2024-07-24 18:22:18.914752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.947 [2024-07-24 18:22:18.914761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.947 qpair failed and we were unable to recover it.
00:27:25.947 [2024-07-24 18:22:18.914982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.947 [2024-07-24 18:22:18.914992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.947 qpair failed and we were unable to recover it.
00:27:25.947 [2024-07-24 18:22:18.915064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.947 [2024-07-24 18:22:18.915074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.947 qpair failed and we were unable to recover it.
00:27:25.947 [2024-07-24 18:22:18.915215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.947 [2024-07-24 18:22:18.915226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.947 qpair failed and we were unable to recover it.
00:27:25.947 [2024-07-24 18:22:18.915309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.947 [2024-07-24 18:22:18.915318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.947 qpair failed and we were unable to recover it.
00:27:25.947 [2024-07-24 18:22:18.915463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.947 [2024-07-24 18:22:18.915473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.947 qpair failed and we were unable to recover it.
00:27:25.947 [2024-07-24 18:22:18.915553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.947 [2024-07-24 18:22:18.915563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.947 qpair failed and we were unable to recover it.
00:27:25.947 [2024-07-24 18:22:18.915641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.947 [2024-07-24 18:22:18.915651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.947 qpair failed and we were unable to recover it.
00:27:25.947 [2024-07-24 18:22:18.915800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.947 [2024-07-24 18:22:18.915810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.947 qpair failed and we were unable to recover it.
00:27:25.947 [2024-07-24 18:22:18.915921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.947 [2024-07-24 18:22:18.915957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.947 qpair failed and we were unable to recover it.
00:27:25.947 [2024-07-24 18:22:18.916148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.947 [2024-07-24 18:22:18.916177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.947 qpair failed and we were unable to recover it.
00:27:25.947 [2024-07-24 18:22:18.916298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.947 [2024-07-24 18:22:18.916328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.947 qpair failed and we were unable to recover it.
00:27:25.947 [2024-07-24 18:22:18.916603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.947 [2024-07-24 18:22:18.916634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.947 qpair failed and we were unable to recover it.
00:27:25.947 [2024-07-24 18:22:18.916782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.947 [2024-07-24 18:22:18.916813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.947 qpair failed and we were unable to recover it.
00:27:25.947 [2024-07-24 18:22:18.916992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.947 [2024-07-24 18:22:18.917023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.948 qpair failed and we were unable to recover it.
00:27:25.948 [2024-07-24 18:22:18.917200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.948 [2024-07-24 18:22:18.917230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.948 qpair failed and we were unable to recover it.
00:27:25.948 [2024-07-24 18:22:18.917429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.948 [2024-07-24 18:22:18.917459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.948 qpair failed and we were unable to recover it.
00:27:25.948 [2024-07-24 18:22:18.917755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.948 [2024-07-24 18:22:18.917787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.948 qpair failed and we were unable to recover it.
00:27:25.948 [2024-07-24 18:22:18.917931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.948 [2024-07-24 18:22:18.917962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.948 qpair failed and we were unable to recover it.
00:27:25.948 [2024-07-24 18:22:18.918165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.948 [2024-07-24 18:22:18.918196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.948 qpair failed and we were unable to recover it.
00:27:25.948 [2024-07-24 18:22:18.918387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.948 [2024-07-24 18:22:18.918416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.948 qpair failed and we were unable to recover it.
00:27:25.948 [2024-07-24 18:22:18.918626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.948 [2024-07-24 18:22:18.918656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.948 qpair failed and we were unable to recover it.
00:27:25.948 [2024-07-24 18:22:18.918863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.948 [2024-07-24 18:22:18.918893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.948 qpair failed and we were unable to recover it.
00:27:25.948 [2024-07-24 18:22:18.919043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.948 [2024-07-24 18:22:18.919074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.948 qpair failed and we were unable to recover it.
00:27:25.948 [2024-07-24 18:22:18.919196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.948 [2024-07-24 18:22:18.919226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.948 qpair failed and we were unable to recover it.
00:27:25.948 [2024-07-24 18:22:18.919422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.948 [2024-07-24 18:22:18.919453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.948 qpair failed and we were unable to recover it.
00:27:25.948 [2024-07-24 18:22:18.919662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.948 [2024-07-24 18:22:18.919694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.948 qpair failed and we were unable to recover it.
00:27:25.948 [2024-07-24 18:22:18.919913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.948 [2024-07-24 18:22:18.919943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.948 qpair failed and we were unable to recover it.
00:27:25.948 [2024-07-24 18:22:18.920132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.948 [2024-07-24 18:22:18.920162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.948 qpair failed and we were unable to recover it.
00:27:25.948 [2024-07-24 18:22:18.920298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.948 [2024-07-24 18:22:18.920329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.948 qpair failed and we were unable to recover it.
00:27:25.948 [2024-07-24 18:22:18.920581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.948 [2024-07-24 18:22:18.920612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.948 qpair failed and we were unable to recover it.
00:27:25.948 [2024-07-24 18:22:18.920809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.948 [2024-07-24 18:22:18.920839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.948 qpair failed and we were unable to recover it.
00:27:25.948 [2024-07-24 18:22:18.921046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.948 [2024-07-24 18:22:18.921077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.948 qpair failed and we were unable to recover it.
00:27:25.948 [2024-07-24 18:22:18.921229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.948 [2024-07-24 18:22:18.921260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.948 qpair failed and we were unable to recover it.
00:27:25.948 [2024-07-24 18:22:18.921374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.948 [2024-07-24 18:22:18.921404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.948 qpair failed and we were unable to recover it.
00:27:25.948 [2024-07-24 18:22:18.921603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.948 [2024-07-24 18:22:18.921634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.948 qpair failed and we were unable to recover it.
00:27:25.948 [2024-07-24 18:22:18.921964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.948 [2024-07-24 18:22:18.921994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.948 qpair failed and we were unable to recover it.
00:27:25.948 [2024-07-24 18:22:18.922132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.948 [2024-07-24 18:22:18.922162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.948 qpair failed and we were unable to recover it.
00:27:25.948 [2024-07-24 18:22:18.922285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.948 [2024-07-24 18:22:18.922315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.948 qpair failed and we were unable to recover it.
00:27:25.948 [2024-07-24 18:22:18.922579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.948 [2024-07-24 18:22:18.922610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.948 qpair failed and we were unable to recover it.
00:27:25.948 [2024-07-24 18:22:18.922789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.948 [2024-07-24 18:22:18.922800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.948 qpair failed and we were unable to recover it.
00:27:25.948 [2024-07-24 18:22:18.922944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.948 [2024-07-24 18:22:18.922974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.948 qpair failed and we were unable to recover it.
00:27:25.948 [2024-07-24 18:22:18.923171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.948 [2024-07-24 18:22:18.923200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.948 qpair failed and we were unable to recover it.
00:27:25.948 [2024-07-24 18:22:18.923327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.948 [2024-07-24 18:22:18.923356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.948 qpair failed and we were unable to recover it.
00:27:25.948 [2024-07-24 18:22:18.923533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.948 [2024-07-24 18:22:18.923563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.948 qpair failed and we were unable to recover it.
00:27:25.948 [2024-07-24 18:22:18.923831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.948 [2024-07-24 18:22:18.923861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.948 qpair failed and we were unable to recover it.
00:27:25.948 [2024-07-24 18:22:18.924119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.948 [2024-07-24 18:22:18.924148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.948 qpair failed and we were unable to recover it.
00:27:25.948 [2024-07-24 18:22:18.924285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.948 [2024-07-24 18:22:18.924315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.948 qpair failed and we were unable to recover it.
00:27:25.948 [2024-07-24 18:22:18.924429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.948 [2024-07-24 18:22:18.924460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.948 qpair failed and we were unable to recover it.
00:27:25.948 [2024-07-24 18:22:18.924668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.948 [2024-07-24 18:22:18.924681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.948 qpair failed and we were unable to recover it.
00:27:25.948 [2024-07-24 18:22:18.924759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.948 [2024-07-24 18:22:18.924769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.948 qpair failed and we were unable to recover it.
00:27:25.948 [2024-07-24 18:22:18.924990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.948 [2024-07-24 18:22:18.925020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.948 qpair failed and we were unable to recover it.
00:27:25.948 [2024-07-24 18:22:18.925204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.949 [2024-07-24 18:22:18.925234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.949 qpair failed and we were unable to recover it.
00:27:25.949 [2024-07-24 18:22:18.925363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.949 [2024-07-24 18:22:18.925393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.949 qpair failed and we were unable to recover it.
00:27:25.949 [2024-07-24 18:22:18.925636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.949 [2024-07-24 18:22:18.925646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.949 qpair failed and we were unable to recover it.
00:27:25.949 [2024-07-24 18:22:18.925834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.949 [2024-07-24 18:22:18.925863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.949 qpair failed and we were unable to recover it.
00:27:25.949 [2024-07-24 18:22:18.926082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.949 [2024-07-24 18:22:18.926112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.949 qpair failed and we were unable to recover it.
00:27:25.949 [2024-07-24 18:22:18.926387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.949 [2024-07-24 18:22:18.926417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.949 qpair failed and we were unable to recover it.
00:27:25.949 [2024-07-24 18:22:18.926563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.949 [2024-07-24 18:22:18.926594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.949 qpair failed and we were unable to recover it.
00:27:25.949 [2024-07-24 18:22:18.926799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.949 [2024-07-24 18:22:18.926830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.949 qpair failed and we were unable to recover it.
00:27:25.949 [2024-07-24 18:22:18.926952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.949 [2024-07-24 18:22:18.926963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.949 qpair failed and we were unable to recover it.
00:27:25.949 [2024-07-24 18:22:18.927204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.949 [2024-07-24 18:22:18.927234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.949 qpair failed and we were unable to recover it.
00:27:25.949 [2024-07-24 18:22:18.927378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.949 [2024-07-24 18:22:18.927408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.949 qpair failed and we were unable to recover it.
00:27:25.949 [2024-07-24 18:22:18.927658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.949 [2024-07-24 18:22:18.927690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.949 qpair failed and we were unable to recover it.
00:27:25.949 [2024-07-24 18:22:18.927885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.949 [2024-07-24 18:22:18.927896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.949 qpair failed and we were unable to recover it.
00:27:25.949 [2024-07-24 18:22:18.928047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.949 [2024-07-24 18:22:18.928078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.949 qpair failed and we were unable to recover it.
00:27:25.949 [2024-07-24 18:22:18.928277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.949 [2024-07-24 18:22:18.928307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.949 qpair failed and we were unable to recover it.
00:27:25.949 [2024-07-24 18:22:18.928590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.949 [2024-07-24 18:22:18.928621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.949 qpair failed and we were unable to recover it.
00:27:25.949 [2024-07-24 18:22:18.928812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.949 [2024-07-24 18:22:18.928841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.949 qpair failed and we were unable to recover it.
00:27:25.949 [2024-07-24 18:22:18.928971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.949 [2024-07-24 18:22:18.929001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.949 qpair failed and we were unable to recover it.
00:27:25.949 [2024-07-24 18:22:18.929165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.949 [2024-07-24 18:22:18.929175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.949 qpair failed and we were unable to recover it.
00:27:25.949 [2024-07-24 18:22:18.929421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.949 [2024-07-24 18:22:18.929450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.949 qpair failed and we were unable to recover it.
00:27:25.949 [2024-07-24 18:22:18.929723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.949 [2024-07-24 18:22:18.929754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.949 qpair failed and we were unable to recover it.
00:27:25.949 [2024-07-24 18:22:18.930015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.949 [2024-07-24 18:22:18.930045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.949 qpair failed and we were unable to recover it.
00:27:25.949 [2024-07-24 18:22:18.930256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.949 [2024-07-24 18:22:18.930286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.949 qpair failed and we were unable to recover it.
00:27:25.949 [2024-07-24 18:22:18.930474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.949 [2024-07-24 18:22:18.930511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.949 qpair failed and we were unable to recover it.
00:27:25.949 [2024-07-24 18:22:18.930668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.949 [2024-07-24 18:22:18.930698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.949 qpair failed and we were unable to recover it.
00:27:25.949 [2024-07-24 18:22:18.930915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.949 [2024-07-24 18:22:18.930945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.949 qpair failed and we were unable to recover it.
00:27:25.949 [2024-07-24 18:22:18.931091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.949 [2024-07-24 18:22:18.931101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.949 qpair failed and we were unable to recover it.
00:27:25.949 [2024-07-24 18:22:18.931317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.949 [2024-07-24 18:22:18.931346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.949 qpair failed and we were unable to recover it.
00:27:25.949 [2024-07-24 18:22:18.931550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.949 [2024-07-24 18:22:18.931581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.949 qpair failed and we were unable to recover it.
00:27:25.949 [2024-07-24 18:22:18.931774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.949 [2024-07-24 18:22:18.931804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.949 qpair failed and we were unable to recover it.
00:27:25.949 [2024-07-24 18:22:18.932001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.949 [2024-07-24 18:22:18.932012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.949 qpair failed and we were unable to recover it.
00:27:25.949 [2024-07-24 18:22:18.932173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.949 [2024-07-24 18:22:18.932203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.949 qpair failed and we were unable to recover it.
00:27:25.949 [2024-07-24 18:22:18.932387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.949 [2024-07-24 18:22:18.932417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.949 qpair failed and we were unable to recover it.
00:27:25.949 [2024-07-24 18:22:18.932576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.949 [2024-07-24 18:22:18.932606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.949 qpair failed and we were unable to recover it.
00:27:25.949 [2024-07-24 18:22:18.932818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.949 [2024-07-24 18:22:18.932848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.949 qpair failed and we were unable to recover it.
00:27:25.949 [2024-07-24 18:22:18.933054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.949 [2024-07-24 18:22:18.933083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.949 qpair failed and we were unable to recover it.
00:27:25.949 [2024-07-24 18:22:18.933285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.949 [2024-07-24 18:22:18.933315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.949 qpair failed and we were unable to recover it.
00:27:25.949 [2024-07-24 18:22:18.933534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.949 [2024-07-24 18:22:18.933570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.949 qpair failed and we were unable to recover it.
00:27:25.949 [2024-07-24 18:22:18.933762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.950 [2024-07-24 18:22:18.933771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.950 qpair failed and we were unable to recover it.
00:27:25.950 [2024-07-24 18:22:18.933913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.950 [2024-07-24 18:22:18.933922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.950 qpair failed and we were unable to recover it.
00:27:25.950 [2024-07-24 18:22:18.934043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.950 [2024-07-24 18:22:18.934053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.950 qpair failed and we were unable to recover it.
00:27:25.950 [2024-07-24 18:22:18.934224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.950 [2024-07-24 18:22:18.934234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.950 qpair failed and we were unable to recover it.
00:27:25.950 [2024-07-24 18:22:18.934380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.950 [2024-07-24 18:22:18.934409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.950 qpair failed and we were unable to recover it.
00:27:25.950 [2024-07-24 18:22:18.934601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.950 [2024-07-24 18:22:18.934632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.950 qpair failed and we were unable to recover it.
00:27:25.950 [2024-07-24 18:22:18.934825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.950 [2024-07-24 18:22:18.934855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.950 qpair failed and we were unable to recover it.
00:27:25.950 [2024-07-24 18:22:18.934979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.950 [2024-07-24 18:22:18.934988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.950 qpair failed and we were unable to recover it.
00:27:25.950 [2024-07-24 18:22:18.935266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.950 [2024-07-24 18:22:18.935296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.950 qpair failed and we were unable to recover it.
00:27:25.950 [2024-07-24 18:22:18.935630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.950 [2024-07-24 18:22:18.935661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.950 qpair failed and we were unable to recover it.
00:27:25.950 [2024-07-24 18:22:18.935850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.950 [2024-07-24 18:22:18.935880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.950 qpair failed and we were unable to recover it.
00:27:25.950 [2024-07-24 18:22:18.936084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.950 [2024-07-24 18:22:18.936114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.950 qpair failed and we were unable to recover it.
00:27:25.950 [2024-07-24 18:22:18.936385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.950 [2024-07-24 18:22:18.936415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.950 qpair failed and we were unable to recover it.
00:27:25.950 [2024-07-24 18:22:18.936623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.950 [2024-07-24 18:22:18.936654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.950 qpair failed and we were unable to recover it.
00:27:25.950 [2024-07-24 18:22:18.936774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.950 [2024-07-24 18:22:18.936804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.950 qpair failed and we were unable to recover it.
00:27:25.950 [2024-07-24 18:22:18.936948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.950 [2024-07-24 18:22:18.936958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.950 qpair failed and we were unable to recover it.
00:27:25.950 [2024-07-24 18:22:18.937060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.950 [2024-07-24 18:22:18.937070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.950 qpair failed and we were unable to recover it.
00:27:25.950 [2024-07-24 18:22:18.937222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.950 [2024-07-24 18:22:18.937233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.950 qpair failed and we were unable to recover it.
00:27:25.950 [2024-07-24 18:22:18.937326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.950 [2024-07-24 18:22:18.937337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.950 qpair failed and we were unable to recover it.
00:27:25.950 [2024-07-24 18:22:18.937483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.950 [2024-07-24 18:22:18.937519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.950 qpair failed and we were unable to recover it.
00:27:25.950 [2024-07-24 18:22:18.937714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.950 [2024-07-24 18:22:18.937743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.950 qpair failed and we were unable to recover it.
00:27:25.950 [2024-07-24 18:22:18.937966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.950 [2024-07-24 18:22:18.937997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.950 qpair failed and we were unable to recover it.
00:27:25.950 [2024-07-24 18:22:18.938131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.950 [2024-07-24 18:22:18.938142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.950 qpair failed and we were unable to recover it.
00:27:25.950 [2024-07-24 18:22:18.938220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.950 [2024-07-24 18:22:18.938231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.950 qpair failed and we were unable to recover it.
00:27:25.950 [2024-07-24 18:22:18.938305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.950 [2024-07-24 18:22:18.938317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.950 qpair failed and we were unable to recover it.
00:27:25.950 [2024-07-24 18:22:18.938425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.950 [2024-07-24 18:22:18.938455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.950 qpair failed and we were unable to recover it.
00:27:25.950 [2024-07-24 18:22:18.938796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.950 [2024-07-24 18:22:18.938866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.950 qpair failed and we were unable to recover it.
00:27:25.950 [2024-07-24 18:22:18.939128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.950 [2024-07-24 18:22:18.939144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.950 qpair failed and we were unable to recover it.
00:27:25.950 [2024-07-24 18:22:18.939324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.950 [2024-07-24 18:22:18.939340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.950 qpair failed and we were unable to recover it.
00:27:25.950 [2024-07-24 18:22:18.939566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.950 [2024-07-24 18:22:18.939599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.950 qpair failed and we were unable to recover it.
00:27:25.950 [2024-07-24 18:22:18.939760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.950 [2024-07-24 18:22:18.939775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.950 qpair failed and we were unable to recover it.
00:27:25.950 [2024-07-24 18:22:18.939933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.950 [2024-07-24 18:22:18.939948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.950 qpair failed and we were unable to recover it.
00:27:25.950 [2024-07-24 18:22:18.940105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.950 [2024-07-24 18:22:18.940120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.950 qpair failed and we were unable to recover it.
00:27:25.950 [2024-07-24 18:22:18.940260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.950 [2024-07-24 18:22:18.940290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.950 qpair failed and we were unable to recover it.
00:27:25.950 [2024-07-24 18:22:18.940480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.950 [2024-07-24 18:22:18.940518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.950 qpair failed and we were unable to recover it.
00:27:25.950 [2024-07-24 18:22:18.940702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.950 [2024-07-24 18:22:18.940732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.950 qpair failed and we were unable to recover it.
00:27:25.950 [2024-07-24 18:22:18.940929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.950 [2024-07-24 18:22:18.940944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.950 qpair failed and we were unable to recover it.
00:27:25.950 [2024-07-24 18:22:18.941101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.950 [2024-07-24 18:22:18.941130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.950 qpair failed and we were unable to recover it.
00:27:25.951 [2024-07-24 18:22:18.941431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.951 [2024-07-24 18:22:18.941462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.951 qpair failed and we were unable to recover it.
00:27:25.951 [2024-07-24 18:22:18.941669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.951 [2024-07-24 18:22:18.941701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.951 qpair failed and we were unable to recover it.
00:27:25.951 [2024-07-24 18:22:18.941911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.951 [2024-07-24 18:22:18.941942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.951 qpair failed and we were unable to recover it.
00:27:25.951 [2024-07-24 18:22:18.942084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.951 [2024-07-24 18:22:18.942113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.951 qpair failed and we were unable to recover it.
00:27:25.951 [2024-07-24 18:22:18.942243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.951 [2024-07-24 18:22:18.942274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.951 qpair failed and we were unable to recover it.
00:27:25.951 [2024-07-24 18:22:18.942467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.951 [2024-07-24 18:22:18.942510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.951 qpair failed and we were unable to recover it.
00:27:25.951 [2024-07-24 18:22:18.942742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.951 [2024-07-24 18:22:18.942772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.951 qpair failed and we were unable to recover it.
00:27:25.951 [2024-07-24 18:22:18.942900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.951 [2024-07-24 18:22:18.942930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.951 qpair failed and we were unable to recover it.
00:27:25.951 [2024-07-24 18:22:18.943050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.951 [2024-07-24 18:22:18.943080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.951 qpair failed and we were unable to recover it.
00:27:25.951 [2024-07-24 18:22:18.943208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.951 [2024-07-24 18:22:18.943238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.951 qpair failed and we were unable to recover it.
00:27:25.951 [2024-07-24 18:22:18.943368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.951 [2024-07-24 18:22:18.943398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.951 qpair failed and we were unable to recover it.
00:27:25.951 [2024-07-24 18:22:18.943581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.951 [2024-07-24 18:22:18.943613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.951 qpair failed and we were unable to recover it.
00:27:25.951 [2024-07-24 18:22:18.943887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.951 [2024-07-24 18:22:18.943917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.951 qpair failed and we were unable to recover it.
00:27:25.951 [2024-07-24 18:22:18.944100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.951 [2024-07-24 18:22:18.944131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.951 qpair failed and we were unable to recover it.
00:27:25.951 [2024-07-24 18:22:18.944352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.951 [2024-07-24 18:22:18.944382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:25.951 qpair failed and we were unable to recover it.
00:27:25.951 [2024-07-24 18:22:18.944539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.951 [2024-07-24 18:22:18.944573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.951 qpair failed and we were unable to recover it.
00:27:25.951 [2024-07-24 18:22:18.944854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.951 [2024-07-24 18:22:18.944884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.951 qpair failed and we were unable to recover it.
00:27:25.951 [2024-07-24 18:22:18.945016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.951 [2024-07-24 18:22:18.945026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.951 qpair failed and we were unable to recover it.
00:27:25.951 [2024-07-24 18:22:18.945165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.951 [2024-07-24 18:22:18.945175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.951 qpair failed and we were unable to recover it.
00:27:25.951 [2024-07-24 18:22:18.945373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.951 [2024-07-24 18:22:18.945384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.951 qpair failed and we were unable to recover it.
00:27:25.951 [2024-07-24 18:22:18.945552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.951 [2024-07-24 18:22:18.945562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.951 qpair failed and we were unable to recover it.
00:27:25.951 [2024-07-24 18:22:18.945668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.951 [2024-07-24 18:22:18.945700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.951 qpair failed and we were unable to recover it.
00:27:25.951 [2024-07-24 18:22:18.945837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.951 [2024-07-24 18:22:18.945867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.951 qpair failed and we were unable to recover it.
00:27:25.951 [2024-07-24 18:22:18.946017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.951 [2024-07-24 18:22:18.946047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.951 qpair failed and we were unable to recover it.
00:27:25.951 [2024-07-24 18:22:18.946253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.951 [2024-07-24 18:22:18.946283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.951 qpair failed and we were unable to recover it.
00:27:25.951 [2024-07-24 18:22:18.946478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.951 [2024-07-24 18:22:18.946517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.951 qpair failed and we were unable to recover it.
00:27:25.951 [2024-07-24 18:22:18.946711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.951 [2024-07-24 18:22:18.946741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.951 qpair failed and we were unable to recover it.
00:27:25.951 [2024-07-24 18:22:18.946910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.951 [2024-07-24 18:22:18.946940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.951 qpair failed and we were unable to recover it.
00:27:25.951 [2024-07-24 18:22:18.947171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.951 [2024-07-24 18:22:18.947208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.951 qpair failed and we were unable to recover it.
00:27:25.951 [2024-07-24 18:22:18.947413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.951 [2024-07-24 18:22:18.947443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.951 qpair failed and we were unable to recover it.
00:27:25.951 [2024-07-24 18:22:18.947755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.951 [2024-07-24 18:22:18.947787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.951 qpair failed and we were unable to recover it.
00:27:25.951 [2024-07-24 18:22:18.947931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.951 [2024-07-24 18:22:18.947962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.952 qpair failed and we were unable to recover it.
00:27:25.952 [2024-07-24 18:22:18.948246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.952 [2024-07-24 18:22:18.948277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.952 qpair failed and we were unable to recover it.
00:27:25.952 [2024-07-24 18:22:18.948559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.952 [2024-07-24 18:22:18.948591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.952 qpair failed and we were unable to recover it.
00:27:25.952 [2024-07-24 18:22:18.948850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.952 [2024-07-24 18:22:18.948860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.952 qpair failed and we were unable to recover it.
00:27:25.952 [2024-07-24 18:22:18.948965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.952 [2024-07-24 18:22:18.948994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.952 qpair failed and we were unable to recover it.
00:27:25.952 [2024-07-24 18:22:18.949152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.952 [2024-07-24 18:22:18.949182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.952 qpair failed and we were unable to recover it.
00:27:25.952 [2024-07-24 18:22:18.949482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.952 [2024-07-24 18:22:18.949541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.952 qpair failed and we were unable to recover it.
00:27:25.952 [2024-07-24 18:22:18.949809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.952 [2024-07-24 18:22:18.949839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.952 qpair failed and we were unable to recover it.
00:27:25.952 [2024-07-24 18:22:18.950100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.952 [2024-07-24 18:22:18.950130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.952 qpair failed and we were unable to recover it.
00:27:25.952 [2024-07-24 18:22:18.950333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.952 [2024-07-24 18:22:18.950362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.952 qpair failed and we were unable to recover it.
00:27:25.952 [2024-07-24 18:22:18.950611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.952 [2024-07-24 18:22:18.950643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.952 qpair failed and we were unable to recover it.
00:27:25.952 [2024-07-24 18:22:18.950895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.952 [2024-07-24 18:22:18.950905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.952 qpair failed and we were unable to recover it.
00:27:25.952 [2024-07-24 18:22:18.951083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.952 [2024-07-24 18:22:18.951113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:25.952 qpair failed and we were unable to recover it.
00:27:25.952 [2024-07-24 18:22:18.951268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.952 [2024-07-24 18:22:18.951299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.952 qpair failed and we were unable to recover it. 00:27:25.952 [2024-07-24 18:22:18.951526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.952 [2024-07-24 18:22:18.951558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.952 qpair failed and we were unable to recover it. 00:27:25.952 [2024-07-24 18:22:18.951752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.952 [2024-07-24 18:22:18.951782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.952 qpair failed and we were unable to recover it. 00:27:25.952 [2024-07-24 18:22:18.951987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.952 [2024-07-24 18:22:18.952017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.952 qpair failed and we were unable to recover it. 00:27:25.952 [2024-07-24 18:22:18.952219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.952 [2024-07-24 18:22:18.952249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.952 qpair failed and we were unable to recover it. 00:27:25.952 [2024-07-24 18:22:18.952511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.952 [2024-07-24 18:22:18.952542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.952 qpair failed and we were unable to recover it. 00:27:25.952 [2024-07-24 18:22:18.952737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.952 [2024-07-24 18:22:18.952768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.952 qpair failed and we were unable to recover it. 00:27:25.952 [2024-07-24 18:22:18.953019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.952 [2024-07-24 18:22:18.953049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.952 qpair failed and we were unable to recover it. 00:27:25.952 [2024-07-24 18:22:18.953247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.952 [2024-07-24 18:22:18.953277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.952 qpair failed and we were unable to recover it. 00:27:25.952 [2024-07-24 18:22:18.953528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.952 [2024-07-24 18:22:18.953560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.952 qpair failed and we were unable to recover it. 
00:27:25.952 [2024-07-24 18:22:18.953700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.952 [2024-07-24 18:22:18.953730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.952 qpair failed and we were unable to recover it. 00:27:25.952 [2024-07-24 18:22:18.953978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.952 [2024-07-24 18:22:18.954021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.952 qpair failed and we were unable to recover it. 00:27:25.952 [2024-07-24 18:22:18.954134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.952 [2024-07-24 18:22:18.954144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.952 qpair failed and we were unable to recover it. 00:27:25.952 [2024-07-24 18:22:18.954366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.952 [2024-07-24 18:22:18.954376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.952 qpair failed and we were unable to recover it. 00:27:25.952 [2024-07-24 18:22:18.954635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.952 [2024-07-24 18:22:18.954645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.952 qpair failed and we were unable to recover it. 00:27:25.952 [2024-07-24 18:22:18.954822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.952 [2024-07-24 18:22:18.954832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.952 qpair failed and we were unable to recover it. 00:27:25.952 [2024-07-24 18:22:18.954988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.952 [2024-07-24 18:22:18.954998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.952 qpair failed and we were unable to recover it. 00:27:25.952 [2024-07-24 18:22:18.955226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.952 [2024-07-24 18:22:18.955235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.952 qpair failed and we were unable to recover it. 00:27:25.952 [2024-07-24 18:22:18.955426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.952 [2024-07-24 18:22:18.955436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.952 qpair failed and we were unable to recover it. 00:27:25.952 [2024-07-24 18:22:18.955576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.952 [2024-07-24 18:22:18.955589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.952 qpair failed and we were unable to recover it. 
00:27:25.952 [2024-07-24 18:22:18.955723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.952 [2024-07-24 18:22:18.955733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.952 qpair failed and we were unable to recover it. 00:27:25.952 [2024-07-24 18:22:18.955887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.952 [2024-07-24 18:22:18.955897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.952 qpair failed and we were unable to recover it. 00:27:25.952 [2024-07-24 18:22:18.955988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.952 [2024-07-24 18:22:18.955998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.952 qpair failed and we were unable to recover it. 00:27:25.952 [2024-07-24 18:22:18.956167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.952 [2024-07-24 18:22:18.956197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.952 qpair failed and we were unable to recover it. 00:27:25.952 [2024-07-24 18:22:18.956342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.952 [2024-07-24 18:22:18.956372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.952 qpair failed and we were unable to recover it. 00:27:25.952 [2024-07-24 18:22:18.956618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.953 [2024-07-24 18:22:18.956649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.953 qpair failed and we were unable to recover it. 00:27:25.953 [2024-07-24 18:22:18.956832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.953 [2024-07-24 18:22:18.956867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.953 qpair failed and we were unable to recover it. 00:27:25.953 [2024-07-24 18:22:18.957024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.953 [2024-07-24 18:22:18.957034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.953 qpair failed and we were unable to recover it. 00:27:25.953 [2024-07-24 18:22:18.957191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.953 [2024-07-24 18:22:18.957201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.953 qpair failed and we were unable to recover it. 00:27:25.953 [2024-07-24 18:22:18.957442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.953 [2024-07-24 18:22:18.957471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.953 qpair failed and we were unable to recover it. 
00:27:25.953 [2024-07-24 18:22:18.957649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.953 [2024-07-24 18:22:18.957660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.953 qpair failed and we were unable to recover it. 00:27:25.953 [2024-07-24 18:22:18.957761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.953 [2024-07-24 18:22:18.957771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.953 qpair failed and we were unable to recover it. 00:27:25.953 [2024-07-24 18:22:18.957882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.953 [2024-07-24 18:22:18.957912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.953 qpair failed and we were unable to recover it. 00:27:25.953 [2024-07-24 18:22:18.958239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.953 [2024-07-24 18:22:18.958270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.953 qpair failed and we were unable to recover it. 00:27:25.953 [2024-07-24 18:22:18.958482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.953 [2024-07-24 18:22:18.958524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.953 qpair failed and we were unable to recover it. 00:27:25.953 [2024-07-24 18:22:18.958654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.953 [2024-07-24 18:22:18.958684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.953 qpair failed and we were unable to recover it. 00:27:25.953 [2024-07-24 18:22:18.958957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.953 [2024-07-24 18:22:18.958988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.953 qpair failed and we were unable to recover it. 00:27:25.953 [2024-07-24 18:22:18.959275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.953 [2024-07-24 18:22:18.959304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.953 qpair failed and we were unable to recover it. 00:27:25.953 [2024-07-24 18:22:18.959594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.953 [2024-07-24 18:22:18.959627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.953 qpair failed and we were unable to recover it. 00:27:25.953 [2024-07-24 18:22:18.959878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.953 [2024-07-24 18:22:18.959909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.953 qpair failed and we were unable to recover it. 
00:27:25.953 [2024-07-24 18:22:18.960066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.953 [2024-07-24 18:22:18.960096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.953 qpair failed and we were unable to recover it. 00:27:25.953 [2024-07-24 18:22:18.960322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.953 [2024-07-24 18:22:18.960331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.953 qpair failed and we were unable to recover it. 00:27:25.953 [2024-07-24 18:22:18.960488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.953 [2024-07-24 18:22:18.960527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.953 qpair failed and we were unable to recover it. 00:27:25.953 [2024-07-24 18:22:18.960715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.953 [2024-07-24 18:22:18.960745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.953 qpair failed and we were unable to recover it. 00:27:25.953 [2024-07-24 18:22:18.960862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.953 [2024-07-24 18:22:18.960891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.953 qpair failed and we were unable to recover it. 00:27:25.953 [2024-07-24 18:22:18.961137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.953 [2024-07-24 18:22:18.961147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.953 qpair failed and we were unable to recover it. 00:27:25.953 [2024-07-24 18:22:18.961346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.953 [2024-07-24 18:22:18.961356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.953 qpair failed and we were unable to recover it. 00:27:25.953 [2024-07-24 18:22:18.961436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.953 [2024-07-24 18:22:18.961445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.953 qpair failed and we were unable to recover it. 00:27:25.953 [2024-07-24 18:22:18.961617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.953 [2024-07-24 18:22:18.961627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.953 qpair failed and we were unable to recover it. 00:27:25.953 [2024-07-24 18:22:18.961832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.953 [2024-07-24 18:22:18.961862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.953 qpair failed and we were unable to recover it. 
00:27:25.953 [2024-07-24 18:22:18.962092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.953 [2024-07-24 18:22:18.962122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.953 qpair failed and we were unable to recover it. 00:27:25.953 [2024-07-24 18:22:18.962319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.953 [2024-07-24 18:22:18.962354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.953 qpair failed and we were unable to recover it. 00:27:25.953 [2024-07-24 18:22:18.962516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.953 [2024-07-24 18:22:18.962546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.953 qpair failed and we were unable to recover it. 00:27:25.953 [2024-07-24 18:22:18.962740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.953 [2024-07-24 18:22:18.962769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.953 qpair failed and we were unable to recover it. 00:27:25.953 [2024-07-24 18:22:18.962952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.953 [2024-07-24 18:22:18.962982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.953 qpair failed and we were unable to recover it. 00:27:25.953 [2024-07-24 18:22:18.963120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.953 [2024-07-24 18:22:18.963150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.953 qpair failed and we were unable to recover it. 00:27:25.953 [2024-07-24 18:22:18.963395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.953 [2024-07-24 18:22:18.963405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.953 qpair failed and we were unable to recover it. 00:27:25.953 [2024-07-24 18:22:18.963559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.953 [2024-07-24 18:22:18.963570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.953 qpair failed and we were unable to recover it. 00:27:25.953 [2024-07-24 18:22:18.963733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.953 [2024-07-24 18:22:18.963742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.953 qpair failed and we were unable to recover it. 00:27:25.953 [2024-07-24 18:22:18.963905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.953 [2024-07-24 18:22:18.963935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.953 qpair failed and we were unable to recover it. 
00:27:25.953 [2024-07-24 18:22:18.964139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.953 [2024-07-24 18:22:18.964169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.953 qpair failed and we were unable to recover it. 00:27:25.953 [2024-07-24 18:22:18.964439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.953 [2024-07-24 18:22:18.964469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.953 qpair failed and we were unable to recover it. 00:27:25.953 [2024-07-24 18:22:18.964664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.953 [2024-07-24 18:22:18.964695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.953 qpair failed and we were unable to recover it. 00:27:25.954 [2024-07-24 18:22:18.964972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.954 [2024-07-24 18:22:18.965002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.954 qpair failed and we were unable to recover it. 00:27:25.954 [2024-07-24 18:22:18.965356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.954 [2024-07-24 18:22:18.965366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.954 qpair failed and we were unable to recover it. 00:27:25.954 [2024-07-24 18:22:18.965577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.954 [2024-07-24 18:22:18.965587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.954 qpair failed and we were unable to recover it. 00:27:25.954 [2024-07-24 18:22:18.965746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.954 [2024-07-24 18:22:18.965756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.954 qpair failed and we were unable to recover it. 00:27:25.954 [2024-07-24 18:22:18.965964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.954 [2024-07-24 18:22:18.965994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.954 qpair failed and we were unable to recover it. 00:27:25.954 [2024-07-24 18:22:18.966198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.954 [2024-07-24 18:22:18.966228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.954 qpair failed and we were unable to recover it. 00:27:25.954 [2024-07-24 18:22:18.966364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.954 [2024-07-24 18:22:18.966393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.954 qpair failed and we were unable to recover it. 
00:27:25.954 [2024-07-24 18:22:18.966537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.954 [2024-07-24 18:22:18.966567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.954 qpair failed and we were unable to recover it. 00:27:25.954 [2024-07-24 18:22:18.966711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.954 [2024-07-24 18:22:18.966742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.954 qpair failed and we were unable to recover it. 00:27:25.954 [2024-07-24 18:22:18.966877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.954 [2024-07-24 18:22:18.966906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.954 qpair failed and we were unable to recover it. 00:27:25.954 [2024-07-24 18:22:18.967037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.954 [2024-07-24 18:22:18.967066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.954 qpair failed and we were unable to recover it. 00:27:25.954 [2024-07-24 18:22:18.967277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.954 [2024-07-24 18:22:18.967287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.954 qpair failed and we were unable to recover it. 00:27:25.954 [2024-07-24 18:22:18.967435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.954 [2024-07-24 18:22:18.967444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:25.954 qpair failed and we were unable to recover it. 00:27:26.239 [2024-07-24 18:22:18.967646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.239 [2024-07-24 18:22:18.967656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.239 qpair failed and we were unable to recover it. 00:27:26.239 [2024-07-24 18:22:18.967753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.239 [2024-07-24 18:22:18.967763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.239 qpair failed and we were unable to recover it. 00:27:26.239 [2024-07-24 18:22:18.967972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.239 [2024-07-24 18:22:18.967982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.239 qpair failed and we were unable to recover it. 00:27:26.239 [2024-07-24 18:22:18.968067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.239 [2024-07-24 18:22:18.968076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.239 qpair failed and we were unable to recover it. 
00:27:26.239 [2024-07-24 18:22:18.968344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.239 [2024-07-24 18:22:18.968355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.239 qpair failed and we were unable to recover it. 00:27:26.239 [2024-07-24 18:22:18.968513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.239 [2024-07-24 18:22:18.968523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.239 qpair failed and we were unable to recover it. 00:27:26.239 [2024-07-24 18:22:18.968693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.239 [2024-07-24 18:22:18.968702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.239 qpair failed and we were unable to recover it. 00:27:26.239 [2024-07-24 18:22:18.968837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.239 [2024-07-24 18:22:18.968847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.239 qpair failed and we were unable to recover it. 00:27:26.239 [2024-07-24 18:22:18.968938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.239 [2024-07-24 18:22:18.968947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.239 qpair failed and we were unable to recover it. 00:27:26.239 [2024-07-24 18:22:18.969123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.239 [2024-07-24 18:22:18.969133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.239 qpair failed and we were unable to recover it. 00:27:26.239 [2024-07-24 18:22:18.969216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.239 [2024-07-24 18:22:18.969226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.239 qpair failed and we were unable to recover it. 00:27:26.239 [2024-07-24 18:22:18.969387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.239 [2024-07-24 18:22:18.969398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.239 qpair failed and we were unable to recover it. 00:27:26.239 [2024-07-24 18:22:18.969467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.239 [2024-07-24 18:22:18.969477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.239 qpair failed and we were unable to recover it. 00:27:26.239 [2024-07-24 18:22:18.969585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.239 [2024-07-24 18:22:18.969595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.239 qpair failed and we were unable to recover it. 
00:27:26.239 [2024-07-24 18:22:18.969697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.239 [2024-07-24 18:22:18.969706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.239 qpair failed and we were unable to recover it. 00:27:26.239 [2024-07-24 18:22:18.969807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.239 [2024-07-24 18:22:18.969821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.239 qpair failed and we were unable to recover it. 00:27:26.239 [2024-07-24 18:22:18.969920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.239 [2024-07-24 18:22:18.969929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.239 qpair failed and we were unable to recover it. 00:27:26.239 [2024-07-24 18:22:18.970035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.239 [2024-07-24 18:22:18.970046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.239 qpair failed and we were unable to recover it. 00:27:26.239 [2024-07-24 18:22:18.970319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.239 [2024-07-24 18:22:18.970328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.239 qpair failed and we were unable to recover it. 00:27:26.239 [2024-07-24 18:22:18.970418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.239 [2024-07-24 18:22:18.970428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.239 qpair failed and we were unable to recover it. 00:27:26.239 [2024-07-24 18:22:18.970616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.239 [2024-07-24 18:22:18.970627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.239 qpair failed and we were unable to recover it. 00:27:26.239 [2024-07-24 18:22:18.970805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.239 [2024-07-24 18:22:18.970815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.239 qpair failed and we were unable to recover it. 00:27:26.239 [2024-07-24 18:22:18.970973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.239 [2024-07-24 18:22:18.970983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.239 qpair failed and we were unable to recover it. 00:27:26.239 [2024-07-24 18:22:18.971081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.239 [2024-07-24 18:22:18.971091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.239 qpair failed and we were unable to recover it. 
00:27:26.239 [2024-07-24 18:22:18.971318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.239 [2024-07-24 18:22:18.971328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.239 qpair failed and we were unable to recover it. 00:27:26.239 [2024-07-24 18:22:18.971502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.239 [2024-07-24 18:22:18.971512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.239 qpair failed and we were unable to recover it. 00:27:26.239 [2024-07-24 18:22:18.971679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.239 [2024-07-24 18:22:18.971689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.239 qpair failed and we were unable to recover it. 00:27:26.239 [2024-07-24 18:22:18.971863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.239 [2024-07-24 18:22:18.971873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.239 qpair failed and we were unable to recover it. 00:27:26.239 [2024-07-24 18:22:18.972024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.239 [2024-07-24 18:22:18.972034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.239 qpair failed and we were unable to recover it. 00:27:26.239 [2024-07-24 18:22:18.972323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.239 [2024-07-24 18:22:18.972333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.239 qpair failed and we were unable to recover it. 00:27:26.239 [2024-07-24 18:22:18.972591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.239 [2024-07-24 18:22:18.972601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.239 qpair failed and we were unable to recover it. 00:27:26.239 [2024-07-24 18:22:18.972755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.239 [2024-07-24 18:22:18.972764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.239 qpair failed and we were unable to recover it. 00:27:26.239 [2024-07-24 18:22:18.972912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.239 [2024-07-24 18:22:18.972941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.239 qpair failed and we were unable to recover it. 00:27:26.239 [2024-07-24 18:22:18.973214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.239 [2024-07-24 18:22:18.973244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.239 qpair failed and we were unable to recover it. 
00:27:26.239 [2024-07-24 18:22:18.973444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.239 [2024-07-24 18:22:18.973475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.239 qpair failed and we were unable to recover it. 00:27:26.239 [2024-07-24 18:22:18.973638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.239 [2024-07-24 18:22:18.973669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.239 qpair failed and we were unable to recover it. 00:27:26.239 [2024-07-24 18:22:18.973939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.240 [2024-07-24 18:22:18.973968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.240 qpair failed and we were unable to recover it. 00:27:26.240 [2024-07-24 18:22:18.974285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.240 [2024-07-24 18:22:18.974315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.240 qpair failed and we were unable to recover it. 00:27:26.240 [2024-07-24 18:22:18.974590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.240 [2024-07-24 18:22:18.974621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.240 qpair failed and we were unable to recover it. 00:27:26.240 [2024-07-24 18:22:18.974756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.240 [2024-07-24 18:22:18.974786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.240 qpair failed and we were unable to recover it. 00:27:26.240 [2024-07-24 18:22:18.974982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.240 [2024-07-24 18:22:18.975012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.240 qpair failed and we were unable to recover it. 00:27:26.240 [2024-07-24 18:22:18.975235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.240 [2024-07-24 18:22:18.975264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.240 qpair failed and we were unable to recover it. 00:27:26.240 [2024-07-24 18:22:18.975483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.240 [2024-07-24 18:22:18.975522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.240 qpair failed and we were unable to recover it. 00:27:26.240 [2024-07-24 18:22:18.975678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.240 [2024-07-24 18:22:18.975707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.240 qpair failed and we were unable to recover it. 
00:27:26.240 [2024-07-24 18:22:18.975920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.240 [2024-07-24 18:22:18.975949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.240 qpair failed and we were unable to recover it. 00:27:26.240 [2024-07-24 18:22:18.976211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.240 [2024-07-24 18:22:18.976240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.240 qpair failed and we were unable to recover it. 00:27:26.240 [2024-07-24 18:22:18.976441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.240 [2024-07-24 18:22:18.976469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.240 qpair failed and we were unable to recover it. 00:27:26.240 [2024-07-24 18:22:18.976679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.240 [2024-07-24 18:22:18.976709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.240 qpair failed and we were unable to recover it. 00:27:26.240 [2024-07-24 18:22:18.976947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.240 [2024-07-24 18:22:18.976976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.240 qpair failed and we were unable to recover it. 00:27:26.240 [2024-07-24 18:22:18.977253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.240 [2024-07-24 18:22:18.977288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.240 qpair failed and we were unable to recover it. 00:27:26.240 [2024-07-24 18:22:18.977507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.240 [2024-07-24 18:22:18.977534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.240 qpair failed and we were unable to recover it. 00:27:26.240 [2024-07-24 18:22:18.977637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.240 [2024-07-24 18:22:18.977647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.240 qpair failed and we were unable to recover it. 00:27:26.240 [2024-07-24 18:22:18.977805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.240 [2024-07-24 18:22:18.977816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.240 qpair failed and we were unable to recover it. 00:27:26.240 [2024-07-24 18:22:18.977969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.240 [2024-07-24 18:22:18.977998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.240 qpair failed and we were unable to recover it. 
00:27:26.240 [2024-07-24 18:22:18.978281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.240 [2024-07-24 18:22:18.978310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.240 qpair failed and we were unable to recover it. 00:27:26.240 [2024-07-24 18:22:18.978543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.240 [2024-07-24 18:22:18.978580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.240 qpair failed and we were unable to recover it. 00:27:26.240 [2024-07-24 18:22:18.978776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.240 [2024-07-24 18:22:18.978805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.240 qpair failed and we were unable to recover it. 00:27:26.240 [2024-07-24 18:22:18.979073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.240 [2024-07-24 18:22:18.979083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.240 qpair failed and we were unable to recover it. 00:27:26.240 [2024-07-24 18:22:18.979321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.240 [2024-07-24 18:22:18.979331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.240 qpair failed and we were unable to recover it. 00:27:26.240 [2024-07-24 18:22:18.979567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.240 [2024-07-24 18:22:18.979597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.240 qpair failed and we were unable to recover it. 00:27:26.240 [2024-07-24 18:22:18.979873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.240 [2024-07-24 18:22:18.979903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.240 qpair failed and we were unable to recover it. 00:27:26.240 [2024-07-24 18:22:18.980142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.240 [2024-07-24 18:22:18.980186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.240 qpair failed and we were unable to recover it. 00:27:26.240 [2024-07-24 18:22:18.980391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.240 [2024-07-24 18:22:18.980400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.240 qpair failed and we were unable to recover it. 00:27:26.240 [2024-07-24 18:22:18.980574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.240 [2024-07-24 18:22:18.980583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.240 qpair failed and we were unable to recover it. 
00:27:26.240 [2024-07-24 18:22:18.980815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.240 [2024-07-24 18:22:18.980826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.240 qpair failed and we were unable to recover it. 00:27:26.240 [2024-07-24 18:22:18.981019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.240 [2024-07-24 18:22:18.981049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.240 qpair failed and we were unable to recover it. 00:27:26.240 [2024-07-24 18:22:18.981388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.240 [2024-07-24 18:22:18.981418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.240 qpair failed and we were unable to recover it. 00:27:26.240 [2024-07-24 18:22:18.981675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.240 [2024-07-24 18:22:18.981706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.240 qpair failed and we were unable to recover it. 00:27:26.240 [2024-07-24 18:22:18.981878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.240 [2024-07-24 18:22:18.981888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.240 qpair failed and we were unable to recover it. 00:27:26.240 [2024-07-24 18:22:18.981981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.240 [2024-07-24 18:22:18.982013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.240 qpair failed and we were unable to recover it. 00:27:26.240 [2024-07-24 18:22:18.982294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.240 [2024-07-24 18:22:18.982324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.240 qpair failed and we were unable to recover it. 00:27:26.240 [2024-07-24 18:22:18.982544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.240 [2024-07-24 18:22:18.982575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.240 qpair failed and we were unable to recover it. 00:27:26.240 [2024-07-24 18:22:18.982777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.240 [2024-07-24 18:22:18.982807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.240 qpair failed and we were unable to recover it. 00:27:26.240 [2024-07-24 18:22:18.982930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.240 [2024-07-24 18:22:18.982939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.240 qpair failed and we were unable to recover it. 
00:27:26.241 [2024-07-24 18:22:18.983081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.241 [2024-07-24 18:22:18.983092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.241 qpair failed and we were unable to recover it. 00:27:26.241 [2024-07-24 18:22:18.983349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.241 [2024-07-24 18:22:18.983394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.241 qpair failed and we were unable to recover it. 00:27:26.241 [2024-07-24 18:22:18.983634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.241 [2024-07-24 18:22:18.983666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.241 qpair failed and we were unable to recover it. 00:27:26.241 [2024-07-24 18:22:18.983905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.241 [2024-07-24 18:22:18.983915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.241 qpair failed and we were unable to recover it. 00:27:26.241 [2024-07-24 18:22:18.984023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.241 [2024-07-24 18:22:18.984032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.241 qpair failed and we were unable to recover it. 00:27:26.241 [2024-07-24 18:22:18.984204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.241 [2024-07-24 18:22:18.984235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.241 qpair failed and we were unable to recover it. 00:27:26.241 [2024-07-24 18:22:18.984438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.241 [2024-07-24 18:22:18.984467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.241 qpair failed and we were unable to recover it. 00:27:26.241 [2024-07-24 18:22:18.984616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.241 [2024-07-24 18:22:18.984649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.241 qpair failed and we were unable to recover it. 00:27:26.241 [2024-07-24 18:22:18.984866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.241 [2024-07-24 18:22:18.984897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.241 qpair failed and we were unable to recover it. 00:27:26.241 [2024-07-24 18:22:18.985045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.241 [2024-07-24 18:22:18.985073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.241 qpair failed and we were unable to recover it. 
00:27:26.244 [2024-07-24 18:22:19.015218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.244 [2024-07-24 18:22:19.015288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7834000b90 with addr=10.0.0.2, port=4420
00:27:26.244 qpair failed and we were unable to recover it.
00:27:26.244 [... same failure sequence repeated for tqpair=0x7f7834000b90 (addr=10.0.0.2, port=4420) through 18:22:19.018926 ...]
00:27:26.244 [2024-07-24 18:22:19.019213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.244 [2024-07-24 18:22:19.019225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.245 qpair failed and we were unable to recover it.
00:27:26.245 [... same failure sequence repeated for tqpair=0x7f783c000b90 (addr=10.0.0.2, port=4420) from 18:22:19.019370 through 18:22:19.035356, each attempt ending "qpair failed and we were unable to recover it." ...]
00:27:26.246 [2024-07-24 18:22:19.035602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.246 [2024-07-24 18:22:19.035634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.246 qpair failed and we were unable to recover it. 00:27:26.246 [2024-07-24 18:22:19.035791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.246 [2024-07-24 18:22:19.035821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.246 qpair failed and we were unable to recover it. 00:27:26.246 [2024-07-24 18:22:19.035989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.246 [2024-07-24 18:22:19.035999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.246 qpair failed and we were unable to recover it. 00:27:26.246 [2024-07-24 18:22:19.036189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.246 [2024-07-24 18:22:19.036219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.246 qpair failed and we were unable to recover it. 00:27:26.246 [2024-07-24 18:22:19.036422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.246 [2024-07-24 18:22:19.036451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.246 qpair failed and we were unable to recover it. 00:27:26.246 [2024-07-24 18:22:19.036672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.246 [2024-07-24 18:22:19.036703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.246 qpair failed and we were unable to recover it. 00:27:26.246 [2024-07-24 18:22:19.036958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.246 [2024-07-24 18:22:19.036989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.246 qpair failed and we were unable to recover it. 00:27:26.246 [2024-07-24 18:22:19.037266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.246 [2024-07-24 18:22:19.037296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.246 qpair failed and we were unable to recover it. 00:27:26.246 [2024-07-24 18:22:19.037508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.246 [2024-07-24 18:22:19.037539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.246 qpair failed and we were unable to recover it. 00:27:26.247 [2024-07-24 18:22:19.037678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.247 [2024-07-24 18:22:19.037708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.247 qpair failed and we were unable to recover it. 
00:27:26.247 [2024-07-24 18:22:19.037995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.247 [2024-07-24 18:22:19.038025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.247 qpair failed and we were unable to recover it. 00:27:26.247 [2024-07-24 18:22:19.038317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.247 [2024-07-24 18:22:19.038352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.247 qpair failed and we were unable to recover it. 00:27:26.247 [2024-07-24 18:22:19.038565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.247 [2024-07-24 18:22:19.038575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.247 qpair failed and we were unable to recover it. 00:27:26.247 [2024-07-24 18:22:19.038677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.247 [2024-07-24 18:22:19.038686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.247 qpair failed and we were unable to recover it. 00:27:26.247 [2024-07-24 18:22:19.038893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.247 [2024-07-24 18:22:19.038904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.247 qpair failed and we were unable to recover it. 00:27:26.247 [2024-07-24 18:22:19.039062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.247 [2024-07-24 18:22:19.039072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.247 qpair failed and we were unable to recover it. 00:27:26.247 [2024-07-24 18:22:19.039305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.247 [2024-07-24 18:22:19.039316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.247 qpair failed and we were unable to recover it. 00:27:26.247 [2024-07-24 18:22:19.039527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.247 [2024-07-24 18:22:19.039538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.247 qpair failed and we were unable to recover it. 00:27:26.247 [2024-07-24 18:22:19.039626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.247 [2024-07-24 18:22:19.039636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.247 qpair failed and we were unable to recover it. 00:27:26.247 [2024-07-24 18:22:19.039823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.247 [2024-07-24 18:22:19.039860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:26.247 qpair failed and we were unable to recover it. 
00:27:26.247 [2024-07-24 18:22:19.040000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.247 [2024-07-24 18:22:19.040035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:26.247 qpair failed and we were unable to recover it. 00:27:26.247 [2024-07-24 18:22:19.040343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.247 [2024-07-24 18:22:19.040376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:26.247 qpair failed and we were unable to recover it. 00:27:26.247 [2024-07-24 18:22:19.040648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.247 [2024-07-24 18:22:19.040681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:26.247 qpair failed and we were unable to recover it. 00:27:26.247 [2024-07-24 18:22:19.040888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.247 [2024-07-24 18:22:19.040920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:26.247 qpair failed and we were unable to recover it. 00:27:26.247 [2024-07-24 18:22:19.041143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.247 [2024-07-24 18:22:19.041174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:26.247 qpair failed and we were unable to recover it. 00:27:26.247 [2024-07-24 18:22:19.041413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.247 [2024-07-24 18:22:19.041428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:26.247 qpair failed and we were unable to recover it. 00:27:26.247 [2024-07-24 18:22:19.041526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.247 [2024-07-24 18:22:19.041542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:26.247 qpair failed and we were unable to recover it. 00:27:26.247 [2024-07-24 18:22:19.041707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.247 [2024-07-24 18:22:19.041737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:26.247 qpair failed and we were unable to recover it. 00:27:26.247 [2024-07-24 18:22:19.041988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.247 [2024-07-24 18:22:19.042018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:26.247 qpair failed and we were unable to recover it. 00:27:26.247 [2024-07-24 18:22:19.042304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.247 [2024-07-24 18:22:19.042344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:26.247 qpair failed and we were unable to recover it. 
00:27:26.247 [2024-07-24 18:22:19.042613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.247 [2024-07-24 18:22:19.042628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:26.247 qpair failed and we were unable to recover it. 00:27:26.247 [2024-07-24 18:22:19.042740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.247 [2024-07-24 18:22:19.042752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.247 qpair failed and we were unable to recover it. 00:27:26.247 [2024-07-24 18:22:19.042981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.247 [2024-07-24 18:22:19.042991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.247 qpair failed and we were unable to recover it. 00:27:26.247 [2024-07-24 18:22:19.043240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.247 [2024-07-24 18:22:19.043270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.247 qpair failed and we were unable to recover it. 00:27:26.247 [2024-07-24 18:22:19.043523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.247 [2024-07-24 18:22:19.043555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.247 qpair failed and we were unable to recover it. 00:27:26.247 [2024-07-24 18:22:19.043711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.247 [2024-07-24 18:22:19.043741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.247 qpair failed and we were unable to recover it. 00:27:26.247 [2024-07-24 18:22:19.043994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.247 [2024-07-24 18:22:19.044024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.247 qpair failed and we were unable to recover it. 00:27:26.247 [2024-07-24 18:22:19.044234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.247 [2024-07-24 18:22:19.044264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.247 qpair failed and we were unable to recover it. 00:27:26.247 [2024-07-24 18:22:19.044560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.247 [2024-07-24 18:22:19.044572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.247 qpair failed and we were unable to recover it. 00:27:26.247 [2024-07-24 18:22:19.044733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.247 [2024-07-24 18:22:19.044743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.247 qpair failed and we were unable to recover it. 
00:27:26.247 [2024-07-24 18:22:19.044910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.247 [2024-07-24 18:22:19.044940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.247 qpair failed and we were unable to recover it. 00:27:26.247 [2024-07-24 18:22:19.045135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.247 [2024-07-24 18:22:19.045166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.247 qpair failed and we were unable to recover it. 00:27:26.247 [2024-07-24 18:22:19.045378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.247 [2024-07-24 18:22:19.045415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.247 qpair failed and we were unable to recover it. 00:27:26.247 [2024-07-24 18:22:19.045627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.247 [2024-07-24 18:22:19.045638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.247 qpair failed and we were unable to recover it. 00:27:26.247 [2024-07-24 18:22:19.045781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.247 [2024-07-24 18:22:19.045791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.247 qpair failed and we were unable to recover it. 00:27:26.247 [2024-07-24 18:22:19.045903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.247 [2024-07-24 18:22:19.045913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.247 qpair failed and we were unable to recover it. 00:27:26.247 [2024-07-24 18:22:19.046126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.248 [2024-07-24 18:22:19.046136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.248 qpair failed and we were unable to recover it. 00:27:26.248 [2024-07-24 18:22:19.046297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.248 [2024-07-24 18:22:19.046307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.248 qpair failed and we were unable to recover it. 00:27:26.248 [2024-07-24 18:22:19.046506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.248 [2024-07-24 18:22:19.046537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.248 qpair failed and we were unable to recover it. 00:27:26.248 [2024-07-24 18:22:19.046763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.248 [2024-07-24 18:22:19.046794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.248 qpair failed and we were unable to recover it. 
00:27:26.248 [2024-07-24 18:22:19.046934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.248 [2024-07-24 18:22:19.046963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.248 qpair failed and we were unable to recover it. 00:27:26.248 [2024-07-24 18:22:19.047106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.248 [2024-07-24 18:22:19.047142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.248 qpair failed and we were unable to recover it. 00:27:26.248 [2024-07-24 18:22:19.047278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.248 [2024-07-24 18:22:19.047288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.248 qpair failed and we were unable to recover it. 00:27:26.248 [2024-07-24 18:22:19.047520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.248 [2024-07-24 18:22:19.047530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.248 qpair failed and we were unable to recover it. 00:27:26.248 [2024-07-24 18:22:19.047630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.248 [2024-07-24 18:22:19.047639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.248 qpair failed and we were unable to recover it. 00:27:26.248 [2024-07-24 18:22:19.047723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.248 [2024-07-24 18:22:19.047747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.248 qpair failed and we were unable to recover it. 00:27:26.248 [2024-07-24 18:22:19.047960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.248 [2024-07-24 18:22:19.047989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.248 qpair failed and we were unable to recover it. 00:27:26.248 [2024-07-24 18:22:19.048241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.248 [2024-07-24 18:22:19.048272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.248 qpair failed and we were unable to recover it. 00:27:26.248 [2024-07-24 18:22:19.048562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.248 [2024-07-24 18:22:19.048592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.248 qpair failed and we were unable to recover it. 00:27:26.248 [2024-07-24 18:22:19.048739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.248 [2024-07-24 18:22:19.048769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.248 qpair failed and we were unable to recover it. 
00:27:26.248 [2024-07-24 18:22:19.049020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.248 [2024-07-24 18:22:19.049049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.248 qpair failed and we were unable to recover it. 00:27:26.248 [2024-07-24 18:22:19.049369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.248 [2024-07-24 18:22:19.049399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.248 qpair failed and we were unable to recover it. 00:27:26.248 [2024-07-24 18:22:19.049604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.248 [2024-07-24 18:22:19.049635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.248 qpair failed and we were unable to recover it. 00:27:26.248 [2024-07-24 18:22:19.049831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.248 [2024-07-24 18:22:19.049860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.248 qpair failed and we were unable to recover it. 00:27:26.248 [2024-07-24 18:22:19.050186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.248 [2024-07-24 18:22:19.050216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.248 qpair failed and we were unable to recover it. 00:27:26.248 [2024-07-24 18:22:19.050428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.248 [2024-07-24 18:22:19.050459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.248 qpair failed and we were unable to recover it. 00:27:26.248 [2024-07-24 18:22:19.050668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.248 [2024-07-24 18:22:19.050699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.248 qpair failed and we were unable to recover it. 00:27:26.248 [2024-07-24 18:22:19.050906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.248 [2024-07-24 18:22:19.050937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.248 qpair failed and we were unable to recover it. 00:27:26.248 [2024-07-24 18:22:19.051104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.248 [2024-07-24 18:22:19.051133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.248 qpair failed and we were unable to recover it. 00:27:26.248 [2024-07-24 18:22:19.051428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.248 [2024-07-24 18:22:19.051458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.248 qpair failed and we were unable to recover it. 
00:27:26.248 [2024-07-24 18:22:19.051698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.248 [2024-07-24 18:22:19.051730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.248 qpair failed and we were unable to recover it. 00:27:26.248 [2024-07-24 18:22:19.051935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.248 [2024-07-24 18:22:19.051965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.248 qpair failed and we were unable to recover it. 00:27:26.248 [2024-07-24 18:22:19.052286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.248 [2024-07-24 18:22:19.052315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.248 qpair failed and we were unable to recover it. 00:27:26.248 [2024-07-24 18:22:19.052548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.248 [2024-07-24 18:22:19.052580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.248 qpair failed and we were unable to recover it. 00:27:26.248 [2024-07-24 18:22:19.052834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.248 [2024-07-24 18:22:19.052864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.248 qpair failed and we were unable to recover it. 00:27:26.248 [2024-07-24 18:22:19.052975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.248 [2024-07-24 18:22:19.052985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.248 qpair failed and we were unable to recover it. 00:27:26.248 [2024-07-24 18:22:19.053158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.248 [2024-07-24 18:22:19.053187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.248 qpair failed and we were unable to recover it. 00:27:26.248 [2024-07-24 18:22:19.053512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.248 [2024-07-24 18:22:19.053543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.248 qpair failed and we were unable to recover it. 00:27:26.248 [2024-07-24 18:22:19.053789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.249 [2024-07-24 18:22:19.053861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7834000b90 with addr=10.0.0.2, port=4420 00:27:26.249 qpair failed and we were unable to recover it. 00:27:26.249 [2024-07-24 18:22:19.054118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.249 [2024-07-24 18:22:19.054153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7834000b90 with addr=10.0.0.2, port=4420 00:27:26.249 qpair failed and we were unable to recover it. 
00:27:26.249 [2024-07-24 18:22:19.054433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.249 [2024-07-24 18:22:19.054465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7834000b90 with addr=10.0.0.2, port=4420 00:27:26.249 qpair failed and we were unable to recover it. 00:27:26.249 [2024-07-24 18:22:19.054666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.249 [2024-07-24 18:22:19.054698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7834000b90 with addr=10.0.0.2, port=4420 00:27:26.249 qpair failed and we were unable to recover it. 00:27:26.249 [2024-07-24 18:22:19.054907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.249 [2024-07-24 18:22:19.054937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7834000b90 with addr=10.0.0.2, port=4420 00:27:26.249 qpair failed and we were unable to recover it. 00:27:26.249 [2024-07-24 18:22:19.055192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.249 [2024-07-24 18:22:19.055222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7834000b90 with addr=10.0.0.2, port=4420 00:27:26.249 qpair failed and we were unable to recover it. 00:27:26.249 [2024-07-24 18:22:19.055509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.249 [2024-07-24 18:22:19.055541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7834000b90 with addr=10.0.0.2, port=4420 00:27:26.249 qpair failed and we were unable to recover it. 00:27:26.249 [2024-07-24 18:22:19.055846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.249 [2024-07-24 18:22:19.055875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7834000b90 with addr=10.0.0.2, port=4420 00:27:26.249 qpair failed and we were unable to recover it. 00:27:26.249 [2024-07-24 18:22:19.056154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.249 [2024-07-24 18:22:19.056169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7834000b90 with addr=10.0.0.2, port=4420 00:27:26.249 qpair failed and we were unable to recover it. 00:27:26.249 [2024-07-24 18:22:19.056332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.249 [2024-07-24 18:22:19.056346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7834000b90 with addr=10.0.0.2, port=4420 00:27:26.249 qpair failed and we were unable to recover it. 00:27:26.249 [2024-07-24 18:22:19.056606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.249 [2024-07-24 18:22:19.056622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7834000b90 with addr=10.0.0.2, port=4420 00:27:26.249 qpair failed and we were unable to recover it. 00:27:26.249 [2024-07-24 18:22:19.056860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.249 [2024-07-24 18:22:19.056876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7834000b90 with addr=10.0.0.2, port=4420 00:27:26.249 qpair failed and we were unable to recover it. 
00:27:26.249 [2024-07-24 18:22:19.056983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.249 [2024-07-24 18:22:19.056998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7834000b90 with addr=10.0.0.2, port=4420 00:27:26.249 qpair failed and we were unable to recover it. 00:27:26.249 [2024-07-24 18:22:19.057240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.249 [2024-07-24 18:22:19.057279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7834000b90 with addr=10.0.0.2, port=4420 00:27:26.249 qpair failed and we were unable to recover it. 00:27:26.249 [2024-07-24 18:22:19.057583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.249 [2024-07-24 18:22:19.057615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7834000b90 with addr=10.0.0.2, port=4420 00:27:26.249 qpair failed and we were unable to recover it. 00:27:26.249 [2024-07-24 18:22:19.057831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.249 [2024-07-24 18:22:19.057862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7834000b90 with addr=10.0.0.2, port=4420 00:27:26.249 qpair failed and we were unable to recover it. 00:27:26.249 [2024-07-24 18:22:19.058074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.249 [2024-07-24 18:22:19.058103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7834000b90 with addr=10.0.0.2, port=4420 00:27:26.249 qpair failed and we were unable to recover it. 00:27:26.249 [2024-07-24 18:22:19.058355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.249 [2024-07-24 18:22:19.058385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7834000b90 with addr=10.0.0.2, port=4420 00:27:26.249 qpair failed and we were unable to recover it. 00:27:26.249 [2024-07-24 18:22:19.058618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.249 [2024-07-24 18:22:19.058633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7834000b90 with addr=10.0.0.2, port=4420 00:27:26.249 qpair failed and we were unable to recover it. 00:27:26.249 [2024-07-24 18:22:19.058786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.249 [2024-07-24 18:22:19.058801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7834000b90 with addr=10.0.0.2, port=4420 00:27:26.249 qpair failed and we were unable to recover it. 00:27:26.249 [2024-07-24 18:22:19.059042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.249 [2024-07-24 18:22:19.059071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7834000b90 with addr=10.0.0.2, port=4420 00:27:26.249 qpair failed and we were unable to recover it. 00:27:26.249 [2024-07-24 18:22:19.059376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.249 [2024-07-24 18:22:19.059407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7834000b90 with addr=10.0.0.2, port=4420 00:27:26.249 qpair failed and we were unable to recover it. 
00:27:26.249 [2024-07-24 18:22:19.059630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.249 [2024-07-24 18:22:19.059645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7834000b90 with addr=10.0.0.2, port=4420 00:27:26.249 qpair failed and we were unable to recover it. 00:27:26.249 [2024-07-24 18:22:19.059885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.249 [2024-07-24 18:22:19.059915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7834000b90 with addr=10.0.0.2, port=4420 00:27:26.249 qpair failed and we were unable to recover it. 00:27:26.249 [2024-07-24 18:22:19.060099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.249 [2024-07-24 18:22:19.060130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7834000b90 with addr=10.0.0.2, port=4420 00:27:26.249 qpair failed and we were unable to recover it. 00:27:26.249 [2024-07-24 18:22:19.060326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.249 [2024-07-24 18:22:19.060364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7834000b90 with addr=10.0.0.2, port=4420 00:27:26.249 qpair failed and we were unable to recover it. 00:27:26.249 [2024-07-24 18:22:19.060515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.249 [2024-07-24 18:22:19.060530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7834000b90 with addr=10.0.0.2, port=4420 00:27:26.249 qpair failed and we were unable to recover it. 00:27:26.249 [2024-07-24 18:22:19.060755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.249 [2024-07-24 18:22:19.060769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7834000b90 with addr=10.0.0.2, port=4420 00:27:26.249 qpair failed and we were unable to recover it. 00:27:26.249 [2024-07-24 18:22:19.060937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.249 [2024-07-24 18:22:19.060953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7834000b90 with addr=10.0.0.2, port=4420 00:27:26.249 qpair failed and we were unable to recover it. 00:27:26.249 [2024-07-24 18:22:19.061133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.249 [2024-07-24 18:22:19.061163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7834000b90 with addr=10.0.0.2, port=4420 00:27:26.249 qpair failed and we were unable to recover it. 00:27:26.249 [2024-07-24 18:22:19.061372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.249 [2024-07-24 18:22:19.061402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7834000b90 with addr=10.0.0.2, port=4420 00:27:26.249 qpair failed and we were unable to recover it. 00:27:26.249 [2024-07-24 18:22:19.061723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.249 [2024-07-24 18:22:19.061755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7834000b90 with addr=10.0.0.2, port=4420 00:27:26.249 qpair failed and we were unable to recover it. 
00:27:26.249 [2024-07-24 18:22:19.061924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.249 [2024-07-24 18:22:19.061954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7834000b90 with addr=10.0.0.2, port=4420 00:27:26.249 qpair failed and we were unable to recover it. 00:27:26.249 [2024-07-24 18:22:19.062167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.249 [2024-07-24 18:22:19.062197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7834000b90 with addr=10.0.0.2, port=4420 00:27:26.249 qpair failed and we were unable to recover it. 00:27:26.249 [2024-07-24 18:22:19.062414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.249 [2024-07-24 18:22:19.062429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7834000b90 with addr=10.0.0.2, port=4420 00:27:26.249 qpair failed and we were unable to recover it. 00:27:26.249 [2024-07-24 18:22:19.062675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.249 [2024-07-24 18:22:19.062706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7834000b90 with addr=10.0.0.2, port=4420 00:27:26.249 qpair failed and we were unable to recover it. 00:27:26.249 [2024-07-24 18:22:19.062988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.249 [2024-07-24 18:22:19.063018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7834000b90 with addr=10.0.0.2, port=4420 00:27:26.249 qpair failed and we were unable to recover it. 00:27:26.249 [2024-07-24 18:22:19.063233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.249 [2024-07-24 18:22:19.063263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7834000b90 with addr=10.0.0.2, port=4420 00:27:26.249 qpair failed and we were unable to recover it. 00:27:26.250 [2024-07-24 18:22:19.063542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.250 [2024-07-24 18:22:19.063575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7834000b90 with addr=10.0.0.2, port=4420 00:27:26.250 qpair failed and we were unable to recover it. 00:27:26.250 [2024-07-24 18:22:19.063778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.250 [2024-07-24 18:22:19.063808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7834000b90 with addr=10.0.0.2, port=4420 00:27:26.250 qpair failed and we were unable to recover it. 00:27:26.250 [2024-07-24 18:22:19.064091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.250 [2024-07-24 18:22:19.064121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7834000b90 with addr=10.0.0.2, port=4420 00:27:26.250 qpair failed and we were unable to recover it. 00:27:26.250 [2024-07-24 18:22:19.064343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.250 [2024-07-24 18:22:19.064373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7834000b90 with addr=10.0.0.2, port=4420 00:27:26.250 qpair failed and we were unable to recover it. 
00:27:26.250 [2024-07-24 18:22:19.064662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.250 [2024-07-24 18:22:19.064677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7834000b90 with addr=10.0.0.2, port=4420 00:27:26.250 qpair failed and we were unable to recover it. 00:27:26.250 [2024-07-24 18:22:19.064899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.250 [2024-07-24 18:22:19.064914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7834000b90 with addr=10.0.0.2, port=4420 00:27:26.250 qpair failed and we were unable to recover it. 00:27:26.250 [2024-07-24 18:22:19.065078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.250 [2024-07-24 18:22:19.065092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7834000b90 with addr=10.0.0.2, port=4420 00:27:26.250 qpair failed and we were unable to recover it. 00:27:26.250 [2024-07-24 18:22:19.065362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.250 [2024-07-24 18:22:19.065397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.250 qpair failed and we were unable to recover it. 00:27:26.250 [2024-07-24 18:22:19.065555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.250 [2024-07-24 18:22:19.065586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.250 qpair failed and we were unable to recover it. 00:27:26.250 [2024-07-24 18:22:19.065814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.250 [2024-07-24 18:22:19.065844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.250 qpair failed and we were unable to recover it. 00:27:26.250 [2024-07-24 18:22:19.065988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.250 [2024-07-24 18:22:19.066018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.250 qpair failed and we were unable to recover it. 00:27:26.250 [2024-07-24 18:22:19.066147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.250 [2024-07-24 18:22:19.066177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.250 qpair failed and we were unable to recover it. 00:27:26.250 [2024-07-24 18:22:19.066445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.250 [2024-07-24 18:22:19.066455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.250 qpair failed and we were unable to recover it. 00:27:26.250 [2024-07-24 18:22:19.066610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.250 [2024-07-24 18:22:19.066620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.250 qpair failed and we were unable to recover it. 
00:27:26.250 [2024-07-24 18:22:19.066710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.250 [2024-07-24 18:22:19.066720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.250 qpair failed and we were unable to recover it. 00:27:26.250 [2024-07-24 18:22:19.066879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.250 [2024-07-24 18:22:19.066891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.250 qpair failed and we were unable to recover it. 00:27:26.250 [2024-07-24 18:22:19.067068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.250 [2024-07-24 18:22:19.067098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.250 qpair failed and we were unable to recover it. 00:27:26.250 [2024-07-24 18:22:19.067426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.250 [2024-07-24 18:22:19.067457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.250 qpair failed and we were unable to recover it. 00:27:26.250 [2024-07-24 18:22:19.067757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.250 [2024-07-24 18:22:19.067788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.250 qpair failed and we were unable to recover it. 00:27:26.250 [2024-07-24 18:22:19.068062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.250 [2024-07-24 18:22:19.068092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.250 qpair failed and we were unable to recover it. 00:27:26.250 [2024-07-24 18:22:19.068298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.250 [2024-07-24 18:22:19.068327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.250 qpair failed and we were unable to recover it. 00:27:26.250 [2024-07-24 18:22:19.068589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.250 [2024-07-24 18:22:19.068620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.250 qpair failed and we were unable to recover it. 00:27:26.250 [2024-07-24 18:22:19.068826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.250 [2024-07-24 18:22:19.068856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.250 qpair failed and we were unable to recover it. 00:27:26.250 [2024-07-24 18:22:19.069040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.250 [2024-07-24 18:22:19.069070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.250 qpair failed and we were unable to recover it. 
00:27:26.250 - 00:27:26.255 [2024-07-24 18:22:19.069348 through 18:22:19.117438] the same three-line failure sequence repeats for roughly 200 further attempts: posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.
00:27:26.256 [2024-07-24 18:22:19.117583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.256 [2024-07-24 18:22:19.117615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.256 qpair failed and we were unable to recover it. 00:27:26.256 [2024-07-24 18:22:19.117823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.256 [2024-07-24 18:22:19.117853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.256 qpair failed and we were unable to recover it. 00:27:26.256 [2024-07-24 18:22:19.118163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.256 [2024-07-24 18:22:19.118193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.256 qpair failed and we were unable to recover it. 00:27:26.256 [2024-07-24 18:22:19.118415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.256 [2024-07-24 18:22:19.118425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.256 qpair failed and we were unable to recover it. 00:27:26.256 [2024-07-24 18:22:19.118643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.256 [2024-07-24 18:22:19.118654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.256 qpair failed and we were unable to recover it. 00:27:26.256 [2024-07-24 18:22:19.118825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.256 [2024-07-24 18:22:19.118855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.256 qpair failed and we were unable to recover it. 00:27:26.256 [2024-07-24 18:22:19.119117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.256 [2024-07-24 18:22:19.119146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.256 qpair failed and we were unable to recover it. 00:27:26.256 [2024-07-24 18:22:19.119417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.256 [2024-07-24 18:22:19.119448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.256 qpair failed and we were unable to recover it. 00:27:26.256 [2024-07-24 18:22:19.119686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.256 [2024-07-24 18:22:19.119717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.256 qpair failed and we were unable to recover it. 00:27:26.256 [2024-07-24 18:22:19.119886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.256 [2024-07-24 18:22:19.119916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.256 qpair failed and we were unable to recover it. 
00:27:26.256 [2024-07-24 18:22:19.120111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.256 [2024-07-24 18:22:19.120141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.256 qpair failed and we were unable to recover it. 00:27:26.256 [2024-07-24 18:22:19.120435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.256 [2024-07-24 18:22:19.120466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.256 qpair failed and we were unable to recover it. 00:27:26.256 [2024-07-24 18:22:19.120725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.256 [2024-07-24 18:22:19.120736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.256 qpair failed and we were unable to recover it. 00:27:26.256 [2024-07-24 18:22:19.120899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.256 [2024-07-24 18:22:19.120910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.256 qpair failed and we were unable to recover it. 00:27:26.256 [2024-07-24 18:22:19.121070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.256 [2024-07-24 18:22:19.121080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.256 qpair failed and we were unable to recover it. 00:27:26.256 [2024-07-24 18:22:19.121258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.256 [2024-07-24 18:22:19.121306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.256 qpair failed and we were unable to recover it. 00:27:26.256 [2024-07-24 18:22:19.121532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.256 [2024-07-24 18:22:19.121562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.256 qpair failed and we were unable to recover it. 00:27:26.256 [2024-07-24 18:22:19.121839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.256 [2024-07-24 18:22:19.121870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.256 qpair failed and we were unable to recover it. 00:27:26.256 [2024-07-24 18:22:19.122050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.256 [2024-07-24 18:22:19.122087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.256 qpair failed and we were unable to recover it. 00:27:26.256 [2024-07-24 18:22:19.122363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.256 [2024-07-24 18:22:19.122393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.256 qpair failed and we were unable to recover it. 
00:27:26.256 [2024-07-24 18:22:19.122609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.256 [2024-07-24 18:22:19.122641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.256 qpair failed and we were unable to recover it. 00:27:26.256 [2024-07-24 18:22:19.122809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.256 [2024-07-24 18:22:19.122839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.256 qpair failed and we were unable to recover it. 00:27:26.256 [2024-07-24 18:22:19.123039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.256 [2024-07-24 18:22:19.123069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.256 qpair failed and we were unable to recover it. 00:27:26.256 [2024-07-24 18:22:19.123305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.256 [2024-07-24 18:22:19.123335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.256 qpair failed and we were unable to recover it. 00:27:26.256 [2024-07-24 18:22:19.123534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.256 [2024-07-24 18:22:19.123566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.256 qpair failed and we were unable to recover it. 00:27:26.256 [2024-07-24 18:22:19.123723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.256 [2024-07-24 18:22:19.123753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.256 qpair failed and we were unable to recover it. 00:27:26.256 [2024-07-24 18:22:19.123958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.256 [2024-07-24 18:22:19.123988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.256 qpair failed and we were unable to recover it. 00:27:26.256 [2024-07-24 18:22:19.124220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.256 [2024-07-24 18:22:19.124250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.256 qpair failed and we were unable to recover it. 00:27:26.256 [2024-07-24 18:22:19.124424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.256 [2024-07-24 18:22:19.124454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.256 qpair failed and we were unable to recover it. 00:27:26.256 [2024-07-24 18:22:19.124663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.256 [2024-07-24 18:22:19.124694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.256 qpair failed and we were unable to recover it. 
00:27:26.256 [2024-07-24 18:22:19.124847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.256 [2024-07-24 18:22:19.124877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.256 qpair failed and we were unable to recover it. 00:27:26.256 [2024-07-24 18:22:19.125004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.256 [2024-07-24 18:22:19.125035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.256 qpair failed and we were unable to recover it. 00:27:26.256 [2024-07-24 18:22:19.125324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.256 [2024-07-24 18:22:19.125355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.256 qpair failed and we were unable to recover it. 00:27:26.256 [2024-07-24 18:22:19.125605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.256 [2024-07-24 18:22:19.125636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.256 qpair failed and we were unable to recover it. 00:27:26.256 [2024-07-24 18:22:19.125843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.256 [2024-07-24 18:22:19.125874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.256 qpair failed and we were unable to recover it. 00:27:26.256 [2024-07-24 18:22:19.126146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.257 [2024-07-24 18:22:19.126184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.257 qpair failed and we were unable to recover it. 00:27:26.257 [2024-07-24 18:22:19.126416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.257 [2024-07-24 18:22:19.126426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.257 qpair failed and we were unable to recover it. 00:27:26.257 [2024-07-24 18:22:19.126599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.257 [2024-07-24 18:22:19.126609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.257 qpair failed and we were unable to recover it. 00:27:26.257 [2024-07-24 18:22:19.126811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.257 [2024-07-24 18:22:19.126840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.257 qpair failed and we were unable to recover it. 00:27:26.257 [2024-07-24 18:22:19.127002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.257 [2024-07-24 18:22:19.127033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.257 qpair failed and we were unable to recover it. 
00:27:26.257 [2024-07-24 18:22:19.127270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.257 [2024-07-24 18:22:19.127299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.257 qpair failed and we were unable to recover it. 00:27:26.257 [2024-07-24 18:22:19.127556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.257 [2024-07-24 18:22:19.127587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.257 qpair failed and we were unable to recover it. 00:27:26.257 [2024-07-24 18:22:19.127802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.257 [2024-07-24 18:22:19.127832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.257 qpair failed and we were unable to recover it. 00:27:26.257 [2024-07-24 18:22:19.128040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.257 [2024-07-24 18:22:19.128070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.257 qpair failed and we were unable to recover it. 00:27:26.257 [2024-07-24 18:22:19.128325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.257 [2024-07-24 18:22:19.128355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.257 qpair failed and we were unable to recover it. 00:27:26.257 [2024-07-24 18:22:19.128619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.257 [2024-07-24 18:22:19.128630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.257 qpair failed and we were unable to recover it. 00:27:26.257 [2024-07-24 18:22:19.128836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.257 [2024-07-24 18:22:19.128867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.257 qpair failed and we were unable to recover it. 00:27:26.257 [2024-07-24 18:22:19.129022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.257 [2024-07-24 18:22:19.129052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.257 qpair failed and we were unable to recover it. 00:27:26.257 [2024-07-24 18:22:19.129343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.257 [2024-07-24 18:22:19.129372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.257 qpair failed and we were unable to recover it. 00:27:26.257 [2024-07-24 18:22:19.129579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.257 [2024-07-24 18:22:19.129610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.257 qpair failed and we were unable to recover it. 
00:27:26.257 [2024-07-24 18:22:19.129849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.257 [2024-07-24 18:22:19.129879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.257 qpair failed and we were unable to recover it. 00:27:26.257 [2024-07-24 18:22:19.130019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.257 [2024-07-24 18:22:19.130050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.257 qpair failed and we were unable to recover it. 00:27:26.257 [2024-07-24 18:22:19.130245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.257 [2024-07-24 18:22:19.130274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.257 qpair failed and we were unable to recover it. 00:27:26.257 [2024-07-24 18:22:19.130502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.257 [2024-07-24 18:22:19.130513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.257 qpair failed and we were unable to recover it. 00:27:26.257 [2024-07-24 18:22:19.130649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.257 [2024-07-24 18:22:19.130659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.257 qpair failed and we were unable to recover it. 00:27:26.257 [2024-07-24 18:22:19.130797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.257 [2024-07-24 18:22:19.130807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.257 qpair failed and we were unable to recover it. 00:27:26.257 [2024-07-24 18:22:19.130905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.257 [2024-07-24 18:22:19.130916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.257 qpair failed and we were unable to recover it. 00:27:26.257 [2024-07-24 18:22:19.131015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.257 [2024-07-24 18:22:19.131026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.257 qpair failed and we were unable to recover it. 00:27:26.257 [2024-07-24 18:22:19.131238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.257 [2024-07-24 18:22:19.131252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.257 qpair failed and we were unable to recover it. 00:27:26.257 [2024-07-24 18:22:19.131409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.257 [2024-07-24 18:22:19.131419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.257 qpair failed and we were unable to recover it. 
00:27:26.257 [2024-07-24 18:22:19.131623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.257 [2024-07-24 18:22:19.131655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.257 qpair failed and we were unable to recover it. 00:27:26.257 [2024-07-24 18:22:19.131788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.257 [2024-07-24 18:22:19.131817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.257 qpair failed and we were unable to recover it. 00:27:26.257 [2024-07-24 18:22:19.131964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.257 [2024-07-24 18:22:19.131994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.257 qpair failed and we were unable to recover it. 00:27:26.257 [2024-07-24 18:22:19.132214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.257 [2024-07-24 18:22:19.132244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.257 qpair failed and we were unable to recover it. 00:27:26.257 [2024-07-24 18:22:19.132432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.257 [2024-07-24 18:22:19.132462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.257 qpair failed and we were unable to recover it. 00:27:26.257 [2024-07-24 18:22:19.132672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.257 [2024-07-24 18:22:19.132703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.257 qpair failed and we were unable to recover it. 00:27:26.257 [2024-07-24 18:22:19.132908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.257 [2024-07-24 18:22:19.132938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.257 qpair failed and we were unable to recover it. 00:27:26.257 [2024-07-24 18:22:19.133297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.258 [2024-07-24 18:22:19.133327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.258 qpair failed and we were unable to recover it. 00:27:26.258 [2024-07-24 18:22:19.133584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.258 [2024-07-24 18:22:19.133616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.258 qpair failed and we were unable to recover it. 00:27:26.258 [2024-07-24 18:22:19.133821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.258 [2024-07-24 18:22:19.133851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.258 qpair failed and we were unable to recover it. 
00:27:26.258 [2024-07-24 18:22:19.134075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.258 [2024-07-24 18:22:19.134104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.258 qpair failed and we were unable to recover it. 00:27:26.258 [2024-07-24 18:22:19.134252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.258 [2024-07-24 18:22:19.134273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.258 qpair failed and we were unable to recover it. 00:27:26.258 [2024-07-24 18:22:19.134451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.258 [2024-07-24 18:22:19.134481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.258 qpair failed and we were unable to recover it. 00:27:26.258 [2024-07-24 18:22:19.134642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.258 [2024-07-24 18:22:19.134674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.258 qpair failed and we were unable to recover it. 00:27:26.258 [2024-07-24 18:22:19.134894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.258 [2024-07-24 18:22:19.134924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.258 qpair failed and we were unable to recover it. 00:27:26.258 [2024-07-24 18:22:19.135063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.258 [2024-07-24 18:22:19.135092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.258 qpair failed and we were unable to recover it. 00:27:26.258 [2024-07-24 18:22:19.135352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.258 [2024-07-24 18:22:19.135362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.258 qpair failed and we were unable to recover it. 00:27:26.258 [2024-07-24 18:22:19.135518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.258 [2024-07-24 18:22:19.135528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.258 qpair failed and we were unable to recover it. 00:27:26.258 [2024-07-24 18:22:19.135646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.258 [2024-07-24 18:22:19.135676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.258 qpair failed and we were unable to recover it. 00:27:26.258 [2024-07-24 18:22:19.135881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.258 [2024-07-24 18:22:19.135912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.258 qpair failed and we were unable to recover it. 
00:27:26.258 [2024-07-24 18:22:19.136165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.258 [2024-07-24 18:22:19.136195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.258 qpair failed and we were unable to recover it. 00:27:26.258 [2024-07-24 18:22:19.136423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.258 [2024-07-24 18:22:19.136433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.258 qpair failed and we were unable to recover it. 00:27:26.258 [2024-07-24 18:22:19.136651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.258 [2024-07-24 18:22:19.136662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.258 qpair failed and we were unable to recover it. 00:27:26.258 [2024-07-24 18:22:19.136873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.258 [2024-07-24 18:22:19.136883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.258 qpair failed and we were unable to recover it. 00:27:26.258 [2024-07-24 18:22:19.137023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.258 [2024-07-24 18:22:19.137065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.258 qpair failed and we were unable to recover it. 00:27:26.258 [2024-07-24 18:22:19.137361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.258 [2024-07-24 18:22:19.137391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.258 qpair failed and we were unable to recover it. 00:27:26.258 [2024-07-24 18:22:19.137548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.258 [2024-07-24 18:22:19.137579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.258 qpair failed and we were unable to recover it. 00:27:26.258 [2024-07-24 18:22:19.137780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.258 [2024-07-24 18:22:19.137790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.258 qpair failed and we were unable to recover it. 00:27:26.258 [2024-07-24 18:22:19.138002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.258 [2024-07-24 18:22:19.138032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.258 qpair failed and we were unable to recover it. 00:27:26.258 [2024-07-24 18:22:19.138253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.258 [2024-07-24 18:22:19.138283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.258 qpair failed and we were unable to recover it. 
00:27:26.258 [2024-07-24 18:22:19.138550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.258 [2024-07-24 18:22:19.138581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.258 qpair failed and we were unable to recover it. 00:27:26.258 [2024-07-24 18:22:19.138733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.258 [2024-07-24 18:22:19.138764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.258 qpair failed and we were unable to recover it. 00:27:26.258 [2024-07-24 18:22:19.138964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.258 [2024-07-24 18:22:19.138993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.258 qpair failed and we were unable to recover it. 00:27:26.258 [2024-07-24 18:22:19.139319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.258 [2024-07-24 18:22:19.139349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.258 qpair failed and we were unable to recover it. 00:27:26.258 [2024-07-24 18:22:19.139549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.258 [2024-07-24 18:22:19.139559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.258 qpair failed and we were unable to recover it. 00:27:26.258 [2024-07-24 18:22:19.139668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.258 [2024-07-24 18:22:19.139678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.258 qpair failed and we were unable to recover it. 00:27:26.258 [2024-07-24 18:22:19.139833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.258 [2024-07-24 18:22:19.139843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.258 qpair failed and we were unable to recover it. 00:27:26.258 [2024-07-24 18:22:19.139947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.258 [2024-07-24 18:22:19.139957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.258 qpair failed and we were unable to recover it. 00:27:26.258 [2024-07-24 18:22:19.140060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.258 [2024-07-24 18:22:19.140072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.258 qpair failed and we were unable to recover it. 00:27:26.258 [2024-07-24 18:22:19.140344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.258 [2024-07-24 18:22:19.140374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.258 qpair failed and we were unable to recover it. 
00:27:26.258 [2024-07-24 18:22:19.140575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.258 [2024-07-24 18:22:19.140606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.258 qpair failed and we were unable to recover it. 00:27:26.258 [2024-07-24 18:22:19.140767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.258 [2024-07-24 18:22:19.140797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.258 qpair failed and we were unable to recover it. 00:27:26.258 [2024-07-24 18:22:19.141017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.258 [2024-07-24 18:22:19.141047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.258 qpair failed and we were unable to recover it. 00:27:26.258 [2024-07-24 18:22:19.141323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.258 [2024-07-24 18:22:19.141353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.258 qpair failed and we were unable to recover it. 00:27:26.258 [2024-07-24 18:22:19.141483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.258 [2024-07-24 18:22:19.141523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.258 qpair failed and we were unable to recover it. 00:27:26.259 [2024-07-24 18:22:19.141710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.259 [2024-07-24 18:22:19.141740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.259 qpair failed and we were unable to recover it. 00:27:26.259 [2024-07-24 18:22:19.141937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.259 [2024-07-24 18:22:19.141966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.259 qpair failed and we were unable to recover it. 00:27:26.259 [2024-07-24 18:22:19.142241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.259 [2024-07-24 18:22:19.142277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.259 qpair failed and we were unable to recover it. 00:27:26.259 [2024-07-24 18:22:19.142549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.259 [2024-07-24 18:22:19.142560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.259 qpair failed and we were unable to recover it. 00:27:26.259 [2024-07-24 18:22:19.142635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.259 [2024-07-24 18:22:19.142646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.259 qpair failed and we were unable to recover it. 
00:27:26.259 [2024-07-24 18:22:19.142800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.259 [2024-07-24 18:22:19.142810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.259 qpair failed and we were unable to recover it. 00:27:26.259 [2024-07-24 18:22:19.142987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.259 [2024-07-24 18:22:19.143018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.259 qpair failed and we were unable to recover it. 00:27:26.259 [2024-07-24 18:22:19.143241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.259 [2024-07-24 18:22:19.143251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.259 qpair failed and we were unable to recover it. 00:27:26.259 [2024-07-24 18:22:19.143410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.259 [2024-07-24 18:22:19.143420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.259 qpair failed and we were unable to recover it. 00:27:26.259 [2024-07-24 18:22:19.143626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.259 [2024-07-24 18:22:19.143659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.259 qpair failed and we were unable to recover it. 00:27:26.259 [2024-07-24 18:22:19.143822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.259 [2024-07-24 18:22:19.143854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.259 qpair failed and we were unable to recover it. 00:27:26.259 [2024-07-24 18:22:19.144006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.259 [2024-07-24 18:22:19.144038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.259 qpair failed and we were unable to recover it. 00:27:26.259 [2024-07-24 18:22:19.144255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.259 [2024-07-24 18:22:19.144286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.259 qpair failed and we were unable to recover it. 00:27:26.259 [2024-07-24 18:22:19.144587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.259 [2024-07-24 18:22:19.144619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.259 qpair failed and we were unable to recover it. 00:27:26.259 [2024-07-24 18:22:19.144814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.259 [2024-07-24 18:22:19.144844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.259 qpair failed and we were unable to recover it. 
00:27:26.259 [2024-07-24 18:22:19.144983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.259 [2024-07-24 18:22:19.145013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.259 qpair failed and we were unable to recover it. 00:27:26.259 [2024-07-24 18:22:19.145235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.259 [2024-07-24 18:22:19.145265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.259 qpair failed and we were unable to recover it. 00:27:26.259 [2024-07-24 18:22:19.145478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.259 [2024-07-24 18:22:19.145540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.259 qpair failed and we were unable to recover it. 00:27:26.259 [2024-07-24 18:22:19.145769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.259 [2024-07-24 18:22:19.145804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.259 qpair failed and we were unable to recover it. 00:27:26.259 [2024-07-24 18:22:19.145979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.259 [2024-07-24 18:22:19.146009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.259 qpair failed and we were unable to recover it. 00:27:26.259 [2024-07-24 18:22:19.146282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.259 [2024-07-24 18:22:19.146313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.259 qpair failed and we were unable to recover it. 00:27:26.259 [2024-07-24 18:22:19.146570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.259 [2024-07-24 18:22:19.146580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.259 qpair failed and we were unable to recover it. 00:27:26.259 [2024-07-24 18:22:19.146679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.259 [2024-07-24 18:22:19.146688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.259 qpair failed and we were unable to recover it. 00:27:26.259 [2024-07-24 18:22:19.146797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.259 [2024-07-24 18:22:19.146808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.259 qpair failed and we were unable to recover it. 00:27:26.259 [2024-07-24 18:22:19.146902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.259 [2024-07-24 18:22:19.146912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.259 qpair failed and we were unable to recover it. 
00:27:26.259 [2024-07-24 18:22:19.147054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.259 [2024-07-24 18:22:19.147064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.259 qpair failed and we were unable to recover it.
00:27:26.259 [previous three-message sequence repeated 149 more times for tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420; timestamps 18:22:19.147273 through 18:22:19.182315]
00:27:26.263 [sequence repeated a further 5 times for tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420; timestamps 18:22:19.182462 through 18:22:19.183716]
00:27:26.263 [2024-07-24 18:22:19.183998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.263 [2024-07-24 18:22:19.184070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:26.263 qpair failed and we were unable to recover it.
00:27:26.265 [previous three-message sequence repeated 54 more times for tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420; timestamps 18:22:19.184409 through 18:22:19.199267]
00:27:26.265 [2024-07-24 18:22:19.199544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.265 [2024-07-24 18:22:19.199583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:26.265 qpair failed and we were unable to recover it. 00:27:26.265 [2024-07-24 18:22:19.199751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.265 [2024-07-24 18:22:19.199766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:26.265 qpair failed and we were unable to recover it. 00:27:26.265 [2024-07-24 18:22:19.199947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.265 [2024-07-24 18:22:19.199976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:26.265 qpair failed and we were unable to recover it. 00:27:26.265 [2024-07-24 18:22:19.200222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.265 [2024-07-24 18:22:19.200251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:26.265 qpair failed and we were unable to recover it. 00:27:26.265 [2024-07-24 18:22:19.200458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.265 [2024-07-24 18:22:19.200488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:26.265 qpair failed and we were unable to recover it. 00:27:26.265 [2024-07-24 18:22:19.200693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.265 [2024-07-24 18:22:19.200724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:26.265 qpair failed and we were unable to recover it. 00:27:26.265 [2024-07-24 18:22:19.201007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.265 [2024-07-24 18:22:19.201037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:26.265 qpair failed and we were unable to recover it. 00:27:26.265 [2024-07-24 18:22:19.201254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.265 [2024-07-24 18:22:19.201284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:26.265 qpair failed and we were unable to recover it. 00:27:26.265 [2024-07-24 18:22:19.201472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.265 [2024-07-24 18:22:19.201515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:26.265 qpair failed and we were unable to recover it. 00:27:26.265 [2024-07-24 18:22:19.201827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.265 [2024-07-24 18:22:19.201842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:26.265 qpair failed and we were unable to recover it. 
00:27:26.265 [2024-07-24 18:22:19.202080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.265 [2024-07-24 18:22:19.202095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:26.265 qpair failed and we were unable to recover it. 00:27:26.265 [2024-07-24 18:22:19.202327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.265 [2024-07-24 18:22:19.202342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:26.265 qpair failed and we were unable to recover it. 00:27:26.265 [2024-07-24 18:22:19.202598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.265 [2024-07-24 18:22:19.202638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:26.265 qpair failed and we were unable to recover it. 00:27:26.265 [2024-07-24 18:22:19.202849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.265 [2024-07-24 18:22:19.202878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:26.265 qpair failed and we were unable to recover it. 00:27:26.265 [2024-07-24 18:22:19.203092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.265 [2024-07-24 18:22:19.203123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:26.265 qpair failed and we were unable to recover it. 00:27:26.265 [2024-07-24 18:22:19.203414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.265 [2024-07-24 18:22:19.203444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:26.265 qpair failed and we were unable to recover it. 00:27:26.265 [2024-07-24 18:22:19.203741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.265 [2024-07-24 18:22:19.203758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:26.265 qpair failed and we were unable to recover it. 00:27:26.265 [2024-07-24 18:22:19.203878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.265 [2024-07-24 18:22:19.203894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:26.265 qpair failed and we were unable to recover it. 00:27:26.265 [2024-07-24 18:22:19.204149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.265 [2024-07-24 18:22:19.204178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:26.265 qpair failed and we were unable to recover it. 00:27:26.265 [2024-07-24 18:22:19.204388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.265 [2024-07-24 18:22:19.204418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:26.265 qpair failed and we were unable to recover it. 
00:27:26.265 [2024-07-24 18:22:19.204618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.265 [2024-07-24 18:22:19.204662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:26.265 qpair failed and we were unable to recover it. 00:27:26.265 [2024-07-24 18:22:19.204858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.265 [2024-07-24 18:22:19.204873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:26.265 qpair failed and we were unable to recover it. 00:27:26.265 [2024-07-24 18:22:19.205090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.265 [2024-07-24 18:22:19.205119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:26.265 qpair failed and we were unable to recover it. 00:27:26.265 [2024-07-24 18:22:19.205403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.265 [2024-07-24 18:22:19.205433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:26.265 qpair failed and we were unable to recover it. 00:27:26.266 [2024-07-24 18:22:19.205714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.266 [2024-07-24 18:22:19.205730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:26.266 qpair failed and we were unable to recover it. 00:27:26.266 [2024-07-24 18:22:19.205984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.266 [2024-07-24 18:22:19.206002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:26.266 qpair failed and we were unable to recover it. 00:27:26.266 [2024-07-24 18:22:19.206131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.266 [2024-07-24 18:22:19.206161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:26.266 qpair failed and we were unable to recover it. 00:27:26.266 [2024-07-24 18:22:19.206377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.266 [2024-07-24 18:22:19.206407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:26.266 qpair failed and we were unable to recover it. 00:27:26.266 [2024-07-24 18:22:19.206619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.266 [2024-07-24 18:22:19.206652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:26.266 qpair failed and we were unable to recover it. 00:27:26.266 [2024-07-24 18:22:19.206932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.266 [2024-07-24 18:22:19.206948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:26.266 qpair failed and we were unable to recover it. 
00:27:26.266 [2024-07-24 18:22:19.207205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.266 [2024-07-24 18:22:19.207237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:26.266 qpair failed and we were unable to recover it. 00:27:26.266 [2024-07-24 18:22:19.207514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.266 [2024-07-24 18:22:19.207546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:26.266 qpair failed and we were unable to recover it. 00:27:26.266 [2024-07-24 18:22:19.207760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.266 [2024-07-24 18:22:19.207790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:26.266 qpair failed and we were unable to recover it. 00:27:26.266 [2024-07-24 18:22:19.207955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.266 [2024-07-24 18:22:19.207985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:26.266 qpair failed and we were unable to recover it. 00:27:26.266 [2024-07-24 18:22:19.208294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.266 [2024-07-24 18:22:19.208324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:26.266 qpair failed and we were unable to recover it. 00:27:26.266 [2024-07-24 18:22:19.208570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.266 [2024-07-24 18:22:19.208587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:26.266 qpair failed and we were unable to recover it. 00:27:26.266 [2024-07-24 18:22:19.208760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.266 [2024-07-24 18:22:19.208790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:26.266 qpair failed and we were unable to recover it. 00:27:26.266 [2024-07-24 18:22:19.208964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.266 [2024-07-24 18:22:19.208994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:26.266 qpair failed and we were unable to recover it. 00:27:26.266 [2024-07-24 18:22:19.209310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.266 [2024-07-24 18:22:19.209339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:26.266 qpair failed and we were unable to recover it. 00:27:26.266 [2024-07-24 18:22:19.209567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.266 [2024-07-24 18:22:19.209600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:26.266 qpair failed and we were unable to recover it. 
00:27:26.266 [2024-07-24 18:22:19.209794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.266 [2024-07-24 18:22:19.209824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:26.266 qpair failed and we were unable to recover it. 00:27:26.266 [2024-07-24 18:22:19.210111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.266 [2024-07-24 18:22:19.210141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:26.266 qpair failed and we were unable to recover it. 00:27:26.266 [2024-07-24 18:22:19.210459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.266 [2024-07-24 18:22:19.210488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:26.266 qpair failed and we were unable to recover it. 00:27:26.266 [2024-07-24 18:22:19.210752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.266 [2024-07-24 18:22:19.210784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:26.266 qpair failed and we were unable to recover it. 00:27:26.266 [2024-07-24 18:22:19.210977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.266 [2024-07-24 18:22:19.211008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:26.266 qpair failed and we were unable to recover it. 00:27:26.266 [2024-07-24 18:22:19.211233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.266 [2024-07-24 18:22:19.211264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:26.266 qpair failed and we were unable to recover it. 00:27:26.266 [2024-07-24 18:22:19.211504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.266 [2024-07-24 18:22:19.211536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:26.266 qpair failed and we were unable to recover it. 00:27:26.266 [2024-07-24 18:22:19.211835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.266 [2024-07-24 18:22:19.211865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:26.266 qpair failed and we were unable to recover it. 00:27:26.266 [2024-07-24 18:22:19.212068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.266 [2024-07-24 18:22:19.212099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:26.266 qpair failed and we were unable to recover it. 00:27:26.266 [2024-07-24 18:22:19.212430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.266 [2024-07-24 18:22:19.212461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:26.266 qpair failed and we were unable to recover it. 
00:27:26.266 [2024-07-24 18:22:19.212667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.266 [2024-07-24 18:22:19.212698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:26.266 qpair failed and we were unable to recover it. 00:27:26.266 [2024-07-24 18:22:19.212902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.266 [2024-07-24 18:22:19.212932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:26.266 qpair failed and we were unable to recover it. 00:27:26.266 [2024-07-24 18:22:19.213147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.266 [2024-07-24 18:22:19.213179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:26.266 qpair failed and we were unable to recover it. 00:27:26.266 [2024-07-24 18:22:19.213444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.266 [2024-07-24 18:22:19.213475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:26.266 qpair failed and we were unable to recover it. 00:27:26.266 [2024-07-24 18:22:19.213680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.266 [2024-07-24 18:22:19.213697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:26.266 qpair failed and we were unable to recover it. 00:27:26.266 [2024-07-24 18:22:19.213883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.266 [2024-07-24 18:22:19.213914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:26.267 qpair failed and we were unable to recover it. 00:27:26.267 [2024-07-24 18:22:19.214152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.267 [2024-07-24 18:22:19.214182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:26.267 qpair failed and we were unable to recover it. 00:27:26.267 [2024-07-24 18:22:19.214472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.267 [2024-07-24 18:22:19.214530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:26.267 qpair failed and we were unable to recover it. 00:27:26.267 [2024-07-24 18:22:19.214733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.267 [2024-07-24 18:22:19.214763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:26.267 qpair failed and we were unable to recover it. 00:27:26.267 [2024-07-24 18:22:19.215057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.267 [2024-07-24 18:22:19.215087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:26.267 qpair failed and we were unable to recover it. 
00:27:26.267 [2024-07-24 18:22:19.215379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.267 [2024-07-24 18:22:19.215410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:26.267 qpair failed and we were unable to recover it. 00:27:26.267 [2024-07-24 18:22:19.215693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.267 [2024-07-24 18:22:19.215710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:26.267 qpair failed and we were unable to recover it. 00:27:26.267 [2024-07-24 18:22:19.215948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.267 [2024-07-24 18:22:19.215978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:26.267 qpair failed and we were unable to recover it. 00:27:26.267 [2024-07-24 18:22:19.216271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.267 [2024-07-24 18:22:19.216301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:26.267 qpair failed and we were unable to recover it. 00:27:26.267 [2024-07-24 18:22:19.216559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.267 [2024-07-24 18:22:19.216591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:26.267 qpair failed and we were unable to recover it. 00:27:26.267 [2024-07-24 18:22:19.216834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.267 [2024-07-24 18:22:19.216870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:26.267 qpair failed and we were unable to recover it. 00:27:26.267 [2024-07-24 18:22:19.217081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.267 [2024-07-24 18:22:19.217111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:26.267 qpair failed and we were unable to recover it. 00:27:26.267 [2024-07-24 18:22:19.217428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.267 [2024-07-24 18:22:19.217458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:26.267 qpair failed and we were unable to recover it. 00:27:26.267 [2024-07-24 18:22:19.217704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.267 [2024-07-24 18:22:19.217736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:26.267 qpair failed and we were unable to recover it. 00:27:26.267 [2024-07-24 18:22:19.218009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.267 [2024-07-24 18:22:19.218025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:26.267 qpair failed and we were unable to recover it. 
00:27:26.267 [2024-07-24 18:22:19.218261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.267 [2024-07-24 18:22:19.218277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:26.267 qpair failed and we were unable to recover it. 00:27:26.267 [2024-07-24 18:22:19.218509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.267 [2024-07-24 18:22:19.218525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:26.267 qpair failed and we were unable to recover it. 00:27:26.267 [2024-07-24 18:22:19.218782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.267 [2024-07-24 18:22:19.218797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:26.267 qpair failed and we were unable to recover it. 00:27:26.267 [2024-07-24 18:22:19.218908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.267 [2024-07-24 18:22:19.218924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:26.267 qpair failed and we were unable to recover it. 00:27:26.267 [2024-07-24 18:22:19.219163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.267 [2024-07-24 18:22:19.219192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:26.267 qpair failed and we were unable to recover it. 00:27:26.267 [2024-07-24 18:22:19.219406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.267 [2024-07-24 18:22:19.219437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:26.267 qpair failed and we were unable to recover it. 00:27:26.267 [2024-07-24 18:22:19.219718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.267 [2024-07-24 18:22:19.219750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:26.267 qpair failed and we were unable to recover it. 00:27:26.267 [2024-07-24 18:22:19.219906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.267 [2024-07-24 18:22:19.219921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:26.267 qpair failed and we were unable to recover it. 00:27:26.267 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3562925 Killed "${NVMF_APP[@]}" "$@" 00:27:26.267 [2024-07-24 18:22:19.220160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.267 [2024-07-24 18:22:19.220193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:26.267 qpair failed and we were unable to recover it. 
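The "Killed" line above marks the point where target_disconnect.sh (line 36) deliberately kills the running NVMe-oF target, so nothing is left listening on 10.0.0.2:4420. errno 111 on Linux is ECONNREFUSED: the target side answers each incoming SYN with a RST, connect() in posix_sock_create fails, and nvme_tcp_qpair_connect_sock gives up on the qpair. The standalone sketch below (plain POSIX sockets; the address and port are copied from the log, and this is illustrative code, not the SPDK implementation) reproduces the same errno whenever no listener is present:

/* Minimal sketch showing why the log is full of "connect() failed,
 * errno = 111": with no listener on the far side, the kernel answers
 * the SYN with RST and connect() returns ECONNREFUSED (111 on Linux).
 * Address and port simply mirror the log. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                  /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* While the target is down this prints:
         * connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}

Compiled and run while the target is down, it prints connect() failed, errno = 111 (Connection refused), which is exactly the first line of each failed attempt in this log.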
00:27:26.267 [2024-07-24 18:22:19.220405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.267 [2024-07-24 18:22:19.220420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420
00:27:26.267 qpair failed and we were unable to recover it.
00:27:26.267 [2024-07-24 18:22:19.220661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.267 [2024-07-24 18:22:19.220679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420
00:27:26.267 qpair failed and we were unable to recover it.
00:27:26.267 18:22:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:27:26.267 [2024-07-24 18:22:19.220806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.267 [2024-07-24 18:22:19.220838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420
00:27:26.267 qpair failed and we were unable to recover it.
00:27:26.267 18:22:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:27:26.267 [2024-07-24 18:22:19.221056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.267 [2024-07-24 18:22:19.221088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420
00:27:26.267 qpair failed and we were unable to recover it.
00:27:26.267 18:22:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:27:26.267 [2024-07-24 18:22:19.221399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.267 [2024-07-24 18:22:19.221432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420
00:27:26.267 qpair failed and we were unable to recover it.
00:27:26.267 [2024-07-24 18:22:19.221611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.267 [2024-07-24 18:22:19.221643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420
00:27:26.267 qpair failed and we were unable to recover it.
00:27:26.267 18:22:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable
00:27:26.267 [2024-07-24 18:22:19.221796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.267 [2024-07-24 18:22:19.221814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420
00:27:26.267 qpair failed and we were unable to recover it.
00:27:26.267 18:22:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:26.267 [2024-07-24 18:22:19.222002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.267 [2024-07-24 18:22:19.222034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420
00:27:26.267 qpair failed and we were unable to recover it.
00:27:26.267 [2024-07-24 18:22:19.222193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.267 [2024-07-24 18:22:19.222223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420
00:27:26.267 qpair failed and we were unable to recover it.
[... identical entries repeat for tqpair=0x7f7844000b90 from 18:22:19.222532 through 18:22:19.227000 ...]
00:27:26.268 [2024-07-24 18:22:19.227336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.268 [2024-07-24 18:22:19.227366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420
00:27:26.268 qpair failed and we were unable to recover it.
00:27:26.268 [2024-07-24 18:22:19.227636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.268 [2024-07-24 18:22:19.227667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420
00:27:26.268 qpair failed and we were unable to recover it.
00:27:26.268 [2024-07-24 18:22:19.227879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.268 [2024-07-24 18:22:19.227896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420
00:27:26.268 qpair failed and we were unable to recover it.
00:27:26.268 [2024-07-24 18:22:19.228029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.268 [2024-07-24 18:22:19.228059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420
00:27:26.268 qpair failed and we were unable to recover it.
00:27:26.268 [2024-07-24 18:22:19.228217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.268 [2024-07-24 18:22:19.228247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420
00:27:26.268 qpair failed and we were unable to recover it.
00:27:26.268 18:22:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3563796
00:27:26.268 18:22:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3563796
00:27:26.268 18:22:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:27:26.268 [2024-07-24 18:22:19.229708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.268 [2024-07-24 18:22:19.229746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420
00:27:26.268 qpair failed and we were unable to recover it.
00:27:26.268 18:22:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 3563796 ']'
00:27:26.268 [2024-07-24 18:22:19.229881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.268 [2024-07-24 18:22:19.229900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420
00:27:26.268 qpair failed and we were unable to recover it.
00:27:26.268 18:22:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:26.268 [2024-07-24 18:22:19.230111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.268 [2024-07-24 18:22:19.230147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420
00:27:26.268 qpair failed and we were unable to recover it.
00:27:26.268 18:22:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100
00:27:26.268 [2024-07-24 18:22:19.230415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.268 [2024-07-24 18:22:19.230449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420
00:27:26.268 qpair failed and we were unable to recover it.
00:27:26.268 18:22:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:27:26.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:26.268 [2024-07-24 18:22:19.230700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.268 [2024-07-24 18:22:19.230719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420
00:27:26.268 qpair failed and we were unable to recover it.
00:27:26.268 18:22:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable
00:27:26.268 [2024-07-24 18:22:19.230908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.268 [2024-07-24 18:22:19.230942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420
00:27:26.268 qpair failed and we were unable to recover it.
00:27:26.268 18:22:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:26.268 [2024-07-24 18:22:19.231262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.268 [2024-07-24 18:22:19.231295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420
00:27:26.268 qpair failed and we were unable to recover it.
00:27:26.268 [2024-07-24 18:22:19.231582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.268 [2024-07-24 18:22:19.231615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420
00:27:26.268 qpair failed and we were unable to recover it.
00:27:26.268 [2024-07-24 18:22:19.232518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.268 [2024-07-24 18:22:19.232544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420
00:27:26.268 qpair failed and we were unable to recover it.
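Between the connection errors, the shell trace shows the recovery path of the test: disconnect_init calls nvmfappstart -m 0xF0, which launches a fresh nvmf_tgt (pid 3563796) inside the cvl_0_0_ns_spdk network namespace and then calls waitforlisten, polling up to max_retries=100 times for the new process to accept connections on the RPC socket /var/tmp/spdk.sock. The real waitforlisten is a shell helper in autotest_common.sh; the C sketch below only illustrates the same bounded-poll idea (the 100 ms back-off is an assumed value, not taken from the log):

/* Sketch of the bounded wait that waitforlisten performs: poll until some
 * process accepts connections on the UNIX domain socket, giving up after
 * max_retries attempts (100 in this log). Illustrative only; the actual
 * helper is shell code, not this C. */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

static int wait_for_listen(const char *path, int max_retries)
{
    struct sockaddr_un addr = {0};
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

    for (int attempt = 0; attempt < max_retries; attempt++) {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            close(fd);
            return 0;            /* RPC socket is up */
        }
        close(fd);
        usleep(100 * 1000);      /* assumed 100 ms back-off between attempts */
    }
    return -1;                   /* retry budget exhausted */
}

int main(void)
{
    if (wait_for_listen("/var/tmp/spdk.sock", 100) == 0)
        puts("target is listening on /var/tmp/spdk.sock");
    else
        puts("gave up waiting for /var/tmp/spdk.sock");
    return 0;
}

If the socket accepts before the retry budget runs out, the harness proceeds with the test; otherwise it fails fast instead of hanging, which is the point of the max_retries bound.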
[... identical entries continue for tqpair=0x7f7844000b90 from 18:22:19.232811 through 18:22:19.235477 ...]
00:27:26.269 [2024-07-24 18:22:19.235650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.269 [2024-07-24 18:22:19.235695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:26.269 qpair failed and we were unable to recover it.
00:27:26.269 [2024-07-24 18:22:19.235928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.269 [2024-07-24 18:22:19.235971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7834000b90 with addr=10.0.0.2, port=4420
00:27:26.269 qpair failed and we were unable to recover it.
00:27:26.269 [2024-07-24 18:22:19.236277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.269 [2024-07-24 18:22:19.236312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.269 qpair failed and we were unable to recover it.
[... identical entries then repeat for tqpair=0x7f783c000b90 from 18:22:19.236530 through 18:22:19.240279 ...]
00:27:26.269 [2024-07-24 18:22:19.240457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.269 [2024-07-24 18:22:19.240468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.270 qpair failed and we were unable to recover it. 00:27:26.270 [2024-07-24 18:22:19.240591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.270 [2024-07-24 18:22:19.240604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.270 qpair failed and we were unable to recover it. 00:27:26.270 [2024-07-24 18:22:19.240709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.270 [2024-07-24 18:22:19.240720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.270 qpair failed and we were unable to recover it. 00:27:26.270 [2024-07-24 18:22:19.240829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.270 [2024-07-24 18:22:19.240840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.270 qpair failed and we were unable to recover it. 00:27:26.270 [2024-07-24 18:22:19.241006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.270 [2024-07-24 18:22:19.241017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.270 qpair failed and we were unable to recover it. 00:27:26.270 [2024-07-24 18:22:19.241252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.270 [2024-07-24 18:22:19.241263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.270 qpair failed and we were unable to recover it. 00:27:26.270 [2024-07-24 18:22:19.241427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.270 [2024-07-24 18:22:19.241438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.270 qpair failed and we were unable to recover it. 00:27:26.270 [2024-07-24 18:22:19.241620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.270 [2024-07-24 18:22:19.241632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.270 qpair failed and we were unable to recover it. 00:27:26.270 [2024-07-24 18:22:19.241866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.270 [2024-07-24 18:22:19.241879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.270 qpair failed and we were unable to recover it. 00:27:26.270 [2024-07-24 18:22:19.242101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.270 [2024-07-24 18:22:19.242111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.270 qpair failed and we were unable to recover it. 
00:27:26.270 [2024-07-24 18:22:19.242299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.270 [2024-07-24 18:22:19.242311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.270 qpair failed and we were unable to recover it. 00:27:26.270 [2024-07-24 18:22:19.242583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.270 [2024-07-24 18:22:19.242594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.270 qpair failed and we were unable to recover it. 00:27:26.270 [2024-07-24 18:22:19.242704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.270 [2024-07-24 18:22:19.242715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.270 qpair failed and we were unable to recover it. 00:27:26.270 [2024-07-24 18:22:19.242927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.270 [2024-07-24 18:22:19.242937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.270 qpair failed and we were unable to recover it. 00:27:26.270 [2024-07-24 18:22:19.243111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.270 [2024-07-24 18:22:19.243122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.270 qpair failed and we were unable to recover it. 00:27:26.270 [2024-07-24 18:22:19.243325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.270 [2024-07-24 18:22:19.243335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.270 qpair failed and we were unable to recover it. 00:27:26.270 [2024-07-24 18:22:19.243548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.270 [2024-07-24 18:22:19.243558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.270 qpair failed and we were unable to recover it. 00:27:26.270 [2024-07-24 18:22:19.243760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.270 [2024-07-24 18:22:19.243771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.270 qpair failed and we were unable to recover it. 00:27:26.270 [2024-07-24 18:22:19.243945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.270 [2024-07-24 18:22:19.243955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.270 qpair failed and we were unable to recover it. 00:27:26.270 [2024-07-24 18:22:19.244210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.270 [2024-07-24 18:22:19.244220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.270 qpair failed and we were unable to recover it. 
00:27:26.270 [2024-07-24 18:22:19.244455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.270 [2024-07-24 18:22:19.244465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.270 qpair failed and we were unable to recover it. 00:27:26.270 [2024-07-24 18:22:19.244732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.270 [2024-07-24 18:22:19.244745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.270 qpair failed and we were unable to recover it. 00:27:26.270 [2024-07-24 18:22:19.244918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.270 [2024-07-24 18:22:19.244930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.270 qpair failed and we were unable to recover it. 00:27:26.270 [2024-07-24 18:22:19.245036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.270 [2024-07-24 18:22:19.245046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.270 qpair failed and we were unable to recover it. 00:27:26.270 [2024-07-24 18:22:19.245207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.270 [2024-07-24 18:22:19.245219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.270 qpair failed and we were unable to recover it. 00:27:26.270 [2024-07-24 18:22:19.245458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.270 [2024-07-24 18:22:19.245468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.270 qpair failed and we were unable to recover it. 00:27:26.270 [2024-07-24 18:22:19.245649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.270 [2024-07-24 18:22:19.245661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.270 qpair failed and we were unable to recover it. 00:27:26.270 [2024-07-24 18:22:19.245822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.270 [2024-07-24 18:22:19.245833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.270 qpair failed and we were unable to recover it. 00:27:26.270 [2024-07-24 18:22:19.245995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.270 [2024-07-24 18:22:19.246006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.270 qpair failed and we were unable to recover it. 00:27:26.270 [2024-07-24 18:22:19.246192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.270 [2024-07-24 18:22:19.246202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.270 qpair failed and we were unable to recover it. 
00:27:26.270 [2024-07-24 18:22:19.246349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.270 [2024-07-24 18:22:19.246363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.270 qpair failed and we were unable to recover it. 00:27:26.270 [2024-07-24 18:22:19.246474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.270 [2024-07-24 18:22:19.246484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.270 qpair failed and we were unable to recover it. 00:27:26.270 [2024-07-24 18:22:19.246661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.270 [2024-07-24 18:22:19.246672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.270 qpair failed and we were unable to recover it. 00:27:26.270 [2024-07-24 18:22:19.246773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.270 [2024-07-24 18:22:19.246783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.270 qpair failed and we were unable to recover it. 00:27:26.270 [2024-07-24 18:22:19.246937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.270 [2024-07-24 18:22:19.246947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.270 qpair failed and we were unable to recover it. 00:27:26.270 [2024-07-24 18:22:19.247213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.270 [2024-07-24 18:22:19.247224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.270 qpair failed and we were unable to recover it. 00:27:26.270 [2024-07-24 18:22:19.247388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.270 [2024-07-24 18:22:19.247399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.270 qpair failed and we were unable to recover it. 00:27:26.270 [2024-07-24 18:22:19.247568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.270 [2024-07-24 18:22:19.247580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.270 qpair failed and we were unable to recover it. 00:27:26.270 [2024-07-24 18:22:19.247749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.271 [2024-07-24 18:22:19.247760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.271 qpair failed and we were unable to recover it. 00:27:26.271 [2024-07-24 18:22:19.247977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.271 [2024-07-24 18:22:19.247987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.271 qpair failed and we were unable to recover it. 
00:27:26.271 [2024-07-24 18:22:19.248083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.271 [2024-07-24 18:22:19.248093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.271 qpair failed and we were unable to recover it. 00:27:26.271 [2024-07-24 18:22:19.248251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.271 [2024-07-24 18:22:19.248261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.271 qpair failed and we were unable to recover it. 00:27:26.271 [2024-07-24 18:22:19.248425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.271 [2024-07-24 18:22:19.248435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.271 qpair failed and we were unable to recover it. 00:27:26.271 [2024-07-24 18:22:19.248550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.271 [2024-07-24 18:22:19.248562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.271 qpair failed and we were unable to recover it. 00:27:26.271 [2024-07-24 18:22:19.248656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.271 [2024-07-24 18:22:19.248667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.271 qpair failed and we were unable to recover it. 00:27:26.271 [2024-07-24 18:22:19.248777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.271 [2024-07-24 18:22:19.248787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.271 qpair failed and we were unable to recover it. 00:27:26.271 [2024-07-24 18:22:19.249034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.271 [2024-07-24 18:22:19.249045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.271 qpair failed and we were unable to recover it. 00:27:26.271 [2024-07-24 18:22:19.249353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.271 [2024-07-24 18:22:19.249363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.271 qpair failed and we were unable to recover it. 00:27:26.271 [2024-07-24 18:22:19.249608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.271 [2024-07-24 18:22:19.249619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.271 qpair failed and we were unable to recover it. 00:27:26.271 [2024-07-24 18:22:19.249721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.271 [2024-07-24 18:22:19.249731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.271 qpair failed and we were unable to recover it. 
00:27:26.271 [2024-07-24 18:22:19.249895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.271 [2024-07-24 18:22:19.249905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.271 qpair failed and we were unable to recover it. 00:27:26.271 [2024-07-24 18:22:19.250063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.271 [2024-07-24 18:22:19.250073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.271 qpair failed and we were unable to recover it. 00:27:26.271 [2024-07-24 18:22:19.250317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.271 [2024-07-24 18:22:19.250327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.271 qpair failed and we were unable to recover it. 00:27:26.271 [2024-07-24 18:22:19.250485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.271 [2024-07-24 18:22:19.250505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.271 qpair failed and we were unable to recover it. 00:27:26.271 [2024-07-24 18:22:19.250631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.271 [2024-07-24 18:22:19.250641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.271 qpair failed and we were unable to recover it. 00:27:26.271 [2024-07-24 18:22:19.250792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.271 [2024-07-24 18:22:19.250803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.271 qpair failed and we were unable to recover it. 00:27:26.271 [2024-07-24 18:22:19.250882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.271 [2024-07-24 18:22:19.250893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.271 qpair failed and we were unable to recover it. 00:27:26.271 [2024-07-24 18:22:19.250973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.271 [2024-07-24 18:22:19.250984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.271 qpair failed and we were unable to recover it. 00:27:26.271 [2024-07-24 18:22:19.251103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.271 [2024-07-24 18:22:19.251113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.271 qpair failed and we were unable to recover it. 00:27:26.271 [2024-07-24 18:22:19.251223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.271 [2024-07-24 18:22:19.251234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.271 qpair failed and we were unable to recover it. 
00:27:26.271 [2024-07-24 18:22:19.251502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.271 [2024-07-24 18:22:19.251513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.271 qpair failed and we were unable to recover it. 00:27:26.271 [2024-07-24 18:22:19.251617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.271 [2024-07-24 18:22:19.251628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.271 qpair failed and we were unable to recover it. 00:27:26.271 [2024-07-24 18:22:19.251784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.271 [2024-07-24 18:22:19.251795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.271 qpair failed and we were unable to recover it. 00:27:26.271 [2024-07-24 18:22:19.251917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.271 [2024-07-24 18:22:19.251927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.271 qpair failed and we were unable to recover it. 00:27:26.271 [2024-07-24 18:22:19.252149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.271 [2024-07-24 18:22:19.252159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.271 qpair failed and we were unable to recover it. 00:27:26.271 [2024-07-24 18:22:19.252322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.271 [2024-07-24 18:22:19.252333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.271 qpair failed and we were unable to recover it. 00:27:26.271 [2024-07-24 18:22:19.252573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.271 [2024-07-24 18:22:19.252584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.271 qpair failed and we were unable to recover it. 00:27:26.271 [2024-07-24 18:22:19.252678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.271 [2024-07-24 18:22:19.252689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.271 qpair failed and we were unable to recover it. 00:27:26.271 [2024-07-24 18:22:19.252800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.271 [2024-07-24 18:22:19.252811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.271 qpair failed and we were unable to recover it. 00:27:26.271 [2024-07-24 18:22:19.252969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.271 [2024-07-24 18:22:19.252979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.271 qpair failed and we were unable to recover it. 
00:27:26.271 [2024-07-24 18:22:19.253080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.271 [2024-07-24 18:22:19.253093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.271 qpair failed and we were unable to recover it. 00:27:26.272 [2024-07-24 18:22:19.253340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.272 [2024-07-24 18:22:19.253350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.272 qpair failed and we were unable to recover it. 00:27:26.272 [2024-07-24 18:22:19.253451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.272 [2024-07-24 18:22:19.253461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.272 qpair failed and we were unable to recover it. 00:27:26.272 [2024-07-24 18:22:19.253761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.272 [2024-07-24 18:22:19.253772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.272 qpair failed and we were unable to recover it. 00:27:26.272 [2024-07-24 18:22:19.253941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.272 [2024-07-24 18:22:19.253951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.272 qpair failed and we were unable to recover it. 00:27:26.272 [2024-07-24 18:22:19.254136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.272 [2024-07-24 18:22:19.254146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.272 qpair failed and we were unable to recover it. 00:27:26.272 [2024-07-24 18:22:19.254397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.272 [2024-07-24 18:22:19.254407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.272 qpair failed and we were unable to recover it. 00:27:26.272 [2024-07-24 18:22:19.254657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.272 [2024-07-24 18:22:19.254667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.272 qpair failed and we were unable to recover it. 00:27:26.272 [2024-07-24 18:22:19.254819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.272 [2024-07-24 18:22:19.254829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.272 qpair failed and we were unable to recover it. 00:27:26.272 [2024-07-24 18:22:19.254949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.272 [2024-07-24 18:22:19.254960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.272 qpair failed and we were unable to recover it. 
00:27:26.272 [2024-07-24 18:22:19.255045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.272 [2024-07-24 18:22:19.255055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.272 qpair failed and we were unable to recover it. 00:27:26.272 [2024-07-24 18:22:19.255230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.272 [2024-07-24 18:22:19.255240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.272 qpair failed and we were unable to recover it. 00:27:26.272 [2024-07-24 18:22:19.255412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.272 [2024-07-24 18:22:19.255422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.272 qpair failed and we were unable to recover it. 00:27:26.272 [2024-07-24 18:22:19.255581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.272 [2024-07-24 18:22:19.255591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.272 qpair failed and we were unable to recover it. 00:27:26.272 [2024-07-24 18:22:19.255696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.272 [2024-07-24 18:22:19.255707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.272 qpair failed and we were unable to recover it. 00:27:26.272 [2024-07-24 18:22:19.255802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.272 [2024-07-24 18:22:19.255812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.272 qpair failed and we were unable to recover it. 00:27:26.272 [2024-07-24 18:22:19.255980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.272 [2024-07-24 18:22:19.255990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.272 qpair failed and we were unable to recover it. 00:27:26.272 [2024-07-24 18:22:19.256083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.272 [2024-07-24 18:22:19.256093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.272 qpair failed and we were unable to recover it. 00:27:26.272 [2024-07-24 18:22:19.256300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.272 [2024-07-24 18:22:19.256310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.272 qpair failed and we were unable to recover it. 00:27:26.272 [2024-07-24 18:22:19.256499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.272 [2024-07-24 18:22:19.256510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.272 qpair failed and we were unable to recover it. 
00:27:26.272 [2024-07-24 18:22:19.256680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.272 [2024-07-24 18:22:19.256690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.272 qpair failed and we were unable to recover it. 00:27:26.272 [2024-07-24 18:22:19.256876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.272 [2024-07-24 18:22:19.256886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.272 qpair failed and we were unable to recover it. 00:27:26.272 [2024-07-24 18:22:19.257109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.272 [2024-07-24 18:22:19.257120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.272 qpair failed and we were unable to recover it. 00:27:26.272 [2024-07-24 18:22:19.257302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.272 [2024-07-24 18:22:19.257312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.272 qpair failed and we were unable to recover it. 00:27:26.272 [2024-07-24 18:22:19.257462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.272 [2024-07-24 18:22:19.257473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.272 qpair failed and we were unable to recover it. 00:27:26.272 [2024-07-24 18:22:19.257582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.272 [2024-07-24 18:22:19.257592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.272 qpair failed and we were unable to recover it. 00:27:26.272 [2024-07-24 18:22:19.257695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.272 [2024-07-24 18:22:19.257705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.272 qpair failed and we were unable to recover it. 00:27:26.272 [2024-07-24 18:22:19.257943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.272 [2024-07-24 18:22:19.257953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.272 qpair failed and we were unable to recover it. 00:27:26.272 [2024-07-24 18:22:19.258110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.272 [2024-07-24 18:22:19.258120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.272 qpair failed and we were unable to recover it. 00:27:26.272 [2024-07-24 18:22:19.258337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.272 [2024-07-24 18:22:19.258347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.272 qpair failed and we were unable to recover it. 
00:27:26.272 [2024-07-24 18:22:19.258579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.272 [2024-07-24 18:22:19.258590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.272 qpair failed and we were unable to recover it. 00:27:26.272 [2024-07-24 18:22:19.258736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.272 [2024-07-24 18:22:19.258746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.272 qpair failed and we were unable to recover it. 00:27:26.272 [2024-07-24 18:22:19.258864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.272 [2024-07-24 18:22:19.258875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.272 qpair failed and we were unable to recover it. 00:27:26.272 [2024-07-24 18:22:19.258967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.272 [2024-07-24 18:22:19.258977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.272 qpair failed and we were unable to recover it. 00:27:26.272 [2024-07-24 18:22:19.259128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.272 [2024-07-24 18:22:19.259139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.272 qpair failed and we were unable to recover it. 00:27:26.272 [2024-07-24 18:22:19.259375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.272 [2024-07-24 18:22:19.259385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.272 qpair failed and we were unable to recover it. 00:27:26.272 [2024-07-24 18:22:19.259558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.272 [2024-07-24 18:22:19.259569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.272 qpair failed and we were unable to recover it. 00:27:26.272 [2024-07-24 18:22:19.259637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.272 [2024-07-24 18:22:19.259647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.272 qpair failed and we were unable to recover it. 00:27:26.272 [2024-07-24 18:22:19.259861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.272 [2024-07-24 18:22:19.259871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.273 qpair failed and we were unable to recover it. 00:27:26.273 [2024-07-24 18:22:19.260023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.273 [2024-07-24 18:22:19.260033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.273 qpair failed and we were unable to recover it. 
00:27:26.273 [2024-07-24 18:22:19.260218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.273 [2024-07-24 18:22:19.260231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.273 qpair failed and we were unable to recover it. 00:27:26.273 [2024-07-24 18:22:19.260376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.273 [2024-07-24 18:22:19.260386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.273 qpair failed and we were unable to recover it. 00:27:26.273 [2024-07-24 18:22:19.260475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.273 [2024-07-24 18:22:19.260485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.273 qpair failed and we were unable to recover it. 00:27:26.273 [2024-07-24 18:22:19.260650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.273 [2024-07-24 18:22:19.260661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.273 qpair failed and we were unable to recover it. 00:27:26.273 [2024-07-24 18:22:19.260817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.273 [2024-07-24 18:22:19.260827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.273 qpair failed and we were unable to recover it. 00:27:26.273 [2024-07-24 18:22:19.260938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.273 [2024-07-24 18:22:19.260948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.273 qpair failed and we were unable to recover it. 00:27:26.273 [2024-07-24 18:22:19.261036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.273 [2024-07-24 18:22:19.261046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.273 qpair failed and we were unable to recover it. 00:27:26.273 [2024-07-24 18:22:19.261148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.273 [2024-07-24 18:22:19.261159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.273 qpair failed and we were unable to recover it. 00:27:26.273 [2024-07-24 18:22:19.261323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.273 [2024-07-24 18:22:19.261333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.273 qpair failed and we were unable to recover it. 00:27:26.273 [2024-07-24 18:22:19.261445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.273 [2024-07-24 18:22:19.261455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.273 qpair failed and we were unable to recover it. 
00:27:26.273 [2024-07-24 18:22:19.261556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.273 [2024-07-24 18:22:19.261567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.273 qpair failed and we were unable to recover it. 00:27:26.273 [2024-07-24 18:22:19.261659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.273 [2024-07-24 18:22:19.261669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.273 qpair failed and we were unable to recover it. 00:27:26.273 [2024-07-24 18:22:19.261756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.273 [2024-07-24 18:22:19.261766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.273 qpair failed and we were unable to recover it. 00:27:26.273 [2024-07-24 18:22:19.261873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.273 [2024-07-24 18:22:19.261883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.273 qpair failed and we were unable to recover it. 00:27:26.273 [2024-07-24 18:22:19.262027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.273 [2024-07-24 18:22:19.262038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.273 qpair failed and we were unable to recover it. 00:27:26.273 [2024-07-24 18:22:19.262114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.273 [2024-07-24 18:22:19.262124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.273 qpair failed and we were unable to recover it. 00:27:26.273 [2024-07-24 18:22:19.262267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.273 [2024-07-24 18:22:19.262277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.273 qpair failed and we were unable to recover it. 00:27:26.273 [2024-07-24 18:22:19.262369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.273 [2024-07-24 18:22:19.262379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.273 qpair failed and we were unable to recover it. 00:27:26.273 [2024-07-24 18:22:19.262480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.273 [2024-07-24 18:22:19.262494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.273 qpair failed and we were unable to recover it. 00:27:26.273 [2024-07-24 18:22:19.262573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.273 [2024-07-24 18:22:19.262583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.273 qpair failed and we were unable to recover it. 
00:27:26.273 [2024-07-24 18:22:19.262667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.273 [2024-07-24 18:22:19.262677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.273 qpair failed and we were unable to recover it. 00:27:26.273 [2024-07-24 18:22:19.262829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.273 [2024-07-24 18:22:19.262839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.273 qpair failed and we were unable to recover it. 00:27:26.273 [2024-07-24 18:22:19.262955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.273 [2024-07-24 18:22:19.262966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.273 qpair failed and we were unable to recover it. 00:27:26.273 [2024-07-24 18:22:19.263060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.273 [2024-07-24 18:22:19.263071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.273 qpair failed and we were unable to recover it. 00:27:26.273 [2024-07-24 18:22:19.263143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.273 [2024-07-24 18:22:19.263153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.273 qpair failed and we were unable to recover it. 00:27:26.273 [2024-07-24 18:22:19.263260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.273 [2024-07-24 18:22:19.263270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.273 qpair failed and we were unable to recover it. 00:27:26.273 [2024-07-24 18:22:19.263354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.273 [2024-07-24 18:22:19.263364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.273 qpair failed and we were unable to recover it. 00:27:26.273 [2024-07-24 18:22:19.263517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.273 [2024-07-24 18:22:19.263528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.273 qpair failed and we were unable to recover it. 00:27:26.273 [2024-07-24 18:22:19.263608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.273 [2024-07-24 18:22:19.263618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.273 qpair failed and we were unable to recover it. 00:27:26.273 [2024-07-24 18:22:19.263777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.273 [2024-07-24 18:22:19.263788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.273 qpair failed and we were unable to recover it. 
00:27:26.273 [2024-07-24 18:22:19.263940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.273 [2024-07-24 18:22:19.263950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.273 qpair failed and we were unable to recover it. 00:27:26.273 [2024-07-24 18:22:19.264043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.273 [2024-07-24 18:22:19.264054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.273 qpair failed and we were unable to recover it. 00:27:26.273 [2024-07-24 18:22:19.264151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.273 [2024-07-24 18:22:19.264161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.273 qpair failed and we were unable to recover it. 00:27:26.273 [2024-07-24 18:22:19.264246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.273 [2024-07-24 18:22:19.264256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.273 qpair failed and we were unable to recover it. 00:27:26.273 [2024-07-24 18:22:19.264346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.273 [2024-07-24 18:22:19.264356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.273 qpair failed and we were unable to recover it. 00:27:26.273 [2024-07-24 18:22:19.264446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.273 [2024-07-24 18:22:19.264457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.273 qpair failed and we were unable to recover it. 00:27:26.273 [2024-07-24 18:22:19.264553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.274 [2024-07-24 18:22:19.264564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.274 qpair failed and we were unable to recover it. 00:27:26.274 [2024-07-24 18:22:19.264710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.274 [2024-07-24 18:22:19.264720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.274 qpair failed and we were unable to recover it. 00:27:26.274 [2024-07-24 18:22:19.264796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.274 [2024-07-24 18:22:19.264806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.274 qpair failed and we were unable to recover it. 00:27:26.274 [2024-07-24 18:22:19.264911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.274 [2024-07-24 18:22:19.264921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.274 qpair failed and we were unable to recover it. 
00:27:26.274 [2024-07-24 18:22:19.265002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.274 [2024-07-24 18:22:19.265014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.274 qpair failed and we were unable to recover it.
00:27:26.274 [2024-07-24 18:22:19.265100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.274 [2024-07-24 18:22:19.265110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.274 qpair failed and we were unable to recover it.
00:27:26.274 [2024-07-24 18:22:19.265189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.274 [2024-07-24 18:22:19.265199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.274 qpair failed and we were unable to recover it.
00:27:26.274 [2024-07-24 18:22:19.265375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.274 [2024-07-24 18:22:19.265385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.274 qpair failed and we were unable to recover it.
00:27:26.274 [2024-07-24 18:22:19.265484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.274 [2024-07-24 18:22:19.265499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.274 qpair failed and we were unable to recover it.
00:27:26.274 [2024-07-24 18:22:19.265662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.274 [2024-07-24 18:22:19.265672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.274 qpair failed and we were unable to recover it.
00:27:26.274 [2024-07-24 18:22:19.265833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.274 [2024-07-24 18:22:19.265843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.274 qpair failed and we were unable to recover it.
00:27:26.274 [2024-07-24 18:22:19.265988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.274 [2024-07-24 18:22:19.265998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.274 qpair failed and we were unable to recover it.
00:27:26.274 [2024-07-24 18:22:19.266091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.274 [2024-07-24 18:22:19.266101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.274 qpair failed and we were unable to recover it.
00:27:26.274 [2024-07-24 18:22:19.266247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.274 [2024-07-24 18:22:19.266257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.274 qpair failed and we were unable to recover it.
00:27:26.274 [2024-07-24 18:22:19.266420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.274 [2024-07-24 18:22:19.266430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.274 qpair failed and we were unable to recover it.
00:27:26.274 [2024-07-24 18:22:19.266511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.274 [2024-07-24 18:22:19.266522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.274 qpair failed and we were unable to recover it.
00:27:26.274 [2024-07-24 18:22:19.266666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.274 [2024-07-24 18:22:19.266676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.274 qpair failed and we were unable to recover it.
00:27:26.274 [2024-07-24 18:22:19.266789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.274 [2024-07-24 18:22:19.266799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.274 qpair failed and we were unable to recover it.
00:27:26.274 [2024-07-24 18:22:19.266893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.274 [2024-07-24 18:22:19.266903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.274 qpair failed and we were unable to recover it.
00:27:26.274 [2024-07-24 18:22:19.267004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.274 [2024-07-24 18:22:19.267014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.274 qpair failed and we were unable to recover it.
00:27:26.274 [2024-07-24 18:22:19.267113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.274 [2024-07-24 18:22:19.267123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.274 qpair failed and we were unable to recover it.
00:27:26.274 [2024-07-24 18:22:19.267203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.274 [2024-07-24 18:22:19.267214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.274 qpair failed and we were unable to recover it.
00:27:26.274 [2024-07-24 18:22:19.267287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.274 [2024-07-24 18:22:19.267297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.274 qpair failed and we were unable to recover it.
00:27:26.274 [2024-07-24 18:22:19.267455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.274 [2024-07-24 18:22:19.267465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.274 qpair failed and we were unable to recover it.
00:27:26.274 [2024-07-24 18:22:19.267554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.274 [2024-07-24 18:22:19.267565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.274 qpair failed and we were unable to recover it.
00:27:26.274 [2024-07-24 18:22:19.267643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.274 [2024-07-24 18:22:19.267652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.274 qpair failed and we were unable to recover it.
00:27:26.274 [2024-07-24 18:22:19.267737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.274 [2024-07-24 18:22:19.267747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.274 qpair failed and we were unable to recover it.
00:27:26.274 [2024-07-24 18:22:19.267832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.274 [2024-07-24 18:22:19.267842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.274 qpair failed and we were unable to recover it.
00:27:26.274 [2024-07-24 18:22:19.267929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.274 [2024-07-24 18:22:19.267939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.274 qpair failed and we were unable to recover it.
00:27:26.274 [2024-07-24 18:22:19.268027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.274 [2024-07-24 18:22:19.268037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.274 qpair failed and we were unable to recover it.
00:27:26.274 [2024-07-24 18:22:19.268179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.274 [2024-07-24 18:22:19.268189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.274 qpair failed and we were unable to recover it.
00:27:26.274 [2024-07-24 18:22:19.268337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.274 [2024-07-24 18:22:19.268347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.274 qpair failed and we were unable to recover it.
00:27:26.274 [2024-07-24 18:22:19.268441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.274 [2024-07-24 18:22:19.268451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.274 qpair failed and we were unable to recover it.
00:27:26.274 [2024-07-24 18:22:19.268547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.274 [2024-07-24 18:22:19.268558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.274 qpair failed and we were unable to recover it.
00:27:26.274 [2024-07-24 18:22:19.268638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.274 [2024-07-24 18:22:19.268657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.274 qpair failed and we were unable to recover it.
00:27:26.274 [2024-07-24 18:22:19.268740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.274 [2024-07-24 18:22:19.268750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.274 qpair failed and we were unable to recover it.
00:27:26.274 [2024-07-24 18:22:19.268841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.274 [2024-07-24 18:22:19.268851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.274 qpair failed and we were unable to recover it.
00:27:26.274 [2024-07-24 18:22:19.268923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.274 [2024-07-24 18:22:19.268933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.274 qpair failed and we were unable to recover it.
00:27:26.274 [2024-07-24 18:22:19.269081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.275 [2024-07-24 18:22:19.269091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.275 qpair failed and we were unable to recover it.
00:27:26.275 [2024-07-24 18:22:19.269199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.275 [2024-07-24 18:22:19.269209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.275 qpair failed and we were unable to recover it.
00:27:26.275 [2024-07-24 18:22:19.269351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.275 [2024-07-24 18:22:19.269362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.275 qpair failed and we were unable to recover it.
00:27:26.275 [2024-07-24 18:22:19.269446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.275 [2024-07-24 18:22:19.269456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.275 qpair failed and we were unable to recover it.
00:27:26.275 [2024-07-24 18:22:19.269564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.275 [2024-07-24 18:22:19.269575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.275 qpair failed and we were unable to recover it.
00:27:26.275 [2024-07-24 18:22:19.269648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.275 [2024-07-24 18:22:19.269659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.275 qpair failed and we were unable to recover it.
00:27:26.275 [2024-07-24 18:22:19.269743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.275 [2024-07-24 18:22:19.269755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.275 qpair failed and we were unable to recover it.
00:27:26.275 [2024-07-24 18:22:19.269855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.275 [2024-07-24 18:22:19.269865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.275 qpair failed and we were unable to recover it.
00:27:26.275 [2024-07-24 18:22:19.269957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.275 [2024-07-24 18:22:19.269967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.275 qpair failed and we were unable to recover it.
00:27:26.275 [2024-07-24 18:22:19.270060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.275 [2024-07-24 18:22:19.270070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.275 qpair failed and we were unable to recover it.
00:27:26.275 [2024-07-24 18:22:19.270188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.275 [2024-07-24 18:22:19.270198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.275 qpair failed and we were unable to recover it.
00:27:26.275 [2024-07-24 18:22:19.270284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.275 [2024-07-24 18:22:19.270294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.275 qpair failed and we were unable to recover it.
00:27:26.275 [2024-07-24 18:22:19.270369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.275 [2024-07-24 18:22:19.270379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.275 qpair failed and we were unable to recover it.
00:27:26.275 [2024-07-24 18:22:19.270463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.275 [2024-07-24 18:22:19.270473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.275 qpair failed and we were unable to recover it.
00:27:26.275 [2024-07-24 18:22:19.270564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.275 [2024-07-24 18:22:19.270575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.275 qpair failed and we were unable to recover it.
00:27:26.275 [2024-07-24 18:22:19.270723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.275 [2024-07-24 18:22:19.270733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.275 qpair failed and we were unable to recover it.
00:27:26.275 [2024-07-24 18:22:19.270945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.275 [2024-07-24 18:22:19.270955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.275 qpair failed and we were unable to recover it.
00:27:26.275 [2024-07-24 18:22:19.271040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.275 [2024-07-24 18:22:19.271050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.275 qpair failed and we were unable to recover it.
00:27:26.275 [2024-07-24 18:22:19.271123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.275 [2024-07-24 18:22:19.271133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.275 qpair failed and we were unable to recover it.
00:27:26.275 [2024-07-24 18:22:19.271299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.275 [2024-07-24 18:22:19.271308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.275 qpair failed and we were unable to recover it.
00:27:26.275 [2024-07-24 18:22:19.271393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.275 [2024-07-24 18:22:19.271404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.275 qpair failed and we were unable to recover it.
00:27:26.275 [2024-07-24 18:22:19.271477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.275 [2024-07-24 18:22:19.271487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.275 qpair failed and we were unable to recover it.
00:27:26.275 [2024-07-24 18:22:19.271584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.275 [2024-07-24 18:22:19.271595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.275 qpair failed and we were unable to recover it.
00:27:26.275 [2024-07-24 18:22:19.271676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.275 [2024-07-24 18:22:19.271699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.275 qpair failed and we were unable to recover it.
00:27:26.275 [2024-07-24 18:22:19.271776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.275 [2024-07-24 18:22:19.271787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.275 qpair failed and we were unable to recover it.
00:27:26.275 [2024-07-24 18:22:19.271883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.275 [2024-07-24 18:22:19.271892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.275 qpair failed and we were unable to recover it.
00:27:26.275 [2024-07-24 18:22:19.271984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.275 [2024-07-24 18:22:19.271994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.275 qpair failed and we were unable to recover it.
00:27:26.275 [2024-07-24 18:22:19.272091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.275 [2024-07-24 18:22:19.272101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.275 qpair failed and we were unable to recover it.
00:27:26.275 [2024-07-24 18:22:19.272202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.275 [2024-07-24 18:22:19.272212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.275 qpair failed and we were unable to recover it.
00:27:26.275 [2024-07-24 18:22:19.272292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.275 [2024-07-24 18:22:19.272302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.275 qpair failed and we were unable to recover it.
00:27:26.275 [2024-07-24 18:22:19.272448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.275 [2024-07-24 18:22:19.272458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.275 qpair failed and we were unable to recover it.
00:27:26.275 [2024-07-24 18:22:19.272613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.275 [2024-07-24 18:22:19.272624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.275 qpair failed and we were unable to recover it.
00:27:26.275 [2024-07-24 18:22:19.272707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.275 [2024-07-24 18:22:19.272717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.275 qpair failed and we were unable to recover it.
00:27:26.275 [2024-07-24 18:22:19.272809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.275 [2024-07-24 18:22:19.272819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.276 qpair failed and we were unable to recover it.
00:27:26.276 [2024-07-24 18:22:19.272921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.276 [2024-07-24 18:22:19.272931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.276 qpair failed and we were unable to recover it.
00:27:26.276 [2024-07-24 18:22:19.273114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.276 [2024-07-24 18:22:19.273124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.276 qpair failed and we were unable to recover it.
00:27:26.276 [2024-07-24 18:22:19.273208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.276 [2024-07-24 18:22:19.273218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.276 qpair failed and we were unable to recover it.
00:27:26.276 [2024-07-24 18:22:19.273423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.276 [2024-07-24 18:22:19.273433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.276 qpair failed and we were unable to recover it.
00:27:26.276 [2024-07-24 18:22:19.273663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.276 [2024-07-24 18:22:19.273674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.276 qpair failed and we were unable to recover it.
00:27:26.276 [2024-07-24 18:22:19.273817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.276 [2024-07-24 18:22:19.273827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.276 qpair failed and we were unable to recover it.
00:27:26.276 [2024-07-24 18:22:19.273919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.276 [2024-07-24 18:22:19.273929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.276 qpair failed and we were unable to recover it.
00:27:26.276 [2024-07-24 18:22:19.274012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.276 [2024-07-24 18:22:19.274021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.276 qpair failed and we were unable to recover it.
00:27:26.276 [2024-07-24 18:22:19.274226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.276 [2024-07-24 18:22:19.274236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.276 qpair failed and we were unable to recover it.
00:27:26.276 [2024-07-24 18:22:19.274379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.276 [2024-07-24 18:22:19.274389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.276 qpair failed and we were unable to recover it.
00:27:26.276 [2024-07-24 18:22:19.274462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.276 [2024-07-24 18:22:19.274472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.276 qpair failed and we were unable to recover it.
00:27:26.276 [2024-07-24 18:22:19.274550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.276 [2024-07-24 18:22:19.274560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.276 qpair failed and we were unable to recover it.
00:27:26.276 [2024-07-24 18:22:19.274778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.276 [2024-07-24 18:22:19.274790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.276 qpair failed and we were unable to recover it.
00:27:26.276 [2024-07-24 18:22:19.274882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.276 [2024-07-24 18:22:19.274892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.276 qpair failed and we were unable to recover it.
00:27:26.276 [2024-07-24 18:22:19.275043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.276 [2024-07-24 18:22:19.275053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.276 qpair failed and we were unable to recover it.
00:27:26.276 [2024-07-24 18:22:19.275140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.276 [2024-07-24 18:22:19.275151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.276 qpair failed and we were unable to recover it.
00:27:26.276 [2024-07-24 18:22:19.275235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.276 [2024-07-24 18:22:19.275245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.276 qpair failed and we were unable to recover it.
00:27:26.276 [2024-07-24 18:22:19.275351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.276 [2024-07-24 18:22:19.275362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.276 qpair failed and we were unable to recover it.
00:27:26.276 [2024-07-24 18:22:19.275520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.276 [2024-07-24 18:22:19.275531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.276 qpair failed and we were unable to recover it.
00:27:26.276 [2024-07-24 18:22:19.275602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.276 [2024-07-24 18:22:19.275612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.276 qpair failed and we were unable to recover it.
00:27:26.276 [2024-07-24 18:22:19.275697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.276 [2024-07-24 18:22:19.275707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.276 qpair failed and we were unable to recover it.
00:27:26.276 [2024-07-24 18:22:19.275795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.276 [2024-07-24 18:22:19.275805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.276 qpair failed and we were unable to recover it.
00:27:26.276 [2024-07-24 18:22:19.275888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.276 [2024-07-24 18:22:19.275898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.276 qpair failed and we were unable to recover it.
00:27:26.276 [2024-07-24 18:22:19.275974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.276 [2024-07-24 18:22:19.275984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.276 qpair failed and we were unable to recover it.
00:27:26.276 [2024-07-24 18:22:19.276128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.276 [2024-07-24 18:22:19.276138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.276 qpair failed and we were unable to recover it.
00:27:26.276 [2024-07-24 18:22:19.276214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.276 [2024-07-24 18:22:19.276224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.276 qpair failed and we were unable to recover it.
00:27:26.276 [2024-07-24 18:22:19.276385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.276 [2024-07-24 18:22:19.276395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.276 qpair failed and we were unable to recover it.
00:27:26.276 [2024-07-24 18:22:19.276481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.276 [2024-07-24 18:22:19.276506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.276 qpair failed and we were unable to recover it.
00:27:26.276 [2024-07-24 18:22:19.276655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.276 [2024-07-24 18:22:19.276665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.276 qpair failed and we were unable to recover it.
00:27:26.276 [2024-07-24 18:22:19.276760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.276 [2024-07-24 18:22:19.276770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.276 qpair failed and we were unable to recover it.
00:27:26.276 [2024-07-24 18:22:19.276867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.276 [2024-07-24 18:22:19.276878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.276 qpair failed and we were unable to recover it.
00:27:26.276 [2024-07-24 18:22:19.276962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.276 [2024-07-24 18:22:19.276972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.276 qpair failed and we were unable to recover it.
00:27:26.276 [2024-07-24 18:22:19.277251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.276 [2024-07-24 18:22:19.277261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.276 qpair failed and we were unable to recover it.
00:27:26.276 [2024-07-24 18:22:19.277367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.276 [2024-07-24 18:22:19.277377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.276 qpair failed and we were unable to recover it.
00:27:26.276 [2024-07-24 18:22:19.277465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.276 [2024-07-24 18:22:19.277475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.276 qpair failed and we were unable to recover it.
00:27:26.276 [2024-07-24 18:22:19.277598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.276 [2024-07-24 18:22:19.277609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.276 qpair failed and we were unable to recover it.
00:27:26.276 [2024-07-24 18:22:19.277757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.277 [2024-07-24 18:22:19.277768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.277 qpair failed and we were unable to recover it.
00:27:26.277 [2024-07-24 18:22:19.277841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.277 [2024-07-24 18:22:19.277851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.277 qpair failed and we were unable to recover it.
00:27:26.277 [2024-07-24 18:22:19.277932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.277 [2024-07-24 18:22:19.277942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.277 qpair failed and we were unable to recover it.
00:27:26.277 [2024-07-24 18:22:19.278020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.277 [2024-07-24 18:22:19.278030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.277 qpair failed and we were unable to recover it.
00:27:26.277 [2024-07-24 18:22:19.278119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.277 [2024-07-24 18:22:19.278129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.277 qpair failed and we were unable to recover it.
00:27:26.277 [2024-07-24 18:22:19.278219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.277 [2024-07-24 18:22:19.278229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.277 qpair failed and we were unable to recover it.
00:27:26.277 [2024-07-24 18:22:19.278301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.277 [2024-07-24 18:22:19.278311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.277 qpair failed and we were unable to recover it.
00:27:26.277 [2024-07-24 18:22:19.278536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.277 [2024-07-24 18:22:19.278547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.277 qpair failed and we were unable to recover it.
00:27:26.277 [2024-07-24 18:22:19.278638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.277 [2024-07-24 18:22:19.278648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.277 qpair failed and we were unable to recover it.
00:27:26.277 [2024-07-24 18:22:19.278798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.277 [2024-07-24 18:22:19.278808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.277 qpair failed and we were unable to recover it.
00:27:26.277 [2024-07-24 18:22:19.278887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.277 [2024-07-24 18:22:19.278897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.277 qpair failed and we were unable to recover it.
00:27:26.277 [2024-07-24 18:22:19.278990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.277 [2024-07-24 18:22:19.279001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.277 qpair failed and we were unable to recover it.
00:27:26.277 [2024-07-24 18:22:19.279093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.277 [2024-07-24 18:22:19.279104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.277 qpair failed and we were unable to recover it.
00:27:26.277 [2024-07-24 18:22:19.279315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.277 [2024-07-24 18:22:19.279326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.277 qpair failed and we were unable to recover it.
00:27:26.277 [2024-07-24 18:22:19.279418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.277 [2024-07-24 18:22:19.279435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.277 qpair failed and we were unable to recover it.
00:27:26.277 [2024-07-24 18:22:19.279515] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization...
00:27:26.277 [2024-07-24 18:22:19.279571] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:27:26.277 [2024-07-24 18:22:19.279667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.277 [2024-07-24 18:22:19.279684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.277 qpair failed and we were unable to recover it.
00:27:26.277 [2024-07-24 18:22:19.279773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.277 [2024-07-24 18:22:19.279782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.277 qpair failed and we were unable to recover it.
00:27:26.277 [2024-07-24 18:22:19.279865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.277 [2024-07-24 18:22:19.279874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.277 qpair failed and we were unable to recover it.
00:27:26.277 [2024-07-24 18:22:19.279968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.277 [2024-07-24 18:22:19.279976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.277 qpair failed and we were unable to recover it.
00:27:26.277 [2024-07-24 18:22:19.280068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.277 [2024-07-24 18:22:19.280077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.277 qpair failed and we were unable to recover it.
00:27:26.277 [2024-07-24 18:22:19.280178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.277 [2024-07-24 18:22:19.280187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.277 qpair failed and we were unable to recover it.
00:27:26.277 [2024-07-24 18:22:19.280272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.277 [2024-07-24 18:22:19.280283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.277 qpair failed and we were unable to recover it.
00:27:26.277 [2024-07-24 18:22:19.280370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.277 [2024-07-24 18:22:19.280380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.277 qpair failed and we were unable to recover it.
00:27:26.277 [2024-07-24 18:22:19.280476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.277 [2024-07-24 18:22:19.280486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.277 qpair failed and we were unable to recover it.
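The bracketed line above shows the DPDK EAL arguments the restarting nvmf target process was launched with: core mask 0xF0, a fixed base virtual address, and a per-instance hugepage file prefix. As a side note for readers of this log, an SPDK application does not pass these strings itself; it fills in struct spdk_env_opts and the env layer translates the fields into the EAL argument line logged here. A minimal sketch of that public spdk/env.h API (illustrative only, not the autotest's actual code; the exact opts-to-EAL mapping can vary between SPDK versions):

```c
/* Minimal sketch: how an SPDK app ends up with an EAL parameter line like
 * the one logged above. Illustrative, not the test's actual code. */
#include "spdk/env.h"
#include <stdio.h>

int main(void)
{
	struct spdk_env_opts opts;

	spdk_env_opts_init(&opts);
	opts.name = "nvmf";                     /* process name shown in the EAL line */
	opts.core_mask = "0xF0";                /* -> -c 0xF0 */
	opts.shm_id = 0;                        /* -> --file-prefix=spdk0 --proc-type=auto (assumed mapping) */
	opts.base_virtaddr = 0x200000000000ULL; /* -> --base-virtaddr=0x200000000000 */

	if (spdk_env_init(&opts) < 0) {
		fprintf(stderr, "Unable to initialize SPDK env\n");
		return 1;
	}

	/* ... build the nvmf target, create the TCP transport, poll ... */

	spdk_env_fini();
	return 0;
}
```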
00:27:26.277 [2024-07-24 18:22:19.280591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.277 [2024-07-24 18:22:19.280602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.277 qpair failed and we were unable to recover it.
00:27:26.277 [2024-07-24 18:22:19.280751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.277 [2024-07-24 18:22:19.280762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.277 qpair failed and we were unable to recover it.
00:27:26.277 [2024-07-24 18:22:19.280866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.277 [2024-07-24 18:22:19.280876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.277 qpair failed and we were unable to recover it.
00:27:26.277 [2024-07-24 18:22:19.280963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.277 [2024-07-24 18:22:19.280973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.277 qpair failed and we were unable to recover it.
00:27:26.277 [2024-07-24 18:22:19.281088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.277 [2024-07-24 18:22:19.281098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.277 qpair failed and we were unable to recover it.
00:27:26.277 [2024-07-24 18:22:19.281171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.277 [2024-07-24 18:22:19.281182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.277 qpair failed and we were unable to recover it.
00:27:26.277 [2024-07-24 18:22:19.281279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.277 [2024-07-24 18:22:19.281290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.277 qpair failed and we were unable to recover it.
00:27:26.277 [2024-07-24 18:22:19.281377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.277 [2024-07-24 18:22:19.281387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.277 qpair failed and we were unable to recover it.
00:27:26.277 [2024-07-24 18:22:19.281464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.277 [2024-07-24 18:22:19.281476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.277 qpair failed and we were unable to recover it.
00:27:26.277 [2024-07-24 18:22:19.281566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.277 [2024-07-24 18:22:19.281577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.277 qpair failed and we were unable to recover it.
00:27:26.277 [2024-07-24 18:22:19.281656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.277 [2024-07-24 18:22:19.281667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.277 qpair failed and we were unable to recover it.
00:27:26.277 [2024-07-24 18:22:19.281886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.277 [2024-07-24 18:22:19.281897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.277 qpair failed and we were unable to recover it.
00:27:26.277 [2024-07-24 18:22:19.282076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.277 [2024-07-24 18:22:19.282087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.277 qpair failed and we were unable to recover it.
00:27:26.277 [2024-07-24 18:22:19.282178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.278 [2024-07-24 18:22:19.282189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.278 qpair failed and we were unable to recover it.
00:27:26.278 [2024-07-24 18:22:19.282264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.278 [2024-07-24 18:22:19.282275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.278 qpair failed and we were unable to recover it.
00:27:26.278 [2024-07-24 18:22:19.282360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.278 [2024-07-24 18:22:19.282371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.278 qpair failed and we were unable to recover it.
00:27:26.278 [2024-07-24 18:22:19.282463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.278 [2024-07-24 18:22:19.282475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.278 qpair failed and we were unable to recover it.
00:27:26.278 [2024-07-24 18:22:19.282561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.278 [2024-07-24 18:22:19.282580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.278 qpair failed and we were unable to recover it.
00:27:26.278 [2024-07-24 18:22:19.282733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.278 [2024-07-24 18:22:19.282744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.278 qpair failed and we were unable to recover it.
00:27:26.278 [2024-07-24 18:22:19.282832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.278 [2024-07-24 18:22:19.282844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.278 qpair failed and we were unable to recover it.
00:27:26.278 [2024-07-24 18:22:19.282928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.278 [2024-07-24 18:22:19.282938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.278 qpair failed and we were unable to recover it.
00:27:26.278 [2024-07-24 18:22:19.283144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.278 [2024-07-24 18:22:19.283155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.278 qpair failed and we were unable to recover it.
00:27:26.278 [2024-07-24 18:22:19.283263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.278 [2024-07-24 18:22:19.283273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.278 qpair failed and we were unable to recover it.
00:27:26.278 [2024-07-24 18:22:19.283453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.278 [2024-07-24 18:22:19.283464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.278 qpair failed and we were unable to recover it.
00:27:26.278 [2024-07-24 18:22:19.283549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.278 [2024-07-24 18:22:19.283560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.278 qpair failed and we were unable to recover it.
00:27:26.278 [2024-07-24 18:22:19.283709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.278 [2024-07-24 18:22:19.283720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.278 qpair failed and we were unable to recover it.
00:27:26.278 [2024-07-24 18:22:19.283815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.278 [2024-07-24 18:22:19.283825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.278 qpair failed and we were unable to recover it.
00:27:26.278 [2024-07-24 18:22:19.283915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.278 [2024-07-24 18:22:19.283926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.278 qpair failed and we were unable to recover it.
00:27:26.278 [2024-07-24 18:22:19.284005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.278 [2024-07-24 18:22:19.284015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.278 qpair failed and we were unable to recover it.
00:27:26.278 [2024-07-24 18:22:19.284093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.278 [2024-07-24 18:22:19.284105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.278 qpair failed and we were unable to recover it.
00:27:26.278 [2024-07-24 18:22:19.284315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.278 [2024-07-24 18:22:19.284326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.278 qpair failed and we were unable to recover it.
00:27:26.278 [2024-07-24 18:22:19.284415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.278 [2024-07-24 18:22:19.284428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.278 qpair failed and we were unable to recover it.
00:27:26.278 [2024-07-24 18:22:19.284583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.278 [2024-07-24 18:22:19.284594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.278 qpair failed and we were unable to recover it.
00:27:26.278 [2024-07-24 18:22:19.284748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.278 [2024-07-24 18:22:19.284758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.278 qpair failed and we were unable to recover it.
00:27:26.278 [2024-07-24 18:22:19.284828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.278 [2024-07-24 18:22:19.284838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.278 qpair failed and we were unable to recover it.
00:27:26.278 [2024-07-24 18:22:19.284924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.278 [2024-07-24 18:22:19.284935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.278 qpair failed and we were unable to recover it.
00:27:26.278 [2024-07-24 18:22:19.285013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.278 [2024-07-24 18:22:19.285023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.278 qpair failed and we were unable to recover it.
00:27:26.278 [2024-07-24 18:22:19.285103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.278 [2024-07-24 18:22:19.285113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.278 qpair failed and we were unable to recover it.
00:27:26.278 [2024-07-24 18:22:19.285194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.278 [2024-07-24 18:22:19.285204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.278 qpair failed and we were unable to recover it.
00:27:26.278 [2024-07-24 18:22:19.285288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.278 [2024-07-24 18:22:19.285298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.278 qpair failed and we were unable to recover it.
00:27:26.278 [2024-07-24 18:22:19.285396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.278 [2024-07-24 18:22:19.285406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.278 qpair failed and we were unable to recover it. 00:27:26.278 [2024-07-24 18:22:19.285499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.278 [2024-07-24 18:22:19.285510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.278 qpair failed and we were unable to recover it. 00:27:26.278 [2024-07-24 18:22:19.285673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.278 [2024-07-24 18:22:19.285683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.278 qpair failed and we were unable to recover it. 00:27:26.278 [2024-07-24 18:22:19.285759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.278 [2024-07-24 18:22:19.285769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.278 qpair failed and we were unable to recover it. 00:27:26.278 [2024-07-24 18:22:19.285983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.278 [2024-07-24 18:22:19.285993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.278 qpair failed and we were unable to recover it. 00:27:26.278 [2024-07-24 18:22:19.286088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.278 [2024-07-24 18:22:19.286099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.278 qpair failed and we were unable to recover it. 00:27:26.278 [2024-07-24 18:22:19.286186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.278 [2024-07-24 18:22:19.286197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.278 qpair failed and we were unable to recover it. 00:27:26.278 [2024-07-24 18:22:19.286277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.278 [2024-07-24 18:22:19.286287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.278 qpair failed and we were unable to recover it. 00:27:26.278 [2024-07-24 18:22:19.286378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.278 [2024-07-24 18:22:19.286388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.278 qpair failed and we were unable to recover it. 00:27:26.278 [2024-07-24 18:22:19.286468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.278 [2024-07-24 18:22:19.286478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.278 qpair failed and we were unable to recover it. 
00:27:26.278 [2024-07-24 18:22:19.286583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.278 [2024-07-24 18:22:19.286594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.278 qpair failed and we were unable to recover it. 00:27:26.279 [2024-07-24 18:22:19.286668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.279 [2024-07-24 18:22:19.286679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.279 qpair failed and we were unable to recover it. 00:27:26.279 [2024-07-24 18:22:19.286762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.279 [2024-07-24 18:22:19.286773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.279 qpair failed and we were unable to recover it. 00:27:26.279 [2024-07-24 18:22:19.286856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.279 [2024-07-24 18:22:19.286867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.279 qpair failed and we were unable to recover it. 00:27:26.279 [2024-07-24 18:22:19.286946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.279 [2024-07-24 18:22:19.286957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.279 qpair failed and we were unable to recover it. 00:27:26.279 [2024-07-24 18:22:19.287042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.279 [2024-07-24 18:22:19.287053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.279 qpair failed and we were unable to recover it. 00:27:26.279 [2024-07-24 18:22:19.287126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.279 [2024-07-24 18:22:19.287136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.279 qpair failed and we were unable to recover it. 00:27:26.279 [2024-07-24 18:22:19.287289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.279 [2024-07-24 18:22:19.287299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.279 qpair failed and we were unable to recover it. 00:27:26.279 [2024-07-24 18:22:19.287396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.279 [2024-07-24 18:22:19.287406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.279 qpair failed and we were unable to recover it. 00:27:26.279 [2024-07-24 18:22:19.287525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.279 [2024-07-24 18:22:19.287536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.279 qpair failed and we were unable to recover it. 
00:27:26.279 [2024-07-24 18:22:19.287682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.279 [2024-07-24 18:22:19.287692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.279 qpair failed and we were unable to recover it. 00:27:26.279 [2024-07-24 18:22:19.287793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.279 [2024-07-24 18:22:19.287804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.279 qpair failed and we were unable to recover it. 00:27:26.279 [2024-07-24 18:22:19.287900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.279 [2024-07-24 18:22:19.287909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.279 qpair failed and we were unable to recover it. 00:27:26.279 [2024-07-24 18:22:19.287997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.279 [2024-07-24 18:22:19.288006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.279 qpair failed and we were unable to recover it. 00:27:26.279 [2024-07-24 18:22:19.288096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.279 [2024-07-24 18:22:19.288107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.279 qpair failed and we were unable to recover it. 00:27:26.279 [2024-07-24 18:22:19.288203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.279 [2024-07-24 18:22:19.288213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.279 qpair failed and we were unable to recover it. 00:27:26.279 [2024-07-24 18:22:19.288305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.279 [2024-07-24 18:22:19.288316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.279 qpair failed and we were unable to recover it. 00:27:26.279 [2024-07-24 18:22:19.288393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.279 [2024-07-24 18:22:19.288404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.279 qpair failed and we were unable to recover it. 00:27:26.279 [2024-07-24 18:22:19.288496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.279 [2024-07-24 18:22:19.288507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.279 qpair failed and we were unable to recover it. 00:27:26.279 [2024-07-24 18:22:19.288713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.279 [2024-07-24 18:22:19.288725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.279 qpair failed and we were unable to recover it. 
00:27:26.279 [2024-07-24 18:22:19.288868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.279 [2024-07-24 18:22:19.288878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.279 qpair failed and we were unable to recover it. 00:27:26.279 [2024-07-24 18:22:19.288974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.279 [2024-07-24 18:22:19.288989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.279 qpair failed and we were unable to recover it. 00:27:26.279 [2024-07-24 18:22:19.289148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.279 [2024-07-24 18:22:19.289160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.279 qpair failed and we were unable to recover it. 00:27:26.279 [2024-07-24 18:22:19.289255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.279 [2024-07-24 18:22:19.289265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.279 qpair failed and we were unable to recover it. 00:27:26.279 [2024-07-24 18:22:19.289345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.279 [2024-07-24 18:22:19.289356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.279 qpair failed and we were unable to recover it. 00:27:26.279 [2024-07-24 18:22:19.289443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.279 [2024-07-24 18:22:19.289453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.279 qpair failed and we were unable to recover it. 00:27:26.279 [2024-07-24 18:22:19.289527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.279 [2024-07-24 18:22:19.289537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.279 qpair failed and we were unable to recover it. 00:27:26.279 [2024-07-24 18:22:19.289616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.279 [2024-07-24 18:22:19.289626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.279 qpair failed and we were unable to recover it. 00:27:26.279 [2024-07-24 18:22:19.289784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.279 [2024-07-24 18:22:19.289794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.279 qpair failed and we were unable to recover it. 00:27:26.279 [2024-07-24 18:22:19.289950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.279 [2024-07-24 18:22:19.289961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.279 qpair failed and we were unable to recover it. 
00:27:26.279 [2024-07-24 18:22:19.290102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.279 [2024-07-24 18:22:19.290114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.279 qpair failed and we were unable to recover it. 00:27:26.279 [2024-07-24 18:22:19.290287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.279 [2024-07-24 18:22:19.290297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.279 qpair failed and we were unable to recover it. 00:27:26.279 [2024-07-24 18:22:19.290387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.279 [2024-07-24 18:22:19.290397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.279 qpair failed and we were unable to recover it. 00:27:26.279 [2024-07-24 18:22:19.290610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.279 [2024-07-24 18:22:19.290622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.279 qpair failed and we were unable to recover it. 00:27:26.279 [2024-07-24 18:22:19.290766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.279 [2024-07-24 18:22:19.290776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.279 qpair failed and we were unable to recover it. 00:27:26.279 [2024-07-24 18:22:19.290878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.279 [2024-07-24 18:22:19.290888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.279 qpair failed and we were unable to recover it. 00:27:26.279 [2024-07-24 18:22:19.290971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.279 [2024-07-24 18:22:19.290981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.279 qpair failed and we were unable to recover it. 00:27:26.279 [2024-07-24 18:22:19.291083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.279 [2024-07-24 18:22:19.291094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.279 qpair failed and we were unable to recover it. 00:27:26.279 [2024-07-24 18:22:19.291164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.279 [2024-07-24 18:22:19.291175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.279 qpair failed and we were unable to recover it. 00:27:26.280 [2024-07-24 18:22:19.291285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.280 [2024-07-24 18:22:19.291296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.280 qpair failed and we were unable to recover it. 
00:27:26.280 [2024-07-24 18:22:19.291376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.280 [2024-07-24 18:22:19.291386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.280 qpair failed and we were unable to recover it. 00:27:26.280 [2024-07-24 18:22:19.291462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.280 [2024-07-24 18:22:19.291473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.280 qpair failed and we were unable to recover it. 00:27:26.280 [2024-07-24 18:22:19.291589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.280 [2024-07-24 18:22:19.291600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.280 qpair failed and we were unable to recover it. 00:27:26.280 [2024-07-24 18:22:19.291683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.280 [2024-07-24 18:22:19.291693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.280 qpair failed and we were unable to recover it. 00:27:26.280 [2024-07-24 18:22:19.291776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.280 [2024-07-24 18:22:19.291787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.280 qpair failed and we were unable to recover it. 00:27:26.280 [2024-07-24 18:22:19.291866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.280 [2024-07-24 18:22:19.291876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.280 qpair failed and we were unable to recover it. 00:27:26.280 [2024-07-24 18:22:19.291950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.280 [2024-07-24 18:22:19.291960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.280 qpair failed and we were unable to recover it. 00:27:26.280 [2024-07-24 18:22:19.292034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.280 [2024-07-24 18:22:19.292045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.280 qpair failed and we were unable to recover it. 00:27:26.280 [2024-07-24 18:22:19.292142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.280 [2024-07-24 18:22:19.292152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.280 qpair failed and we were unable to recover it. 00:27:26.280 [2024-07-24 18:22:19.292236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.280 [2024-07-24 18:22:19.292247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.280 qpair failed and we were unable to recover it. 
00:27:26.280 [2024-07-24 18:22:19.292323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.280 [2024-07-24 18:22:19.292333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.280 qpair failed and we were unable to recover it. 00:27:26.280 [2024-07-24 18:22:19.292434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.280 [2024-07-24 18:22:19.292444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.280 qpair failed and we were unable to recover it. 00:27:26.280 [2024-07-24 18:22:19.292600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.280 [2024-07-24 18:22:19.292611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.280 qpair failed and we were unable to recover it. 00:27:26.280 [2024-07-24 18:22:19.292682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.280 [2024-07-24 18:22:19.292692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.280 qpair failed and we were unable to recover it. 00:27:26.280 [2024-07-24 18:22:19.292765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.280 [2024-07-24 18:22:19.292779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.280 qpair failed and we were unable to recover it. 00:27:26.280 [2024-07-24 18:22:19.292882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.280 [2024-07-24 18:22:19.292894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.280 qpair failed and we were unable to recover it. 00:27:26.280 [2024-07-24 18:22:19.292971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.280 [2024-07-24 18:22:19.292981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.280 qpair failed and we were unable to recover it. 00:27:26.280 [2024-07-24 18:22:19.293072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.280 [2024-07-24 18:22:19.293082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.280 qpair failed and we were unable to recover it. 00:27:26.280 [2024-07-24 18:22:19.293169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.280 [2024-07-24 18:22:19.293179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.280 qpair failed and we were unable to recover it. 00:27:26.280 [2024-07-24 18:22:19.293281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.280 [2024-07-24 18:22:19.293292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.280 qpair failed and we were unable to recover it. 
00:27:26.280 [2024-07-24 18:22:19.293433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.280 [2024-07-24 18:22:19.293443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.280 qpair failed and we were unable to recover it. 00:27:26.280 [2024-07-24 18:22:19.293545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.280 [2024-07-24 18:22:19.293559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.280 qpair failed and we were unable to recover it. 00:27:26.280 [2024-07-24 18:22:19.293649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.280 [2024-07-24 18:22:19.293659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.280 qpair failed and we were unable to recover it. 00:27:26.280 [2024-07-24 18:22:19.293755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.280 [2024-07-24 18:22:19.293764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.280 qpair failed and we were unable to recover it. 00:27:26.280 [2024-07-24 18:22:19.293853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.280 [2024-07-24 18:22:19.293863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.280 qpair failed and we were unable to recover it. 00:27:26.280 [2024-07-24 18:22:19.293936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.280 [2024-07-24 18:22:19.293946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.280 qpair failed and we were unable to recover it. 00:27:26.280 [2024-07-24 18:22:19.294096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.280 [2024-07-24 18:22:19.294105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.280 qpair failed and we were unable to recover it. 00:27:26.280 [2024-07-24 18:22:19.294188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.280 [2024-07-24 18:22:19.294198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.280 qpair failed and we were unable to recover it. 00:27:26.280 [2024-07-24 18:22:19.294350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.280 [2024-07-24 18:22:19.294361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.280 qpair failed and we were unable to recover it. 00:27:26.280 [2024-07-24 18:22:19.294505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.280 [2024-07-24 18:22:19.294516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.280 qpair failed and we were unable to recover it. 
00:27:26.280 [2024-07-24 18:22:19.294666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.280 [2024-07-24 18:22:19.294676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.280 qpair failed and we were unable to recover it. 00:27:26.280 [2024-07-24 18:22:19.294853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.281 [2024-07-24 18:22:19.294864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.281 qpair failed and we were unable to recover it. 00:27:26.281 [2024-07-24 18:22:19.295016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.281 [2024-07-24 18:22:19.295026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.281 qpair failed and we were unable to recover it. 00:27:26.281 [2024-07-24 18:22:19.295119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.281 [2024-07-24 18:22:19.295129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.281 qpair failed and we were unable to recover it. 00:27:26.281 [2024-07-24 18:22:19.295217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.281 [2024-07-24 18:22:19.295228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.281 qpair failed and we were unable to recover it. 00:27:26.281 [2024-07-24 18:22:19.295385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.281 [2024-07-24 18:22:19.295395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.281 qpair failed and we were unable to recover it. 00:27:26.281 [2024-07-24 18:22:19.295472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.281 [2024-07-24 18:22:19.295482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.281 qpair failed and we were unable to recover it. 00:27:26.281 [2024-07-24 18:22:19.295570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.281 [2024-07-24 18:22:19.295581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.281 qpair failed and we were unable to recover it. 00:27:26.281 [2024-07-24 18:22:19.295667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.281 [2024-07-24 18:22:19.295677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.281 qpair failed and we were unable to recover it. 00:27:26.281 [2024-07-24 18:22:19.295834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.281 [2024-07-24 18:22:19.295844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.281 qpair failed and we were unable to recover it. 
00:27:26.281 [2024-07-24 18:22:19.295923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.281 [2024-07-24 18:22:19.295933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.281 qpair failed and we were unable to recover it.
[... the same failure repeats for tqpair=0x7f783c000b90 through 18:22:19.296462 ...]
00:27:26.565 [2024-07-24 18:22:19.296571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.565 [2024-07-24 18:22:19.296598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:26.565 qpair failed and we were unable to recover it.
[... the same failure repeats for tqpair=0x15ddf30, addr=10.0.0.2, port=4420 from 18:22:19.296696 through 18:22:19.298320 ...]
00:27:26.566 [2024-07-24 18:22:19.298413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.566 [2024-07-24 18:22:19.298425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.566 qpair failed and we were unable to recover it.
[... the same failure repeats for tqpair=0x7f783c000b90, addr=10.0.0.2, port=4420 from 18:22:19.298516 through 18:22:19.307240 ...]
00:27:26.568 [2024-07-24 18:22:19.307331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.568 [2024-07-24 18:22:19.307341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.568 qpair failed and we were unable to recover it. 00:27:26.568 [2024-07-24 18:22:19.307431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.568 [2024-07-24 18:22:19.307440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.568 qpair failed and we were unable to recover it. 00:27:26.568 [2024-07-24 18:22:19.307589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.568 [2024-07-24 18:22:19.307600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.568 qpair failed and we were unable to recover it. 00:27:26.568 [2024-07-24 18:22:19.307686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.568 [2024-07-24 18:22:19.307695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.568 qpair failed and we were unable to recover it. 00:27:26.568 [2024-07-24 18:22:19.307780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.568 [2024-07-24 18:22:19.307790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.568 qpair failed and we were unable to recover it. 00:27:26.568 [2024-07-24 18:22:19.307862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.568 [2024-07-24 18:22:19.307872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.568 qpair failed and we were unable to recover it. 00:27:26.568 [2024-07-24 18:22:19.307959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.568 [2024-07-24 18:22:19.307969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.568 qpair failed and we were unable to recover it. 00:27:26.568 [2024-07-24 18:22:19.308111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.568 [2024-07-24 18:22:19.308121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.568 qpair failed and we were unable to recover it. 00:27:26.568 [2024-07-24 18:22:19.308191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.568 [2024-07-24 18:22:19.308201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.568 qpair failed and we were unable to recover it. 00:27:26.568 [2024-07-24 18:22:19.308278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.568 [2024-07-24 18:22:19.308288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.568 qpair failed and we were unable to recover it. 
00:27:26.568 [2024-07-24 18:22:19.308430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.568 [2024-07-24 18:22:19.308439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.568 qpair failed and we were unable to recover it. 00:27:26.568 [2024-07-24 18:22:19.308524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.568 [2024-07-24 18:22:19.308535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.568 qpair failed and we were unable to recover it. 00:27:26.568 [2024-07-24 18:22:19.308614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.568 [2024-07-24 18:22:19.308624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.568 qpair failed and we were unable to recover it. 00:27:26.568 [2024-07-24 18:22:19.308708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.568 [2024-07-24 18:22:19.308718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.568 qpair failed and we were unable to recover it. 00:27:26.568 [2024-07-24 18:22:19.308786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.568 [2024-07-24 18:22:19.308797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.568 qpair failed and we were unable to recover it. 00:27:26.568 [2024-07-24 18:22:19.308886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.568 [2024-07-24 18:22:19.308896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.568 qpair failed and we were unable to recover it. 00:27:26.568 [2024-07-24 18:22:19.308972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.568 [2024-07-24 18:22:19.308982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.568 qpair failed and we were unable to recover it. 00:27:26.568 [2024-07-24 18:22:19.309062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.568 [2024-07-24 18:22:19.309072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.568 qpair failed and we were unable to recover it. 00:27:26.568 [2024-07-24 18:22:19.309146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.568 [2024-07-24 18:22:19.309156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.568 qpair failed and we were unable to recover it. 00:27:26.568 [2024-07-24 18:22:19.309244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.568 [2024-07-24 18:22:19.309254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.568 qpair failed and we were unable to recover it. 
00:27:26.568 [2024-07-24 18:22:19.309332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.568 [2024-07-24 18:22:19.309343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.569 qpair failed and we were unable to recover it. 00:27:26.569 [2024-07-24 18:22:19.309416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.569 [2024-07-24 18:22:19.309426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.569 qpair failed and we were unable to recover it. 00:27:26.569 [2024-07-24 18:22:19.309508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.569 [2024-07-24 18:22:19.309519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.569 qpair failed and we were unable to recover it. 00:27:26.569 [2024-07-24 18:22:19.309593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.569 [2024-07-24 18:22:19.309603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.569 qpair failed and we were unable to recover it. 00:27:26.569 [2024-07-24 18:22:19.309682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.569 [2024-07-24 18:22:19.309692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.569 qpair failed and we were unable to recover it. 00:27:26.569 [2024-07-24 18:22:19.309770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.569 [2024-07-24 18:22:19.309785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.569 qpair failed and we were unable to recover it. 00:27:26.569 [2024-07-24 18:22:19.309929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.569 [2024-07-24 18:22:19.309939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.569 qpair failed and we were unable to recover it. 00:27:26.569 [2024-07-24 18:22:19.310013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.569 [2024-07-24 18:22:19.310022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.569 qpair failed and we were unable to recover it. 00:27:26.569 [2024-07-24 18:22:19.310104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.569 [2024-07-24 18:22:19.310117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.569 qpair failed and we were unable to recover it. 00:27:26.569 [2024-07-24 18:22:19.310200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.569 [2024-07-24 18:22:19.310211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.569 qpair failed and we were unable to recover it. 
00:27:26.569 [2024-07-24 18:22:19.310288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.569 [2024-07-24 18:22:19.310298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.569 qpair failed and we were unable to recover it. 00:27:26.569 [2024-07-24 18:22:19.310378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.569 [2024-07-24 18:22:19.310389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.569 qpair failed and we were unable to recover it. 00:27:26.569 [2024-07-24 18:22:19.310539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.569 [2024-07-24 18:22:19.310549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.569 qpair failed and we were unable to recover it. 00:27:26.569 [2024-07-24 18:22:19.310613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.569 [2024-07-24 18:22:19.310624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.569 qpair failed and we were unable to recover it. 00:27:26.569 [2024-07-24 18:22:19.310698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.569 [2024-07-24 18:22:19.310708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.569 qpair failed and we were unable to recover it. 00:27:26.569 [2024-07-24 18:22:19.310876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.569 [2024-07-24 18:22:19.310887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.569 qpair failed and we were unable to recover it. 00:27:26.569 [2024-07-24 18:22:19.311095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.569 [2024-07-24 18:22:19.311106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.569 qpair failed and we were unable to recover it. 00:27:26.569 [2024-07-24 18:22:19.311246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.569 [2024-07-24 18:22:19.311256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.569 qpair failed and we were unable to recover it. 00:27:26.569 [2024-07-24 18:22:19.311353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.569 [2024-07-24 18:22:19.311363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.569 qpair failed and we were unable to recover it. 00:27:26.569 [2024-07-24 18:22:19.311450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.569 [2024-07-24 18:22:19.311459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.569 qpair failed and we were unable to recover it. 
00:27:26.569 [2024-07-24 18:22:19.311546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.569 [2024-07-24 18:22:19.311556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.569 qpair failed and we were unable to recover it. 00:27:26.569 [2024-07-24 18:22:19.311630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.569 [2024-07-24 18:22:19.311640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.569 qpair failed and we were unable to recover it. 00:27:26.569 [2024-07-24 18:22:19.311721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.569 [2024-07-24 18:22:19.311732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.569 qpair failed and we were unable to recover it. 00:27:26.569 [2024-07-24 18:22:19.311817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.569 [2024-07-24 18:22:19.311827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.569 qpair failed and we were unable to recover it. 00:27:26.569 [2024-07-24 18:22:19.311916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.569 [2024-07-24 18:22:19.311925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.569 qpair failed and we were unable to recover it. 00:27:26.569 [2024-07-24 18:22:19.312001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.569 [2024-07-24 18:22:19.312010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.569 qpair failed and we were unable to recover it. 00:27:26.569 [2024-07-24 18:22:19.312087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.569 [2024-07-24 18:22:19.312097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.569 qpair failed and we were unable to recover it. 00:27:26.569 [2024-07-24 18:22:19.312171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.569 [2024-07-24 18:22:19.312180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.569 qpair failed and we were unable to recover it. 00:27:26.569 [2024-07-24 18:22:19.312262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.569 [2024-07-24 18:22:19.312271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.569 qpair failed and we were unable to recover it. 00:27:26.569 [2024-07-24 18:22:19.312363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.569 [2024-07-24 18:22:19.312372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.569 qpair failed and we were unable to recover it. 
00:27:26.569 [2024-07-24 18:22:19.312445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.569 [2024-07-24 18:22:19.312455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.569 qpair failed and we were unable to recover it. 00:27:26.569 [2024-07-24 18:22:19.312541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.569 [2024-07-24 18:22:19.312551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.569 qpair failed and we were unable to recover it. 00:27:26.569 [2024-07-24 18:22:19.312693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.569 [2024-07-24 18:22:19.312704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.569 qpair failed and we were unable to recover it. 00:27:26.569 [2024-07-24 18:22:19.312800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.569 [2024-07-24 18:22:19.312810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.569 qpair failed and we were unable to recover it. 00:27:26.569 [2024-07-24 18:22:19.312898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.569 [2024-07-24 18:22:19.312907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.569 qpair failed and we were unable to recover it. 00:27:26.569 [2024-07-24 18:22:19.312984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.569 [2024-07-24 18:22:19.312994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.569 qpair failed and we were unable to recover it. 00:27:26.569 [2024-07-24 18:22:19.313077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.569 [2024-07-24 18:22:19.313087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.569 qpair failed and we were unable to recover it. 00:27:26.569 EAL: No free 2048 kB hugepages reported on node 1 00:27:26.569 [2024-07-24 18:22:19.313296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.569 [2024-07-24 18:22:19.313307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.569 qpair failed and we were unable to recover it. 00:27:26.569 [2024-07-24 18:22:19.313402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.570 [2024-07-24 18:22:19.313412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.570 qpair failed and we were unable to recover it. 00:27:26.570 [2024-07-24 18:22:19.313501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.570 [2024-07-24 18:22:19.313511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.570 qpair failed and we were unable to recover it. 
00:27:26.570 [2024-07-24 18:22:19.313584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.570 [2024-07-24 18:22:19.313594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.570 qpair failed and we were unable to recover it. 00:27:26.570 [2024-07-24 18:22:19.313665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.570 [2024-07-24 18:22:19.313676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.570 qpair failed and we were unable to recover it. 00:27:26.570 [2024-07-24 18:22:19.313743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.570 [2024-07-24 18:22:19.313753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.570 qpair failed and we were unable to recover it. 00:27:26.570 [2024-07-24 18:22:19.313838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.570 [2024-07-24 18:22:19.313848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.570 qpair failed and we were unable to recover it. 00:27:26.570 [2024-07-24 18:22:19.313922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.570 [2024-07-24 18:22:19.313932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.570 qpair failed and we were unable to recover it. 00:27:26.570 [2024-07-24 18:22:19.314070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.570 [2024-07-24 18:22:19.314080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.570 qpair failed and we were unable to recover it. 00:27:26.570 [2024-07-24 18:22:19.314151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.570 [2024-07-24 18:22:19.314160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.570 qpair failed and we were unable to recover it. 00:27:26.570 [2024-07-24 18:22:19.314301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.570 [2024-07-24 18:22:19.314311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.570 qpair failed and we were unable to recover it. 00:27:26.570 [2024-07-24 18:22:19.314388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.570 [2024-07-24 18:22:19.314397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.570 qpair failed and we were unable to recover it. 00:27:26.570 [2024-07-24 18:22:19.314471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.570 [2024-07-24 18:22:19.314481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.570 qpair failed and we were unable to recover it. 
00:27:26.570 [2024-07-24 18:22:19.314564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.570 [2024-07-24 18:22:19.314575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.570 qpair failed and we were unable to recover it. 00:27:26.570 [2024-07-24 18:22:19.314650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.570 [2024-07-24 18:22:19.314660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.570 qpair failed and we were unable to recover it. 00:27:26.570 [2024-07-24 18:22:19.314758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.570 [2024-07-24 18:22:19.314767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.570 qpair failed and we were unable to recover it. 00:27:26.570 [2024-07-24 18:22:19.314839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.570 [2024-07-24 18:22:19.314848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.570 qpair failed and we were unable to recover it. 00:27:26.570 [2024-07-24 18:22:19.314989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.570 [2024-07-24 18:22:19.314999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.570 qpair failed and we were unable to recover it. 00:27:26.570 [2024-07-24 18:22:19.315075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.570 [2024-07-24 18:22:19.315085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.570 qpair failed and we were unable to recover it. 00:27:26.570 [2024-07-24 18:22:19.315163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.570 [2024-07-24 18:22:19.315172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.570 qpair failed and we were unable to recover it. 00:27:26.570 [2024-07-24 18:22:19.315244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.570 [2024-07-24 18:22:19.315254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.570 qpair failed and we were unable to recover it. 00:27:26.570 [2024-07-24 18:22:19.315329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.570 [2024-07-24 18:22:19.315338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.570 qpair failed and we were unable to recover it. 00:27:26.570 [2024-07-24 18:22:19.315499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.570 [2024-07-24 18:22:19.315509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.570 qpair failed and we were unable to recover it. 
00:27:26.570 [2024-07-24 18:22:19.315649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.570 [2024-07-24 18:22:19.315659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.570 qpair failed and we were unable to recover it. 00:27:26.570 [2024-07-24 18:22:19.315806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.570 [2024-07-24 18:22:19.315817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.570 qpair failed and we were unable to recover it. 00:27:26.570 [2024-07-24 18:22:19.315887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.570 [2024-07-24 18:22:19.315897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.570 qpair failed and we were unable to recover it. 00:27:26.570 [2024-07-24 18:22:19.315981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.570 [2024-07-24 18:22:19.315992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.570 qpair failed and we were unable to recover it. 00:27:26.570 [2024-07-24 18:22:19.316084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.570 [2024-07-24 18:22:19.316094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.570 qpair failed and we were unable to recover it. 00:27:26.570 [2024-07-24 18:22:19.316165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.570 [2024-07-24 18:22:19.316176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.570 qpair failed and we were unable to recover it. 00:27:26.570 [2024-07-24 18:22:19.316265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.570 [2024-07-24 18:22:19.316275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.570 qpair failed and we were unable to recover it. 00:27:26.570 [2024-07-24 18:22:19.316358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.570 [2024-07-24 18:22:19.316367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.570 qpair failed and we were unable to recover it. 00:27:26.570 [2024-07-24 18:22:19.316441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.570 [2024-07-24 18:22:19.316451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.570 qpair failed and we were unable to recover it. 00:27:26.570 [2024-07-24 18:22:19.316609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.570 [2024-07-24 18:22:19.316620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.571 qpair failed and we were unable to recover it. 
00:27:26.571 [2024-07-24 18:22:19.316699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.571 [2024-07-24 18:22:19.316709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.571 qpair failed and we were unable to recover it. 00:27:26.571 [2024-07-24 18:22:19.316779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.571 [2024-07-24 18:22:19.316789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.571 qpair failed and we were unable to recover it. 00:27:26.571 [2024-07-24 18:22:19.316861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.571 [2024-07-24 18:22:19.316871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.571 qpair failed and we were unable to recover it. 00:27:26.571 [2024-07-24 18:22:19.316953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.571 [2024-07-24 18:22:19.316963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.571 qpair failed and we were unable to recover it. 00:27:26.571 [2024-07-24 18:22:19.317103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.571 [2024-07-24 18:22:19.317113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.571 qpair failed and we were unable to recover it. 00:27:26.571 [2024-07-24 18:22:19.317198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.571 [2024-07-24 18:22:19.317209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.571 qpair failed and we were unable to recover it. 00:27:26.571 [2024-07-24 18:22:19.317370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.571 [2024-07-24 18:22:19.317380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.571 qpair failed and we were unable to recover it. 00:27:26.571 [2024-07-24 18:22:19.317466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.571 [2024-07-24 18:22:19.317476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.571 qpair failed and we were unable to recover it. 00:27:26.571 [2024-07-24 18:22:19.317624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.571 [2024-07-24 18:22:19.317635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.571 qpair failed and we were unable to recover it. 00:27:26.571 [2024-07-24 18:22:19.317716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.571 [2024-07-24 18:22:19.317727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.571 qpair failed and we were unable to recover it. 
00:27:26.571 [2024-07-24 18:22:19.317808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.571 [2024-07-24 18:22:19.317819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.571 qpair failed and we were unable to recover it. 00:27:26.571 [2024-07-24 18:22:19.317901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.571 [2024-07-24 18:22:19.317911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.571 qpair failed and we were unable to recover it. 00:27:26.571 [2024-07-24 18:22:19.317995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.571 [2024-07-24 18:22:19.318005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.571 qpair failed and we were unable to recover it. 00:27:26.571 [2024-07-24 18:22:19.318076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.571 [2024-07-24 18:22:19.318086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.571 qpair failed and we were unable to recover it. 00:27:26.571 [2024-07-24 18:22:19.318230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.571 [2024-07-24 18:22:19.318240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.571 qpair failed and we were unable to recover it. 00:27:26.571 [2024-07-24 18:22:19.318316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.571 [2024-07-24 18:22:19.318325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.571 qpair failed and we were unable to recover it. 00:27:26.571 [2024-07-24 18:22:19.318397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.571 [2024-07-24 18:22:19.318407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.571 qpair failed and we were unable to recover it. 00:27:26.571 [2024-07-24 18:22:19.318489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.571 [2024-07-24 18:22:19.318516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.571 qpair failed and we were unable to recover it. 00:27:26.571 [2024-07-24 18:22:19.318659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.571 [2024-07-24 18:22:19.318671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.571 qpair failed and we were unable to recover it. 00:27:26.571 [2024-07-24 18:22:19.318742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.571 [2024-07-24 18:22:19.318751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.571 qpair failed and we were unable to recover it. 
00:27:26.571 [2024-07-24 18:22:19.318844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.571 [2024-07-24 18:22:19.318853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.571 qpair failed and we were unable to recover it. 00:27:26.571 [2024-07-24 18:22:19.319007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.571 [2024-07-24 18:22:19.319016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.571 qpair failed and we were unable to recover it. 00:27:26.571 [2024-07-24 18:22:19.319101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.571 [2024-07-24 18:22:19.319111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.571 qpair failed and we were unable to recover it. 00:27:26.571 [2024-07-24 18:22:19.319197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.571 [2024-07-24 18:22:19.319207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.571 qpair failed and we were unable to recover it. 00:27:26.571 [2024-07-24 18:22:19.319295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.571 [2024-07-24 18:22:19.319305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.571 qpair failed and we were unable to recover it. 00:27:26.571 [2024-07-24 18:22:19.319377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.571 [2024-07-24 18:22:19.319387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.571 qpair failed and we were unable to recover it. 00:27:26.571 [2024-07-24 18:22:19.319464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.571 [2024-07-24 18:22:19.319474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.571 qpair failed and we were unable to recover it. 00:27:26.571 [2024-07-24 18:22:19.319559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.571 [2024-07-24 18:22:19.319569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.571 qpair failed and we were unable to recover it. 00:27:26.571 [2024-07-24 18:22:19.319654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.571 [2024-07-24 18:22:19.319664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.571 qpair failed and we were unable to recover it. 00:27:26.571 [2024-07-24 18:22:19.319814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.571 [2024-07-24 18:22:19.319824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.571 qpair failed and we were unable to recover it. 
00:27:26.571 [2024-07-24 18:22:19.319900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.571 [2024-07-24 18:22:19.319910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.571 qpair failed and we were unable to recover it. 00:27:26.571 [2024-07-24 18:22:19.320054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.571 [2024-07-24 18:22:19.320064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.571 qpair failed and we were unable to recover it. 00:27:26.571 [2024-07-24 18:22:19.320140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.571 [2024-07-24 18:22:19.320150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.571 qpair failed and we were unable to recover it. 00:27:26.571 [2024-07-24 18:22:19.320223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.571 [2024-07-24 18:22:19.320233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.571 qpair failed and we were unable to recover it. 00:27:26.571 [2024-07-24 18:22:19.320323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.571 [2024-07-24 18:22:19.320332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.571 qpair failed and we were unable to recover it. 00:27:26.571 [2024-07-24 18:22:19.320409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.571 [2024-07-24 18:22:19.320419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.571 qpair failed and we were unable to recover it. 00:27:26.571 [2024-07-24 18:22:19.320559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.571 [2024-07-24 18:22:19.320570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.571 qpair failed and we were unable to recover it. 00:27:26.571 [2024-07-24 18:22:19.320645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.572 [2024-07-24 18:22:19.320655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.572 qpair failed and we were unable to recover it. 00:27:26.572 [2024-07-24 18:22:19.320742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.572 [2024-07-24 18:22:19.320751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.572 qpair failed and we were unable to recover it. 00:27:26.572 [2024-07-24 18:22:19.320836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.572 [2024-07-24 18:22:19.320845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.572 qpair failed and we were unable to recover it. 
00:27:26.572 [2024-07-24 18:22:19.320927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.572 [2024-07-24 18:22:19.320937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.572 qpair failed and we were unable to recover it. 00:27:26.572 [2024-07-24 18:22:19.321005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.572 [2024-07-24 18:22:19.321014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.572 qpair failed and we were unable to recover it. 00:27:26.572 [2024-07-24 18:22:19.321097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.572 [2024-07-24 18:22:19.321107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.572 qpair failed and we were unable to recover it. 00:27:26.572 [2024-07-24 18:22:19.321177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.572 [2024-07-24 18:22:19.321186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.572 qpair failed and we were unable to recover it. 00:27:26.572 [2024-07-24 18:22:19.321258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.572 [2024-07-24 18:22:19.321268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.572 qpair failed and we were unable to recover it. 00:27:26.572 [2024-07-24 18:22:19.321417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.572 [2024-07-24 18:22:19.321427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.572 qpair failed and we were unable to recover it. 00:27:26.572 [2024-07-24 18:22:19.321501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.572 [2024-07-24 18:22:19.321511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.572 qpair failed and we were unable to recover it. 00:27:26.572 [2024-07-24 18:22:19.321592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.572 [2024-07-24 18:22:19.321602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.572 qpair failed and we were unable to recover it. 00:27:26.572 [2024-07-24 18:22:19.321701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.572 [2024-07-24 18:22:19.321710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.572 qpair failed and we were unable to recover it. 00:27:26.572 [2024-07-24 18:22:19.321791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.572 [2024-07-24 18:22:19.321801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.572 qpair failed and we were unable to recover it. 
00:27:26.572 [2024-07-24 18:22:19.321858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.572 [2024-07-24 18:22:19.321868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.572 qpair failed and we were unable to recover it. 00:27:26.572 [2024-07-24 18:22:19.321943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.572 [2024-07-24 18:22:19.321952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.572 qpair failed and we were unable to recover it. 00:27:26.572 [2024-07-24 18:22:19.322093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.572 [2024-07-24 18:22:19.322103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.572 qpair failed and we were unable to recover it. 00:27:26.572 [2024-07-24 18:22:19.322257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.572 [2024-07-24 18:22:19.322267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.572 qpair failed and we were unable to recover it. 00:27:26.572 [2024-07-24 18:22:19.322406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.572 [2024-07-24 18:22:19.322415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.572 qpair failed and we were unable to recover it. 00:27:26.572 [2024-07-24 18:22:19.322475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.572 [2024-07-24 18:22:19.322484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.572 qpair failed and we were unable to recover it. 00:27:26.572 [2024-07-24 18:22:19.322576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.572 [2024-07-24 18:22:19.322586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.572 qpair failed and we were unable to recover it. 00:27:26.572 [2024-07-24 18:22:19.322769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.572 [2024-07-24 18:22:19.322778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.572 qpair failed and we were unable to recover it. 00:27:26.572 [2024-07-24 18:22:19.322850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.572 [2024-07-24 18:22:19.322862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.572 qpair failed and we were unable to recover it. 00:27:26.572 [2024-07-24 18:22:19.323071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.572 [2024-07-24 18:22:19.323081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.572 qpair failed and we were unable to recover it. 
00:27:26.572 [2024-07-24 18:22:19.323169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.572 [2024-07-24 18:22:19.323178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.572 qpair failed and we were unable to recover it. 00:27:26.572 [2024-07-24 18:22:19.323335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.572 [2024-07-24 18:22:19.323345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.572 qpair failed and we were unable to recover it. 00:27:26.572 [2024-07-24 18:22:19.323410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.572 [2024-07-24 18:22:19.323420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.572 qpair failed and we were unable to recover it. 00:27:26.572 [2024-07-24 18:22:19.323569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.572 [2024-07-24 18:22:19.323580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.572 qpair failed and we were unable to recover it. 00:27:26.572 [2024-07-24 18:22:19.323666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.572 [2024-07-24 18:22:19.323676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.572 qpair failed and we were unable to recover it. 00:27:26.572 [2024-07-24 18:22:19.323750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.572 [2024-07-24 18:22:19.323760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.572 qpair failed and we were unable to recover it. 00:27:26.572 [2024-07-24 18:22:19.323837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.572 [2024-07-24 18:22:19.323847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.572 qpair failed and we were unable to recover it. 00:27:26.572 [2024-07-24 18:22:19.323924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.572 [2024-07-24 18:22:19.323934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.572 qpair failed and we were unable to recover it. 00:27:26.572 [2024-07-24 18:22:19.324006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.572 [2024-07-24 18:22:19.324016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.572 qpair failed and we were unable to recover it. 00:27:26.572 [2024-07-24 18:22:19.324075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.572 [2024-07-24 18:22:19.324085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.572 qpair failed and we were unable to recover it. 
00:27:26.572 [2024-07-24 18:22:19.324153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.572 [2024-07-24 18:22:19.324163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.572 qpair failed and we were unable to recover it. 00:27:26.572 [2024-07-24 18:22:19.324251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.572 [2024-07-24 18:22:19.324261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.572 qpair failed and we were unable to recover it. 00:27:26.572 [2024-07-24 18:22:19.324404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.572 [2024-07-24 18:22:19.324413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.572 qpair failed and we were unable to recover it. 00:27:26.572 [2024-07-24 18:22:19.324484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.572 [2024-07-24 18:22:19.324499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.572 qpair failed and we were unable to recover it. 00:27:26.572 [2024-07-24 18:22:19.324639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.572 [2024-07-24 18:22:19.324649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.572 qpair failed and we were unable to recover it. 00:27:26.573 [2024-07-24 18:22:19.324743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.573 [2024-07-24 18:22:19.324752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.573 qpair failed and we were unable to recover it. 00:27:26.573 [2024-07-24 18:22:19.324825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.573 [2024-07-24 18:22:19.324835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.573 qpair failed and we were unable to recover it. 00:27:26.573 [2024-07-24 18:22:19.324910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.573 [2024-07-24 18:22:19.324920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.573 qpair failed and we were unable to recover it. 00:27:26.573 [2024-07-24 18:22:19.325012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.573 [2024-07-24 18:22:19.325021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.573 qpair failed and we were unable to recover it. 00:27:26.573 [2024-07-24 18:22:19.325105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.573 [2024-07-24 18:22:19.325115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.573 qpair failed and we were unable to recover it. 
00:27:26.573 [2024-07-24 18:22:19.325187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.573 [2024-07-24 18:22:19.325197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.573 qpair failed and we were unable to recover it. 00:27:26.573 [2024-07-24 18:22:19.325283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.573 [2024-07-24 18:22:19.325292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.573 qpair failed and we were unable to recover it. 00:27:26.573 [2024-07-24 18:22:19.325438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.573 [2024-07-24 18:22:19.325447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.573 qpair failed and we were unable to recover it. 00:27:26.573 [2024-07-24 18:22:19.325536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.573 [2024-07-24 18:22:19.325546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.573 qpair failed and we were unable to recover it. 00:27:26.573 [2024-07-24 18:22:19.325621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.573 [2024-07-24 18:22:19.325631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.573 qpair failed and we were unable to recover it. 00:27:26.573 [2024-07-24 18:22:19.325778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.573 [2024-07-24 18:22:19.325788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.573 qpair failed and we were unable to recover it. 00:27:26.573 [2024-07-24 18:22:19.325862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.573 [2024-07-24 18:22:19.325872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.573 qpair failed and we were unable to recover it. 00:27:26.573 [2024-07-24 18:22:19.325955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.573 [2024-07-24 18:22:19.325965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.573 qpair failed and we were unable to recover it. 00:27:26.573 [2024-07-24 18:22:19.326041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.573 [2024-07-24 18:22:19.326051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.573 qpair failed and we were unable to recover it. 00:27:26.573 [2024-07-24 18:22:19.326140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.573 [2024-07-24 18:22:19.326149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.573 qpair failed and we were unable to recover it. 
00:27:26.573 [2024-07-24 18:22:19.326232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.573 [2024-07-24 18:22:19.326241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.573 qpair failed and we were unable to recover it. 00:27:26.573 [2024-07-24 18:22:19.326321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.573 [2024-07-24 18:22:19.326330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.573 qpair failed and we were unable to recover it. 00:27:26.573 [2024-07-24 18:22:19.326475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.573 [2024-07-24 18:22:19.326484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.573 qpair failed and we were unable to recover it. 00:27:26.573 [2024-07-24 18:22:19.326584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.573 [2024-07-24 18:22:19.326594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.573 qpair failed and we were unable to recover it. 00:27:26.573 [2024-07-24 18:22:19.326692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.573 [2024-07-24 18:22:19.326702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.573 qpair failed and we were unable to recover it. 00:27:26.573 [2024-07-24 18:22:19.326780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.573 [2024-07-24 18:22:19.326790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.573 qpair failed and we were unable to recover it. 00:27:26.573 [2024-07-24 18:22:19.326883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.573 [2024-07-24 18:22:19.326893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.573 qpair failed and we were unable to recover it. 00:27:26.573 [2024-07-24 18:22:19.327039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.573 [2024-07-24 18:22:19.327048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.573 qpair failed and we were unable to recover it. 00:27:26.573 [2024-07-24 18:22:19.327134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.573 [2024-07-24 18:22:19.327145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.573 qpair failed and we were unable to recover it. 00:27:26.573 [2024-07-24 18:22:19.327230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.573 [2024-07-24 18:22:19.327240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.573 qpair failed and we were unable to recover it. 
00:27:26.573 [2024-07-24 18:22:19.327314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.573 [2024-07-24 18:22:19.327324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.573 qpair failed and we were unable to recover it. 00:27:26.573 [2024-07-24 18:22:19.327415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.573 [2024-07-24 18:22:19.327425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.573 qpair failed and we were unable to recover it. 00:27:26.573 [2024-07-24 18:22:19.327502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.573 [2024-07-24 18:22:19.327513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.573 qpair failed and we were unable to recover it. 00:27:26.573 [2024-07-24 18:22:19.327597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.573 [2024-07-24 18:22:19.327607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.573 qpair failed and we were unable to recover it. 00:27:26.573 [2024-07-24 18:22:19.327683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.573 [2024-07-24 18:22:19.327693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.573 qpair failed and we were unable to recover it. 00:27:26.573 [2024-07-24 18:22:19.327786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.573 [2024-07-24 18:22:19.327795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.573 qpair failed and we were unable to recover it. 00:27:26.573 [2024-07-24 18:22:19.327866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.573 [2024-07-24 18:22:19.327876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.573 qpair failed and we were unable to recover it. 00:27:26.573 [2024-07-24 18:22:19.327949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.573 [2024-07-24 18:22:19.327959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.573 qpair failed and we were unable to recover it. 00:27:26.573 [2024-07-24 18:22:19.328024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.573 [2024-07-24 18:22:19.328033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.573 qpair failed and we were unable to recover it. 00:27:26.573 [2024-07-24 18:22:19.328116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.573 [2024-07-24 18:22:19.328126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.573 qpair failed and we were unable to recover it. 
00:27:26.573 [2024-07-24 18:22:19.328267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.573 [2024-07-24 18:22:19.328276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.573 qpair failed and we were unable to recover it. 00:27:26.573 [2024-07-24 18:22:19.328413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.573 [2024-07-24 18:22:19.328423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.573 qpair failed and we were unable to recover it. 00:27:26.573 [2024-07-24 18:22:19.328498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.573 [2024-07-24 18:22:19.328508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.573 qpair failed and we were unable to recover it. 00:27:26.574 [2024-07-24 18:22:19.328653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.574 [2024-07-24 18:22:19.328663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.574 qpair failed and we were unable to recover it. 00:27:26.574 [2024-07-24 18:22:19.328773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.574 [2024-07-24 18:22:19.328782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.574 qpair failed and we were unable to recover it. 00:27:26.574 [2024-07-24 18:22:19.328854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.574 [2024-07-24 18:22:19.328863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.574 qpair failed and we were unable to recover it. 00:27:26.574 [2024-07-24 18:22:19.328935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.574 [2024-07-24 18:22:19.328945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.574 qpair failed and we were unable to recover it. 00:27:26.574 [2024-07-24 18:22:19.329014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.574 [2024-07-24 18:22:19.329024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.574 qpair failed and we were unable to recover it. 00:27:26.574 [2024-07-24 18:22:19.329177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.574 [2024-07-24 18:22:19.329186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.574 qpair failed and we were unable to recover it. 00:27:26.574 [2024-07-24 18:22:19.329336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.574 [2024-07-24 18:22:19.329346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.574 qpair failed and we were unable to recover it. 
00:27:26.574 [2024-07-24 18:22:19.329430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.574 [2024-07-24 18:22:19.329440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.574 qpair failed and we were unable to recover it. 00:27:26.574 [2024-07-24 18:22:19.329613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.574 [2024-07-24 18:22:19.329624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.574 qpair failed and we were unable to recover it. 00:27:26.574 [2024-07-24 18:22:19.329721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.574 [2024-07-24 18:22:19.329731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.574 qpair failed and we were unable to recover it. 00:27:26.574 [2024-07-24 18:22:19.329797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.574 [2024-07-24 18:22:19.329807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.574 qpair failed and we were unable to recover it. 00:27:26.574 [2024-07-24 18:22:19.329880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.574 [2024-07-24 18:22:19.329889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.574 qpair failed and we were unable to recover it. 00:27:26.574 [2024-07-24 18:22:19.330034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.574 [2024-07-24 18:22:19.330044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.574 qpair failed and we were unable to recover it. 00:27:26.574 [2024-07-24 18:22:19.330119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.574 [2024-07-24 18:22:19.330128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.574 qpair failed and we were unable to recover it. 00:27:26.574 [2024-07-24 18:22:19.330194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.574 [2024-07-24 18:22:19.330204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.574 qpair failed and we were unable to recover it. 00:27:26.574 [2024-07-24 18:22:19.330294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.574 [2024-07-24 18:22:19.330304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.574 qpair failed and we were unable to recover it. 00:27:26.574 [2024-07-24 18:22:19.330375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.574 [2024-07-24 18:22:19.330385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.574 qpair failed and we were unable to recover it. 
00:27:26.574 [2024-07-24 18:22:19.330455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.574 [2024-07-24 18:22:19.330464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.574 qpair failed and we were unable to recover it. 00:27:26.574 [2024-07-24 18:22:19.330550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.574 [2024-07-24 18:22:19.330560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.574 qpair failed and we were unable to recover it. 00:27:26.574 [2024-07-24 18:22:19.330672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.574 [2024-07-24 18:22:19.330681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.574 qpair failed and we were unable to recover it. 00:27:26.574 [2024-07-24 18:22:19.330821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.574 [2024-07-24 18:22:19.330831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.574 qpair failed and we were unable to recover it. 00:27:26.574 [2024-07-24 18:22:19.330916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.574 [2024-07-24 18:22:19.330925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.574 qpair failed and we were unable to recover it. 00:27:26.574 [2024-07-24 18:22:19.331001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.574 [2024-07-24 18:22:19.331010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.574 qpair failed and we were unable to recover it. 00:27:26.574 [2024-07-24 18:22:19.331089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.574 [2024-07-24 18:22:19.331099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.574 qpair failed and we were unable to recover it. 00:27:26.574 [2024-07-24 18:22:19.331178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.574 [2024-07-24 18:22:19.331188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.574 qpair failed and we were unable to recover it. 00:27:26.574 [2024-07-24 18:22:19.331260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.574 [2024-07-24 18:22:19.331272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.574 qpair failed and we were unable to recover it. 00:27:26.574 [2024-07-24 18:22:19.331360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.574 [2024-07-24 18:22:19.331369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.574 qpair failed and we were unable to recover it. 
00:27:26.574 [2024-07-24 18:22:19.331446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.574 [2024-07-24 18:22:19.331455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.574 qpair failed and we were unable to recover it. 00:27:26.574 [2024-07-24 18:22:19.331529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.574 [2024-07-24 18:22:19.331540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.574 qpair failed and we were unable to recover it. 00:27:26.574 [2024-07-24 18:22:19.331643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.574 [2024-07-24 18:22:19.331653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.574 qpair failed and we were unable to recover it. 00:27:26.574 [2024-07-24 18:22:19.331742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.574 [2024-07-24 18:22:19.331752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.574 qpair failed and we were unable to recover it. 00:27:26.574 [2024-07-24 18:22:19.331832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.574 [2024-07-24 18:22:19.331842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.574 qpair failed and we were unable to recover it. 00:27:26.574 [2024-07-24 18:22:19.332116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.574 [2024-07-24 18:22:19.332126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.574 qpair failed and we were unable to recover it. 00:27:26.574 [2024-07-24 18:22:19.332276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.574 [2024-07-24 18:22:19.332286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.574 qpair failed and we were unable to recover it. 00:27:26.574 [2024-07-24 18:22:19.332360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.574 [2024-07-24 18:22:19.332369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.574 qpair failed and we were unable to recover it. 00:27:26.574 [2024-07-24 18:22:19.332442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.574 [2024-07-24 18:22:19.332451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.574 qpair failed and we were unable to recover it. 00:27:26.574 [2024-07-24 18:22:19.332540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.574 [2024-07-24 18:22:19.332550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.574 qpair failed and we were unable to recover it. 
00:27:26.574 [2024-07-24 18:22:19.332690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.574 [2024-07-24 18:22:19.332699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.575 qpair failed and we were unable to recover it. 00:27:26.575 [2024-07-24 18:22:19.332872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.575 [2024-07-24 18:22:19.332882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.575 qpair failed and we were unable to recover it. 00:27:26.575 [2024-07-24 18:22:19.332966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.575 [2024-07-24 18:22:19.332975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.575 qpair failed and we were unable to recover it. 00:27:26.575 [2024-07-24 18:22:19.333059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.575 [2024-07-24 18:22:19.333069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.575 qpair failed and we were unable to recover it. 00:27:26.575 [2024-07-24 18:22:19.333147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.575 [2024-07-24 18:22:19.333157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.575 qpair failed and we were unable to recover it. 00:27:26.575 [2024-07-24 18:22:19.333242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.575 [2024-07-24 18:22:19.333252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.575 qpair failed and we were unable to recover it. 00:27:26.575 [2024-07-24 18:22:19.333347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.575 [2024-07-24 18:22:19.333356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.575 qpair failed and we were unable to recover it. 00:27:26.575 [2024-07-24 18:22:19.333445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.575 [2024-07-24 18:22:19.333455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.575 qpair failed and we were unable to recover it. 00:27:26.575 [2024-07-24 18:22:19.333604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.575 [2024-07-24 18:22:19.333614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.575 qpair failed and we were unable to recover it. 00:27:26.575 [2024-07-24 18:22:19.333699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.575 [2024-07-24 18:22:19.333709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.575 qpair failed and we were unable to recover it. 
00:27:26.575 [2024-07-24 18:22:19.333782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.575 [2024-07-24 18:22:19.333792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.575 qpair failed and we were unable to recover it. 00:27:26.575 [2024-07-24 18:22:19.333874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.575 [2024-07-24 18:22:19.333884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.575 qpair failed and we were unable to recover it. 00:27:26.575 [2024-07-24 18:22:19.334026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.575 [2024-07-24 18:22:19.334036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.575 qpair failed and we were unable to recover it. 00:27:26.575 [2024-07-24 18:22:19.334108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.575 [2024-07-24 18:22:19.334117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.575 qpair failed and we were unable to recover it. 00:27:26.575 [2024-07-24 18:22:19.334195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.575 [2024-07-24 18:22:19.334205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.575 qpair failed and we were unable to recover it. 00:27:26.575 [2024-07-24 18:22:19.334344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.575 [2024-07-24 18:22:19.334353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.575 qpair failed and we were unable to recover it. 00:27:26.575 [2024-07-24 18:22:19.334421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.575 [2024-07-24 18:22:19.334431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.575 qpair failed and we were unable to recover it. 00:27:26.575 [2024-07-24 18:22:19.334526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.575 [2024-07-24 18:22:19.334537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.575 qpair failed and we were unable to recover it. 00:27:26.575 [2024-07-24 18:22:19.334869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.575 [2024-07-24 18:22:19.334879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.575 qpair failed and we were unable to recover it. 00:27:26.575 [2024-07-24 18:22:19.334965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.575 [2024-07-24 18:22:19.334974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.575 qpair failed and we were unable to recover it. 
00:27:26.575 [2024-07-24 18:22:19.335078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.575 [2024-07-24 18:22:19.335088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.575 qpair failed and we were unable to recover it. 00:27:26.575 [2024-07-24 18:22:19.335159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.575 [2024-07-24 18:22:19.335169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.575 qpair failed and we were unable to recover it. 00:27:26.575 [2024-07-24 18:22:19.335244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.575 [2024-07-24 18:22:19.335253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.575 qpair failed and we were unable to recover it. 00:27:26.575 [2024-07-24 18:22:19.335341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.575 [2024-07-24 18:22:19.335351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.575 qpair failed and we were unable to recover it. 00:27:26.575 [2024-07-24 18:22:19.335512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.575 [2024-07-24 18:22:19.335522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.575 qpair failed and we were unable to recover it. 00:27:26.575 [2024-07-24 18:22:19.335656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.575 [2024-07-24 18:22:19.335666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.575 qpair failed and we were unable to recover it. 00:27:26.575 [2024-07-24 18:22:19.335827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.575 [2024-07-24 18:22:19.335837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.575 qpair failed and we were unable to recover it. 00:27:26.575 [2024-07-24 18:22:19.335912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.575 [2024-07-24 18:22:19.335922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.575 qpair failed and we were unable to recover it. 00:27:26.575 [2024-07-24 18:22:19.336062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.575 [2024-07-24 18:22:19.336074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.575 qpair failed and we were unable to recover it. 00:27:26.575 [2024-07-24 18:22:19.336172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.575 [2024-07-24 18:22:19.336181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.575 qpair failed and we were unable to recover it. 
00:27:26.575 [2024-07-24 18:22:19.336265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.575 [2024-07-24 18:22:19.336274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.575 qpair failed and we were unable to recover it. 00:27:26.575 [2024-07-24 18:22:19.336352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.575 [2024-07-24 18:22:19.336362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.575 qpair failed and we were unable to recover it. 00:27:26.575 [2024-07-24 18:22:19.336442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.575 [2024-07-24 18:22:19.336451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.575 qpair failed and we were unable to recover it. 00:27:26.576 [2024-07-24 18:22:19.336541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.576 [2024-07-24 18:22:19.336551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.576 qpair failed and we were unable to recover it. 00:27:26.576 [2024-07-24 18:22:19.336633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.576 [2024-07-24 18:22:19.336642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.576 qpair failed and we were unable to recover it. 00:27:26.576 [2024-07-24 18:22:19.336716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.576 [2024-07-24 18:22:19.336726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.576 qpair failed and we were unable to recover it. 00:27:26.576 [2024-07-24 18:22:19.336812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.576 [2024-07-24 18:22:19.336822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.576 qpair failed and we were unable to recover it. 00:27:26.576 [2024-07-24 18:22:19.336923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.576 [2024-07-24 18:22:19.336934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.576 qpair failed and we were unable to recover it. 00:27:26.576 [2024-07-24 18:22:19.337018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.576 [2024-07-24 18:22:19.337028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.576 qpair failed and we were unable to recover it. 00:27:26.576 [2024-07-24 18:22:19.337102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.576 [2024-07-24 18:22:19.337112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.576 qpair failed and we were unable to recover it. 
00:27:26.576 [2024-07-24 18:22:19.337265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.576 [2024-07-24 18:22:19.337275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.576 qpair failed and we were unable to recover it. 00:27:26.576 [2024-07-24 18:22:19.337359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.576 [2024-07-24 18:22:19.337369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.576 qpair failed and we were unable to recover it. 00:27:26.576 [2024-07-24 18:22:19.337443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.576 [2024-07-24 18:22:19.337452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.576 qpair failed and we were unable to recover it. 00:27:26.576 [2024-07-24 18:22:19.337544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.576 [2024-07-24 18:22:19.337554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.576 qpair failed and we were unable to recover it. 00:27:26.576 [2024-07-24 18:22:19.337641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.576 [2024-07-24 18:22:19.337651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.576 qpair failed and we were unable to recover it. 00:27:26.576 [2024-07-24 18:22:19.337790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.576 [2024-07-24 18:22:19.337800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.576 qpair failed and we were unable to recover it. 00:27:26.576 [2024-07-24 18:22:19.337968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.576 [2024-07-24 18:22:19.337977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.576 qpair failed and we were unable to recover it. 00:27:26.576 [2024-07-24 18:22:19.338065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.576 [2024-07-24 18:22:19.338075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.576 qpair failed and we were unable to recover it. 00:27:26.576 [2024-07-24 18:22:19.338148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.576 [2024-07-24 18:22:19.338159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.576 qpair failed and we were unable to recover it. 00:27:26.576 [2024-07-24 18:22:19.338256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.576 [2024-07-24 18:22:19.338265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.576 qpair failed and we were unable to recover it. 
00:27:26.576 [2024-07-24 18:22:19.338409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.576 [2024-07-24 18:22:19.338419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.576 qpair failed and we were unable to recover it. 00:27:26.576 [2024-07-24 18:22:19.338556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.576 [2024-07-24 18:22:19.338567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.576 qpair failed and we were unable to recover it. 00:27:26.576 [2024-07-24 18:22:19.338726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.576 [2024-07-24 18:22:19.338736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.576 qpair failed and we were unable to recover it. 00:27:26.576 [2024-07-24 18:22:19.338811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.576 [2024-07-24 18:22:19.338821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.576 qpair failed and we were unable to recover it. 00:27:26.576 [2024-07-24 18:22:19.338878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.576 [2024-07-24 18:22:19.338888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.576 qpair failed and we were unable to recover it. 00:27:26.576 [2024-07-24 18:22:19.338966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.576 [2024-07-24 18:22:19.338976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.576 qpair failed and we were unable to recover it. 00:27:26.576 [2024-07-24 18:22:19.339055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.576 [2024-07-24 18:22:19.339066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.576 qpair failed and we were unable to recover it. 00:27:26.576 [2024-07-24 18:22:19.339215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.576 [2024-07-24 18:22:19.339225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.576 qpair failed and we were unable to recover it. 00:27:26.576 [2024-07-24 18:22:19.339304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.576 [2024-07-24 18:22:19.339314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.576 qpair failed and we were unable to recover it. 00:27:26.576 [2024-07-24 18:22:19.339472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.576 [2024-07-24 18:22:19.339481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.576 qpair failed and we were unable to recover it. 
00:27:26.576 [2024-07-24 18:22:19.339578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.576 [2024-07-24 18:22:19.339587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.576 qpair failed and we were unable to recover it. 00:27:26.576 [2024-07-24 18:22:19.339669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.576 [2024-07-24 18:22:19.339679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.576 qpair failed and we were unable to recover it. 00:27:26.576 [2024-07-24 18:22:19.339752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.576 [2024-07-24 18:22:19.339762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.576 qpair failed and we were unable to recover it. 00:27:26.576 [2024-07-24 18:22:19.339969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.576 [2024-07-24 18:22:19.339979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.576 qpair failed and we were unable to recover it. 00:27:26.576 [2024-07-24 18:22:19.340052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.576 [2024-07-24 18:22:19.340062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.576 qpair failed and we were unable to recover it. 00:27:26.576 [2024-07-24 18:22:19.340156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.576 [2024-07-24 18:22:19.340166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.576 qpair failed and we were unable to recover it. 00:27:26.576 [2024-07-24 18:22:19.340248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.576 [2024-07-24 18:22:19.340258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.576 qpair failed and we were unable to recover it. 00:27:26.576 [2024-07-24 18:22:19.340397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.576 [2024-07-24 18:22:19.340407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.576 qpair failed and we were unable to recover it. 00:27:26.576 [2024-07-24 18:22:19.340519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.576 [2024-07-24 18:22:19.340531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.576 qpair failed and we were unable to recover it. 00:27:26.576 [2024-07-24 18:22:19.340620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.576 [2024-07-24 18:22:19.340629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.576 qpair failed and we were unable to recover it. 
00:27:26.576 [2024-07-24 18:22:19.340787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.576 [2024-07-24 18:22:19.340797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.577 qpair failed and we were unable to recover it. 00:27:26.577 [2024-07-24 18:22:19.340871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.577 [2024-07-24 18:22:19.340881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.577 qpair failed and we were unable to recover it. 00:27:26.577 [2024-07-24 18:22:19.340950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.577 [2024-07-24 18:22:19.340960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.577 qpair failed and we were unable to recover it. 00:27:26.577 [2024-07-24 18:22:19.341025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.577 [2024-07-24 18:22:19.341035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.577 qpair failed and we were unable to recover it. 00:27:26.577 [2024-07-24 18:22:19.341191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.577 [2024-07-24 18:22:19.341201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.577 qpair failed and we were unable to recover it. 00:27:26.577 [2024-07-24 18:22:19.341502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.577 [2024-07-24 18:22:19.341512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.577 qpair failed and we were unable to recover it. 00:27:26.577 [2024-07-24 18:22:19.341656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.577 [2024-07-24 18:22:19.341666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.577 qpair failed and we were unable to recover it. 00:27:26.577 [2024-07-24 18:22:19.341824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.577 [2024-07-24 18:22:19.341835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.577 qpair failed and we were unable to recover it. 00:27:26.577 [2024-07-24 18:22:19.341910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.577 [2024-07-24 18:22:19.341920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.577 qpair failed and we were unable to recover it. 00:27:26.577 [2024-07-24 18:22:19.342020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.577 [2024-07-24 18:22:19.342030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.577 qpair failed and we were unable to recover it. 
00:27:26.577 [2024-07-24 18:22:19.342279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.577 [2024-07-24 18:22:19.342288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.577 qpair failed and we were unable to recover it.
00:27:26.577 [2024-07-24 18:22:19.342436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.577 [2024-07-24 18:22:19.342446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.577 qpair failed and we were unable to recover it.
00:27:26.577 [2024-07-24 18:22:19.342723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.577 [2024-07-24 18:22:19.342733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.577 qpair failed and we were unable to recover it.
00:27:26.577 [2024-07-24 18:22:19.342825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.577 [2024-07-24 18:22:19.342835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.577 qpair failed and we were unable to recover it.
00:27:26.577 [2024-07-24 18:22:19.343040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.577 [2024-07-24 18:22:19.343050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.577 qpair failed and we were unable to recover it.
00:27:26.577 [2024-07-24 18:22:19.343139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.577 [2024-07-24 18:22:19.343149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.577 qpair failed and we were unable to recover it.
00:27:26.577 [2024-07-24 18:22:19.343381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.577 [2024-07-24 18:22:19.343391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.577 qpair failed and we were unable to recover it.
00:27:26.577 [2024-07-24 18:22:19.343534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.577 [2024-07-24 18:22:19.343544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.577 qpair failed and we were unable to recover it.
00:27:26.577 [2024-07-24 18:22:19.343636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.577 [2024-07-24 18:22:19.343645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.577 qpair failed and we were unable to recover it.
00:27:26.577 [2024-07-24 18:22:19.343724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.577 [2024-07-24 18:22:19.343734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.577 qpair failed and we were unable to recover it.
00:27:26.577 [2024-07-24 18:22:19.343965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.577 [2024-07-24 18:22:19.343975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.577 qpair failed and we were unable to recover it.
00:27:26.577 [2024-07-24 18:22:19.344137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.577 [2024-07-24 18:22:19.344146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.577 qpair failed and we were unable to recover it.
00:27:26.577 [2024-07-24 18:22:19.344258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.577 [2024-07-24 18:22:19.344268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.577 qpair failed and we were unable to recover it.
00:27:26.577 [2024-07-24 18:22:19.344349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.577 [2024-07-24 18:22:19.344359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.577 qpair failed and we were unable to recover it.
00:27:26.577 [2024-07-24 18:22:19.344534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.577 [2024-07-24 18:22:19.344544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.577 qpair failed and we were unable to recover it.
00:27:26.577 [2024-07-24 18:22:19.344644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.577 [2024-07-24 18:22:19.344654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.577 qpair failed and we were unable to recover it.
00:27:26.577 [2024-07-24 18:22:19.344811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.577 [2024-07-24 18:22:19.344821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.577 qpair failed and we were unable to recover it.
00:27:26.577 [2024-07-24 18:22:19.344911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.577 [2024-07-24 18:22:19.344921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.577 qpair failed and we were unable to recover it.
00:27:26.577 [2024-07-24 18:22:19.345126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.577 [2024-07-24 18:22:19.345136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.577 qpair failed and we were unable to recover it.
00:27:26.577 [2024-07-24 18:22:19.345410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.577 [2024-07-24 18:22:19.345420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.577 qpair failed and we were unable to recover it.
00:27:26.577 [2024-07-24 18:22:19.345578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.577 [2024-07-24 18:22:19.345588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.577 qpair failed and we were unable to recover it.
00:27:26.577 [2024-07-24 18:22:19.345671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.577 [2024-07-24 18:22:19.345681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.577 qpair failed and we were unable to recover it.
00:27:26.577 [2024-07-24 18:22:19.345906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.577 [2024-07-24 18:22:19.345917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.577 qpair failed and we were unable to recover it.
00:27:26.577 [2024-07-24 18:22:19.346072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.577 [2024-07-24 18:22:19.346083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.577 qpair failed and we were unable to recover it.
00:27:26.577 [2024-07-24 18:22:19.346234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.577 [2024-07-24 18:22:19.346244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.577 qpair failed and we were unable to recover it.
00:27:26.577 [2024-07-24 18:22:19.346426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.577 [2024-07-24 18:22:19.346436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.577 qpair failed and we were unable to recover it.
00:27:26.577 [2024-07-24 18:22:19.346536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.577 [2024-07-24 18:22:19.346547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.577 qpair failed and we were unable to recover it.
00:27:26.577 [2024-07-24 18:22:19.346641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.577 [2024-07-24 18:22:19.346651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.578 qpair failed and we were unable to recover it.
00:27:26.578 [2024-07-24 18:22:19.346856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.578 [2024-07-24 18:22:19.346870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.578 qpair failed and we were unable to recover it.
00:27:26.578 [2024-07-24 18:22:19.346954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.578 [2024-07-24 18:22:19.346963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.578 qpair failed and we were unable to recover it.
00:27:26.578 [2024-07-24 18:22:19.347122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.578 [2024-07-24 18:22:19.347131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.578 qpair failed and we were unable to recover it.
00:27:26.578 [2024-07-24 18:22:19.347291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.578 [2024-07-24 18:22:19.347300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.578 qpair failed and we were unable to recover it.
00:27:26.578 [2024-07-24 18:22:19.347502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.578 [2024-07-24 18:22:19.347512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.578 qpair failed and we were unable to recover it.
00:27:26.578 [2024-07-24 18:22:19.347739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.578 [2024-07-24 18:22:19.347749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.578 qpair failed and we were unable to recover it.
00:27:26.578 [2024-07-24 18:22:19.347898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.578 [2024-07-24 18:22:19.347908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.578 qpair failed and we were unable to recover it.
00:27:26.578 [2024-07-24 18:22:19.348136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.578 [2024-07-24 18:22:19.348146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.578 qpair failed and we were unable to recover it.
00:27:26.578 [2024-07-24 18:22:19.348374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.578 [2024-07-24 18:22:19.348384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.578 qpair failed and we were unable to recover it.
00:27:26.578 [2024-07-24 18:22:19.348614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.578 [2024-07-24 18:22:19.348624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.578 qpair failed and we were unable to recover it.
00:27:26.578 [2024-07-24 18:22:19.348818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.578 [2024-07-24 18:22:19.348827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.578 qpair failed and we were unable to recover it.
00:27:26.578 [2024-07-24 18:22:19.348985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.578 [2024-07-24 18:22:19.348994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.578 qpair failed and we were unable to recover it.
00:27:26.578 [2024-07-24 18:22:19.349244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.578 [2024-07-24 18:22:19.349254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.578 qpair failed and we were unable to recover it.
00:27:26.578 [2024-07-24 18:22:19.349459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.578 [2024-07-24 18:22:19.349469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.578 qpair failed and we were unable to recover it.
00:27:26.578 [2024-07-24 18:22:19.349705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.578 [2024-07-24 18:22:19.349715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.578 qpair failed and we were unable to recover it.
00:27:26.578 [2024-07-24 18:22:19.349873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.578 [2024-07-24 18:22:19.349882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.578 qpair failed and we were unable to recover it.
00:27:26.578 [2024-07-24 18:22:19.349971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.578 [2024-07-24 18:22:19.349981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.578 qpair failed and we were unable to recover it.
00:27:26.578 [2024-07-24 18:22:19.350207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.578 [2024-07-24 18:22:19.350217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.578 qpair failed and we were unable to recover it.
00:27:26.578 [2024-07-24 18:22:19.350391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.578 [2024-07-24 18:22:19.350401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.578 qpair failed and we were unable to recover it.
00:27:26.578 [2024-07-24 18:22:19.350558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.578 [2024-07-24 18:22:19.350568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.578 qpair failed and we were unable to recover it.
00:27:26.578 [2024-07-24 18:22:19.350729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.578 [2024-07-24 18:22:19.350739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.578 qpair failed and we were unable to recover it.
00:27:26.578 [2024-07-24 18:22:19.350894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.578 [2024-07-24 18:22:19.350903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.578 qpair failed and we were unable to recover it.
00:27:26.578 [2024-07-24 18:22:19.351160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.578 [2024-07-24 18:22:19.351170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.578 qpair failed and we were unable to recover it.
00:27:26.578 [2024-07-24 18:22:19.351326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.578 [2024-07-24 18:22:19.351335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.578 qpair failed and we were unable to recover it.
00:27:26.578 [2024-07-24 18:22:19.351559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.578 [2024-07-24 18:22:19.351569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.578 qpair failed and we were unable to recover it.
00:27:26.578 [2024-07-24 18:22:19.351798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.578 [2024-07-24 18:22:19.351808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.578 qpair failed and we were unable to recover it.
00:27:26.578 [2024-07-24 18:22:19.351978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.578 [2024-07-24 18:22:19.351987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.578 qpair failed and we were unable to recover it.
00:27:26.578 [2024-07-24 18:22:19.352255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.578 [2024-07-24 18:22:19.352265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.578 qpair failed and we were unable to recover it.
00:27:26.578 [2024-07-24 18:22:19.352413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.578 [2024-07-24 18:22:19.352422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.578 qpair failed and we were unable to recover it.
00:27:26.578 [2024-07-24 18:22:19.352646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.578 [2024-07-24 18:22:19.352656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.578 qpair failed and we were unable to recover it.
00:27:26.578 [2024-07-24 18:22:19.352812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.578 [2024-07-24 18:22:19.352821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.578 qpair failed and we were unable to recover it.
00:27:26.578 [2024-07-24 18:22:19.352928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.578 [2024-07-24 18:22:19.352938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.578 qpair failed and we were unable to recover it.
00:27:26.578 [2024-07-24 18:22:19.353076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.578 [2024-07-24 18:22:19.353086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.578 qpair failed and we were unable to recover it.
00:27:26.578 [2024-07-24 18:22:19.353240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.578 [2024-07-24 18:22:19.353250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.578 qpair failed and we were unable to recover it.
00:27:26.578 [2024-07-24 18:22:19.353334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.578 [2024-07-24 18:22:19.353343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.578 qpair failed and we were unable to recover it.
00:27:26.578 [2024-07-24 18:22:19.353522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.578 [2024-07-24 18:22:19.353532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.578 qpair failed and we were unable to recover it.
00:27:26.578 [2024-07-24 18:22:19.353620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.578 [2024-07-24 18:22:19.353630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.578 qpair failed and we were unable to recover it.
00:27:26.578 [2024-07-24 18:22:19.353764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.579 [2024-07-24 18:22:19.353773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.579 qpair failed and we were unable to recover it.
00:27:26.579 [2024-07-24 18:22:19.353929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.579 [2024-07-24 18:22:19.353938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.579 qpair failed and we were unable to recover it.
00:27:26.579 [2024-07-24 18:22:19.354085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.579 [2024-07-24 18:22:19.354094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.579 qpair failed and we were unable to recover it.
00:27:26.579 [2024-07-24 18:22:19.354351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.579 [2024-07-24 18:22:19.354362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.579 qpair failed and we were unable to recover it.
00:27:26.579 [2024-07-24 18:22:19.354525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.579 [2024-07-24 18:22:19.354535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.579 qpair failed and we were unable to recover it.
00:27:26.579 [2024-07-24 18:22:19.354690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.579 [2024-07-24 18:22:19.354699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.579 qpair failed and we were unable to recover it.
00:27:26.579 [2024-07-24 18:22:19.354852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.579 [2024-07-24 18:22:19.354862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.579 qpair failed and we were unable to recover it.
00:27:26.579 [2024-07-24 18:22:19.355081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.579 [2024-07-24 18:22:19.355091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.579 qpair failed and we were unable to recover it.
00:27:26.579 [2024-07-24 18:22:19.355176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.579 [2024-07-24 18:22:19.355186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.579 qpair failed and we were unable to recover it.
00:27:26.579 [2024-07-24 18:22:19.355437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.579 [2024-07-24 18:22:19.355446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.579 qpair failed and we were unable to recover it.
00:27:26.579 [2024-07-24 18:22:19.355625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.579 [2024-07-24 18:22:19.355635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.579 qpair failed and we were unable to recover it.
00:27:26.579 [2024-07-24 18:22:19.355826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.579 [2024-07-24 18:22:19.355836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.579 qpair failed and we were unable to recover it.
00:27:26.579 [2024-07-24 18:22:19.355977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.579 [2024-07-24 18:22:19.355986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.579 qpair failed and we were unable to recover it.
00:27:26.579 [2024-07-24 18:22:19.356221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.579 [2024-07-24 18:22:19.356230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.579 qpair failed and we were unable to recover it.
00:27:26.579 [2024-07-24 18:22:19.356479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.579 [2024-07-24 18:22:19.356489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.579 qpair failed and we were unable to recover it.
00:27:26.579 [2024-07-24 18:22:19.356654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.579 [2024-07-24 18:22:19.356664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.579 qpair failed and we were unable to recover it.
00:27:26.579 [2024-07-24 18:22:19.356870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.579 [2024-07-24 18:22:19.356880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.579 qpair failed and we were unable to recover it.
00:27:26.579 [2024-07-24 18:22:19.357044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.579 [2024-07-24 18:22:19.357054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.579 qpair failed and we were unable to recover it.
00:27:26.579 [2024-07-24 18:22:19.357329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.579 [2024-07-24 18:22:19.357339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.579 qpair failed and we were unable to recover it.
00:27:26.579 [2024-07-24 18:22:19.357509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.579 [2024-07-24 18:22:19.357519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.579 qpair failed and we were unable to recover it.
00:27:26.579 [2024-07-24 18:22:19.357682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.579 [2024-07-24 18:22:19.357691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.579 qpair failed and we were unable to recover it.
00:27:26.579 [2024-07-24 18:22:19.357897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.579 [2024-07-24 18:22:19.357907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.579 qpair failed and we were unable to recover it.
00:27:26.579 [2024-07-24 18:22:19.358078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.579 [2024-07-24 18:22:19.358088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.579 qpair failed and we were unable to recover it.
00:27:26.579 [2024-07-24 18:22:19.358319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.579 [2024-07-24 18:22:19.358329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.579 qpair failed and we were unable to recover it.
00:27:26.579 [2024-07-24 18:22:19.358569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.579 [2024-07-24 18:22:19.358578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.579 qpair failed and we were unable to recover it.
00:27:26.579 [2024-07-24 18:22:19.358731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.579 [2024-07-24 18:22:19.358741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.579 qpair failed and we were unable to recover it.
00:27:26.579 [2024-07-24 18:22:19.358846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.579 [2024-07-24 18:22:19.358856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.579 qpair failed and we were unable to recover it.
00:27:26.579 [2024-07-24 18:22:19.359015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.579 [2024-07-24 18:22:19.359025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.579 qpair failed and we were unable to recover it.
00:27:26.579 [2024-07-24 18:22:19.359197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.579 [2024-07-24 18:22:19.359206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.579 qpair failed and we were unable to recover it.
00:27:26.579 [2024-07-24 18:22:19.359368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.579 [2024-07-24 18:22:19.359377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.579 [2024-07-24 18:22:19.359371] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:27:26.579 qpair failed and we were unable to recover it.
00:27:26.579 [2024-07-24 18:22:19.359464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.579 [2024-07-24 18:22:19.359474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.579 qpair failed and we were unable to recover it.
00:27:26.579 [2024-07-24 18:22:19.359636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.579 [2024-07-24 18:22:19.359647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.580 qpair failed and we were unable to recover it.
00:27:26.580 [2024-07-24 18:22:19.359798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.580 [2024-07-24 18:22:19.359808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.580 qpair failed and we were unable to recover it.
00:27:26.580 [2024-07-24 18:22:19.359892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.580 [2024-07-24 18:22:19.359902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.580 qpair failed and we were unable to recover it.
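For anyone triaging this stretch of the console output: errno = 111 is ECONNREFUSED on Linux, and the interleaved app.c NOTICE above shows the SPDK target application only just starting while the initiator is already dialing 10.0.0.2:4420, so at this point nothing is listening on that port. A minimal standalone sketch of the same failure mode (illustrative only, not SPDK source; the address and port are copied from the log lines):

/* Illustrative only: a plain TCP connect() to an address with no listener
 * fails with ECONNREFUSED (errno 111 on Linux), the same error that
 * posix_sock_create reports in the log above. */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);               /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

    close(fd);
    return 0;
}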
00:27:26.580 [2024-07-24 18:22:19.360132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.580 [2024-07-24 18:22:19.360143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.580 qpair failed and we were unable to recover it.
00:27:26.580 [2024-07-24 18:22:19.360419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.580 [2024-07-24 18:22:19.360429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.580 qpair failed and we were unable to recover it.
00:27:26.580 [2024-07-24 18:22:19.360610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.580 [2024-07-24 18:22:19.360622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.580 qpair failed and we were unable to recover it.
00:27:26.580 [2024-07-24 18:22:19.360766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.580 [2024-07-24 18:22:19.360776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.580 qpair failed and we were unable to recover it.
00:27:26.580 [2024-07-24 18:22:19.360863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.580 [2024-07-24 18:22:19.360873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.580 qpair failed and we were unable to recover it.
00:27:26.580 [2024-07-24 18:22:19.361108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.580 [2024-07-24 18:22:19.361118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.580 qpair failed and we were unable to recover it.
00:27:26.580 [2024-07-24 18:22:19.361264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.580 [2024-07-24 18:22:19.361273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.580 qpair failed and we were unable to recover it.
00:27:26.580 [2024-07-24 18:22:19.361499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.580 [2024-07-24 18:22:19.361510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.580 qpair failed and we were unable to recover it.
00:27:26.580 [2024-07-24 18:22:19.361715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.580 [2024-07-24 18:22:19.361725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.580 qpair failed and we were unable to recover it.
00:27:26.580 [2024-07-24 18:22:19.361881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.580 [2024-07-24 18:22:19.361891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.580 qpair failed and we were unable to recover it.
00:27:26.580 [2024-07-24 18:22:19.362042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.580 [2024-07-24 18:22:19.362052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.580 qpair failed and we were unable to recover it.
00:27:26.580 [2024-07-24 18:22:19.362207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.580 [2024-07-24 18:22:19.362217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.580 qpair failed and we were unable to recover it.
00:27:26.580 [2024-07-24 18:22:19.362372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.580 [2024-07-24 18:22:19.362382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.580 qpair failed and we were unable to recover it.
00:27:26.580 [2024-07-24 18:22:19.362523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.580 [2024-07-24 18:22:19.362533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.580 qpair failed and we were unable to recover it.
00:27:26.580 [2024-07-24 18:22:19.362726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.580 [2024-07-24 18:22:19.362736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.580 qpair failed and we were unable to recover it.
00:27:26.580 [2024-07-24 18:22:19.362900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.580 [2024-07-24 18:22:19.362910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.580 qpair failed and we were unable to recover it.
00:27:26.580 [2024-07-24 18:22:19.363133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.580 [2024-07-24 18:22:19.363143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.580 qpair failed and we were unable to recover it.
00:27:26.580 [2024-07-24 18:22:19.363366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.580 [2024-07-24 18:22:19.363375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.580 qpair failed and we were unable to recover it.
00:27:26.580 [2024-07-24 18:22:19.363552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.580 [2024-07-24 18:22:19.363563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.580 qpair failed and we were unable to recover it.
00:27:26.580 [2024-07-24 18:22:19.363750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.580 [2024-07-24 18:22:19.363760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.580 qpair failed and we were unable to recover it.
00:27:26.580 [2024-07-24 18:22:19.363956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.580 [2024-07-24 18:22:19.363965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.580 qpair failed and we were unable to recover it.
00:27:26.580 [2024-07-24 18:22:19.364115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.580 [2024-07-24 18:22:19.364125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.580 qpair failed and we were unable to recover it.
00:27:26.580 [2024-07-24 18:22:19.364303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.580 [2024-07-24 18:22:19.364315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.580 qpair failed and we were unable to recover it.
00:27:26.580 [2024-07-24 18:22:19.364601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.580 [2024-07-24 18:22:19.364612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.580 qpair failed and we were unable to recover it.
00:27:26.580 [2024-07-24 18:22:19.364836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.580 [2024-07-24 18:22:19.364847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.580 qpair failed and we were unable to recover it.
00:27:26.580 [2024-07-24 18:22:19.365078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.580 [2024-07-24 18:22:19.365089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.580 qpair failed and we were unable to recover it.
00:27:26.580 [2024-07-24 18:22:19.365297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.580 [2024-07-24 18:22:19.365307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.580 qpair failed and we were unable to recover it.
00:27:26.580 [2024-07-24 18:22:19.365587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.580 [2024-07-24 18:22:19.365598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.580 qpair failed and we were unable to recover it.
00:27:26.580 [2024-07-24 18:22:19.365755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.580 [2024-07-24 18:22:19.365764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.580 qpair failed and we were unable to recover it.
00:27:26.580 [2024-07-24 18:22:19.365937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.580 [2024-07-24 18:22:19.365947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.580 qpair failed and we were unable to recover it.
00:27:26.580 [2024-07-24 18:22:19.366120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.580 [2024-07-24 18:22:19.366130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.580 qpair failed and we were unable to recover it.
00:27:26.580 [2024-07-24 18:22:19.366225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.580 [2024-07-24 18:22:19.366234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.580 qpair failed and we were unable to recover it.
00:27:26.580 [2024-07-24 18:22:19.366373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.581 [2024-07-24 18:22:19.366383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.581 qpair failed and we were unable to recover it.
00:27:26.581 [2024-07-24 18:22:19.366596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.581 [2024-07-24 18:22:19.366606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.581 qpair failed and we were unable to recover it.
00:27:26.581 [2024-07-24 18:22:19.366832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.581 [2024-07-24 18:22:19.366843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.581 qpair failed and we were unable to recover it.
00:27:26.581 [2024-07-24 18:22:19.367070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.581 [2024-07-24 18:22:19.367081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.581 qpair failed and we were unable to recover it.
00:27:26.581 [2024-07-24 18:22:19.367339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.581 [2024-07-24 18:22:19.367349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.581 qpair failed and we were unable to recover it.
00:27:26.581 [2024-07-24 18:22:19.367521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.581 [2024-07-24 18:22:19.367531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.581 qpair failed and we were unable to recover it.
00:27:26.581 [2024-07-24 18:22:19.367724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.581 [2024-07-24 18:22:19.367734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.581 qpair failed and we were unable to recover it.
00:27:26.581 [2024-07-24 18:22:19.367916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.581 [2024-07-24 18:22:19.367926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.581 qpair failed and we were unable to recover it.
00:27:26.581 [2024-07-24 18:22:19.368064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.581 [2024-07-24 18:22:19.368074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.581 qpair failed and we were unable to recover it.
00:27:26.581 [2024-07-24 18:22:19.368228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.581 [2024-07-24 18:22:19.368238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.581 qpair failed and we were unable to recover it.
00:27:26.581 [2024-07-24 18:22:19.368387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.581 [2024-07-24 18:22:19.368397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.581 qpair failed and we were unable to recover it.
00:27:26.581 [2024-07-24 18:22:19.368634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.581 [2024-07-24 18:22:19.368645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.581 qpair failed and we were unable to recover it.
00:27:26.581 [2024-07-24 18:22:19.368872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.581 [2024-07-24 18:22:19.368882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.581 qpair failed and we were unable to recover it.
00:27:26.581 [2024-07-24 18:22:19.369038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.581 [2024-07-24 18:22:19.369048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.581 qpair failed and we were unable to recover it.
00:27:26.581 [2024-07-24 18:22:19.369257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.581 [2024-07-24 18:22:19.369267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.581 qpair failed and we were unable to recover it.
00:27:26.581 [2024-07-24 18:22:19.369476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.581 [2024-07-24 18:22:19.369485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.581 qpair failed and we were unable to recover it.
00:27:26.581 [2024-07-24 18:22:19.369697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.581 [2024-07-24 18:22:19.369708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.581 qpair failed and we were unable to recover it.
00:27:26.581 [2024-07-24 18:22:19.369977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.581 [2024-07-24 18:22:19.370013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420
00:27:26.581 qpair failed and we were unable to recover it.
00:27:26.581 [2024-07-24 18:22:19.370296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.581 [2024-07-24 18:22:19.370322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:26.581 qpair failed and we were unable to recover it.
00:27:26.581 [2024-07-24 18:22:19.370552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.581 [2024-07-24 18:22:19.370570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:26.581 qpair failed and we were unable to recover it.
00:27:26.581 [2024-07-24 18:22:19.370817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.581 [2024-07-24 18:22:19.370832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:26.581 qpair failed and we were unable to recover it.
00:27:26.581 [2024-07-24 18:22:19.370954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.581 [2024-07-24 18:22:19.370969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:26.581 qpair failed and we were unable to recover it.
00:27:26.581 [2024-07-24 18:22:19.371134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.581 [2024-07-24 18:22:19.371148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:26.581 qpair failed and we were unable to recover it.
00:27:26.581 [2024-07-24 18:22:19.371391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.581 [2024-07-24 18:22:19.371403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.581 qpair failed and we were unable to recover it.
00:27:26.581 [2024-07-24 18:22:19.371620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.581 [2024-07-24 18:22:19.371630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.581 qpair failed and we were unable to recover it.
00:27:26.581 [2024-07-24 18:22:19.371871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.581 [2024-07-24 18:22:19.371881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.581 qpair failed and we were unable to recover it.
00:27:26.581 [2024-07-24 18:22:19.372040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.581 [2024-07-24 18:22:19.372049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.581 qpair failed and we were unable to recover it.
00:27:26.581 [2024-07-24 18:22:19.372218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.581 [2024-07-24 18:22:19.372227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.581 qpair failed and we were unable to recover it.
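Note that the tqpair pointer in the failures changes around here (0x7f783c000b90, then 0x7f7844000b90 and 0x15ddf30, then back), consistent with each connection attempt constructing a fresh qpair object rather than a single qpair looping forever. A small hypothetical triage helper (not part of SPDK; the name and structure are invented for illustration) that tallies failed attempts per tqpair address when fed a console log like this one on stdin:

/* Hypothetical log-triage helper, not SPDK code: read a console log on stdin
 * and count nvme_tcp connect errors per tqpair pointer. */
#include <stdio.h>
#include <string.h>

#define MAX_ADDRS 16

int main(void)
{
    char line[1024];
    unsigned long long addrs[MAX_ADDRS];
    int counts[MAX_ADDRS] = {0};
    int naddrs = 0;

    while (fgets(line, sizeof(line), stdin)) {
        const char *p = strstr(line, "tqpair=0x");
        if (!p)
            continue;                   /* not a qpair connect-error line */
        unsigned long long a;
        if (sscanf(p, "tqpair=0x%llx", &a) != 1)
            continue;
        int i;
        for (i = 0; i < naddrs && addrs[i] != a; i++)
            ;                           /* linear search over seen addresses */
        if (i == naddrs && naddrs < MAX_ADDRS)
            addrs[naddrs++] = a;        /* first time we see this address */
        if (i < MAX_ADDRS)
            counts[i]++;
    }
    for (int i = 0; i < naddrs; i++)
        printf("tqpair=0x%llx: %d failed connect attempts\n",
               addrs[i], counts[i]);
    return 0;
}

Running it as ./tally < console.log would show the attempts concentrated on 0x7f783c000b90 with a handful on the other two addresses.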
00:27:26.581 [2024-07-24 18:22:19.372458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.581 [2024-07-24 18:22:19.372467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.581 qpair failed and we were unable to recover it.
00:27:26.581 [2024-07-24 18:22:19.372724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.581 [2024-07-24 18:22:19.372733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.581 qpair failed and we were unable to recover it.
00:27:26.581 [2024-07-24 18:22:19.372892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.581 [2024-07-24 18:22:19.372902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.581 qpair failed and we were unable to recover it.
00:27:26.581 [2024-07-24 18:22:19.373063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.581 [2024-07-24 18:22:19.373073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.582 qpair failed and we were unable to recover it.
00:27:26.582 [2024-07-24 18:22:19.373322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.582 [2024-07-24 18:22:19.373332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.582 qpair failed and we were unable to recover it.
00:27:26.582 [2024-07-24 18:22:19.373561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.582 [2024-07-24 18:22:19.373571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.582 qpair failed and we were unable to recover it.
00:27:26.582 [2024-07-24 18:22:19.373731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.582 [2024-07-24 18:22:19.373741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.582 qpair failed and we were unable to recover it.
00:27:26.582 [2024-07-24 18:22:19.373943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.582 [2024-07-24 18:22:19.373952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.582 qpair failed and we were unable to recover it.
00:27:26.582 [2024-07-24 18:22:19.374188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.582 [2024-07-24 18:22:19.374198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.582 qpair failed and we were unable to recover it.
00:27:26.582 [2024-07-24 18:22:19.374370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.582 [2024-07-24 18:22:19.374380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.582 qpair failed and we were unable to recover it.
00:27:26.582 [2024-07-24 18:22:19.374532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.582 [2024-07-24 18:22:19.374542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.582 qpair failed and we were unable to recover it. 00:27:26.582 [2024-07-24 18:22:19.374760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.582 [2024-07-24 18:22:19.374769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.582 qpair failed and we were unable to recover it. 00:27:26.582 [2024-07-24 18:22:19.374956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.582 [2024-07-24 18:22:19.374966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.582 qpair failed and we were unable to recover it. 00:27:26.582 [2024-07-24 18:22:19.375113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.582 [2024-07-24 18:22:19.375123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.582 qpair failed and we were unable to recover it. 00:27:26.582 [2024-07-24 18:22:19.375287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.582 [2024-07-24 18:22:19.375297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.582 qpair failed and we were unable to recover it. 00:27:26.582 [2024-07-24 18:22:19.375532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.582 [2024-07-24 18:22:19.375543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.582 qpair failed and we were unable to recover it. 00:27:26.582 [2024-07-24 18:22:19.375709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.582 [2024-07-24 18:22:19.375718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.582 qpair failed and we were unable to recover it. 00:27:26.582 [2024-07-24 18:22:19.375914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.582 [2024-07-24 18:22:19.375924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.582 qpair failed and we were unable to recover it. 00:27:26.582 [2024-07-24 18:22:19.376132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.582 [2024-07-24 18:22:19.376142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.582 qpair failed and we were unable to recover it. 00:27:26.582 [2024-07-24 18:22:19.376342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.582 [2024-07-24 18:22:19.376352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.582 qpair failed and we were unable to recover it. 
00:27:26.582 [2024-07-24 18:22:19.376441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.582 [2024-07-24 18:22:19.376450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.582 qpair failed and we were unable to recover it. 00:27:26.582 [2024-07-24 18:22:19.376627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.582 [2024-07-24 18:22:19.376637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.582 qpair failed and we were unable to recover it. 00:27:26.582 [2024-07-24 18:22:19.376815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.582 [2024-07-24 18:22:19.376825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.582 qpair failed and we were unable to recover it. 00:27:26.582 [2024-07-24 18:22:19.377038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.582 [2024-07-24 18:22:19.377048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.582 qpair failed and we were unable to recover it. 00:27:26.582 [2024-07-24 18:22:19.377201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.582 [2024-07-24 18:22:19.377211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.582 qpair failed and we were unable to recover it. 00:27:26.582 [2024-07-24 18:22:19.377364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.582 [2024-07-24 18:22:19.377373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.582 qpair failed and we were unable to recover it. 00:27:26.582 [2024-07-24 18:22:19.377525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.582 [2024-07-24 18:22:19.377535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.582 qpair failed and we were unable to recover it. 00:27:26.582 [2024-07-24 18:22:19.377713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.582 [2024-07-24 18:22:19.377723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.582 qpair failed and we were unable to recover it. 00:27:26.582 [2024-07-24 18:22:19.377861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.582 [2024-07-24 18:22:19.377871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.582 qpair failed and we were unable to recover it. 00:27:26.582 [2024-07-24 18:22:19.378075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.582 [2024-07-24 18:22:19.378086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.582 qpair failed and we were unable to recover it. 
00:27:26.582 [2024-07-24 18:22:19.378267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.582 [2024-07-24 18:22:19.378277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.582 qpair failed and we were unable to recover it. 00:27:26.582 [2024-07-24 18:22:19.378430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.582 [2024-07-24 18:22:19.378439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.582 qpair failed and we were unable to recover it. 00:27:26.582 [2024-07-24 18:22:19.378611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.582 [2024-07-24 18:22:19.378621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.582 qpair failed and we were unable to recover it. 00:27:26.582 [2024-07-24 18:22:19.378721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.582 [2024-07-24 18:22:19.378731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.582 qpair failed and we were unable to recover it. 00:27:26.582 [2024-07-24 18:22:19.378970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.582 [2024-07-24 18:22:19.378980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.582 qpair failed and we were unable to recover it. 00:27:26.582 [2024-07-24 18:22:19.379153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.582 [2024-07-24 18:22:19.379162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.582 qpair failed and we were unable to recover it. 00:27:26.582 [2024-07-24 18:22:19.379411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.582 [2024-07-24 18:22:19.379421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.582 qpair failed and we were unable to recover it. 00:27:26.582 [2024-07-24 18:22:19.379518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.582 [2024-07-24 18:22:19.379528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.582 qpair failed and we were unable to recover it. 00:27:26.582 [2024-07-24 18:22:19.379669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.582 [2024-07-24 18:22:19.379679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.582 qpair failed and we were unable to recover it. 00:27:26.582 [2024-07-24 18:22:19.379953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.582 [2024-07-24 18:22:19.379963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.582 qpair failed and we were unable to recover it. 
00:27:26.583 [2024-07-24 18:22:19.380219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.583 [2024-07-24 18:22:19.380228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.583 qpair failed and we were unable to recover it. 00:27:26.583 [2024-07-24 18:22:19.380404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.583 [2024-07-24 18:22:19.380413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.583 qpair failed and we were unable to recover it. 00:27:26.583 [2024-07-24 18:22:19.380620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.583 [2024-07-24 18:22:19.380631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.583 qpair failed and we were unable to recover it. 00:27:26.583 [2024-07-24 18:22:19.380854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.583 [2024-07-24 18:22:19.380864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.583 qpair failed and we were unable to recover it. 00:27:26.583 [2024-07-24 18:22:19.381033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.583 [2024-07-24 18:22:19.381042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.583 qpair failed and we were unable to recover it. 00:27:26.583 [2024-07-24 18:22:19.381258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.583 [2024-07-24 18:22:19.381267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.583 qpair failed and we were unable to recover it. 00:27:26.583 [2024-07-24 18:22:19.381494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.583 [2024-07-24 18:22:19.381503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.583 qpair failed and we were unable to recover it. 00:27:26.583 [2024-07-24 18:22:19.381641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.583 [2024-07-24 18:22:19.381651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.583 qpair failed and we were unable to recover it. 00:27:26.583 [2024-07-24 18:22:19.381819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.583 [2024-07-24 18:22:19.381829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.583 qpair failed and we were unable to recover it. 00:27:26.583 [2024-07-24 18:22:19.381927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.583 [2024-07-24 18:22:19.381937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.583 qpair failed and we were unable to recover it. 
00:27:26.583 [2024-07-24 18:22:19.382179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.583 [2024-07-24 18:22:19.382188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.583 qpair failed and we were unable to recover it. 00:27:26.583 [2024-07-24 18:22:19.382392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.583 [2024-07-24 18:22:19.382402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.583 qpair failed and we were unable to recover it. 00:27:26.583 [2024-07-24 18:22:19.382483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.583 [2024-07-24 18:22:19.382496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.583 qpair failed and we were unable to recover it. 00:27:26.583 [2024-07-24 18:22:19.382635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.583 [2024-07-24 18:22:19.382644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.583 qpair failed and we were unable to recover it. 00:27:26.583 [2024-07-24 18:22:19.382809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.583 [2024-07-24 18:22:19.382819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.583 qpair failed and we were unable to recover it. 00:27:26.583 [2024-07-24 18:22:19.382964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.583 [2024-07-24 18:22:19.382973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.583 qpair failed and we were unable to recover it. 00:27:26.583 [2024-07-24 18:22:19.383177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.583 [2024-07-24 18:22:19.383186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.583 qpair failed and we were unable to recover it. 00:27:26.583 [2024-07-24 18:22:19.383275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.583 [2024-07-24 18:22:19.383285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.583 qpair failed and we were unable to recover it. 00:27:26.583 [2024-07-24 18:22:19.383434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.583 [2024-07-24 18:22:19.383443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.583 qpair failed and we were unable to recover it. 00:27:26.583 [2024-07-24 18:22:19.383593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.583 [2024-07-24 18:22:19.383603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.583 qpair failed and we were unable to recover it. 
00:27:26.583 [2024-07-24 18:22:19.383743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.583 [2024-07-24 18:22:19.383753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.583 qpair failed and we were unable to recover it. 00:27:26.583 [2024-07-24 18:22:19.383908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.583 [2024-07-24 18:22:19.383917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.583 qpair failed and we were unable to recover it. 00:27:26.583 [2024-07-24 18:22:19.384170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.583 [2024-07-24 18:22:19.384179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.583 qpair failed and we were unable to recover it. 00:27:26.583 [2024-07-24 18:22:19.384410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.583 [2024-07-24 18:22:19.384419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.583 qpair failed and we were unable to recover it. 00:27:26.583 [2024-07-24 18:22:19.384631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.583 [2024-07-24 18:22:19.384641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.583 qpair failed and we were unable to recover it. 00:27:26.583 [2024-07-24 18:22:19.384847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.583 [2024-07-24 18:22:19.384857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.583 qpair failed and we were unable to recover it. 00:27:26.583 [2024-07-24 18:22:19.385083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.583 [2024-07-24 18:22:19.385093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.583 qpair failed and we were unable to recover it. 00:27:26.583 [2024-07-24 18:22:19.385242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.583 [2024-07-24 18:22:19.385251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.583 qpair failed and we were unable to recover it. 00:27:26.583 [2024-07-24 18:22:19.385487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.583 [2024-07-24 18:22:19.385500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.583 qpair failed and we were unable to recover it. 00:27:26.583 [2024-07-24 18:22:19.385724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.583 [2024-07-24 18:22:19.385736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.583 qpair failed and we were unable to recover it. 
00:27:26.583 [2024-07-24 18:22:19.385901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.584 [2024-07-24 18:22:19.385911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.584 qpair failed and we were unable to recover it. 00:27:26.584 [2024-07-24 18:22:19.386138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.584 [2024-07-24 18:22:19.386148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.584 qpair failed and we were unable to recover it. 00:27:26.584 [2024-07-24 18:22:19.386324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.584 [2024-07-24 18:22:19.386334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.584 qpair failed and we were unable to recover it. 00:27:26.584 [2024-07-24 18:22:19.386431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.584 [2024-07-24 18:22:19.386440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.584 qpair failed and we were unable to recover it. 00:27:26.584 [2024-07-24 18:22:19.386597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.584 [2024-07-24 18:22:19.386607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.584 qpair failed and we were unable to recover it. 00:27:26.584 [2024-07-24 18:22:19.386846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.584 [2024-07-24 18:22:19.386856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.584 qpair failed and we were unable to recover it. 00:27:26.584 [2024-07-24 18:22:19.387018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.584 [2024-07-24 18:22:19.387027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.584 qpair failed and we were unable to recover it. 00:27:26.584 [2024-07-24 18:22:19.387231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.584 [2024-07-24 18:22:19.387240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.584 qpair failed and we were unable to recover it. 00:27:26.584 [2024-07-24 18:22:19.387392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.584 [2024-07-24 18:22:19.387402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.584 qpair failed and we were unable to recover it. 00:27:26.584 [2024-07-24 18:22:19.387575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.584 [2024-07-24 18:22:19.387585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.584 qpair failed and we were unable to recover it. 
00:27:26.584 [2024-07-24 18:22:19.387811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.584 [2024-07-24 18:22:19.387821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.584 qpair failed and we were unable to recover it. 00:27:26.584 [2024-07-24 18:22:19.388048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.584 [2024-07-24 18:22:19.388058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.584 qpair failed and we were unable to recover it. 00:27:26.584 [2024-07-24 18:22:19.388304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.584 [2024-07-24 18:22:19.388313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.584 qpair failed and we were unable to recover it. 00:27:26.584 [2024-07-24 18:22:19.388416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.584 [2024-07-24 18:22:19.388426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.584 qpair failed and we were unable to recover it. 00:27:26.584 [2024-07-24 18:22:19.388606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.584 [2024-07-24 18:22:19.388616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.584 qpair failed and we were unable to recover it. 00:27:26.584 [2024-07-24 18:22:19.388777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.584 [2024-07-24 18:22:19.388787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.584 qpair failed and we were unable to recover it. 00:27:26.584 [2024-07-24 18:22:19.388928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.584 [2024-07-24 18:22:19.388937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.584 qpair failed and we were unable to recover it. 00:27:26.584 [2024-07-24 18:22:19.389118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.584 [2024-07-24 18:22:19.389128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.584 qpair failed and we were unable to recover it. 00:27:26.584 [2024-07-24 18:22:19.389355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.584 [2024-07-24 18:22:19.389365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.584 qpair failed and we were unable to recover it. 00:27:26.584 [2024-07-24 18:22:19.389511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.584 [2024-07-24 18:22:19.389520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.584 qpair failed and we were unable to recover it. 
00:27:26.584 [2024-07-24 18:22:19.389729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.584 [2024-07-24 18:22:19.389738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.584 qpair failed and we were unable to recover it. 00:27:26.584 [2024-07-24 18:22:19.389961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.584 [2024-07-24 18:22:19.389971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.584 qpair failed and we were unable to recover it. 00:27:26.584 [2024-07-24 18:22:19.390200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.584 [2024-07-24 18:22:19.390209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.584 qpair failed and we were unable to recover it. 00:27:26.584 [2024-07-24 18:22:19.390365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.584 [2024-07-24 18:22:19.390375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.584 qpair failed and we were unable to recover it. 00:27:26.584 [2024-07-24 18:22:19.390550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.584 [2024-07-24 18:22:19.390560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.584 qpair failed and we were unable to recover it. 00:27:26.584 [2024-07-24 18:22:19.390815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.584 [2024-07-24 18:22:19.390825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.584 qpair failed and we were unable to recover it. 00:27:26.584 [2024-07-24 18:22:19.390986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.584 [2024-07-24 18:22:19.390995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.584 qpair failed and we were unable to recover it. 00:27:26.584 [2024-07-24 18:22:19.391149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.584 [2024-07-24 18:22:19.391159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.584 qpair failed and we were unable to recover it. 00:27:26.584 [2024-07-24 18:22:19.391294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.584 [2024-07-24 18:22:19.391304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.584 qpair failed and we were unable to recover it. 00:27:26.584 [2024-07-24 18:22:19.391476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.584 [2024-07-24 18:22:19.391486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.584 qpair failed and we were unable to recover it. 
00:27:26.584 [2024-07-24 18:22:19.391643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.585 [2024-07-24 18:22:19.391652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.585 qpair failed and we were unable to recover it. 00:27:26.585 [2024-07-24 18:22:19.391867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.585 [2024-07-24 18:22:19.391877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.585 qpair failed and we were unable to recover it. 00:27:26.585 [2024-07-24 18:22:19.392108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.585 [2024-07-24 18:22:19.392118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.585 qpair failed and we were unable to recover it. 00:27:26.585 [2024-07-24 18:22:19.392333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.585 [2024-07-24 18:22:19.392343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.585 qpair failed and we were unable to recover it. 00:27:26.585 [2024-07-24 18:22:19.392573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.585 [2024-07-24 18:22:19.392583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.585 qpair failed and we were unable to recover it. 00:27:26.585 [2024-07-24 18:22:19.392744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.585 [2024-07-24 18:22:19.392754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.585 qpair failed and we were unable to recover it. 00:27:26.585 [2024-07-24 18:22:19.392966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.585 [2024-07-24 18:22:19.392976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.585 qpair failed and we were unable to recover it. 00:27:26.585 [2024-07-24 18:22:19.393127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.585 [2024-07-24 18:22:19.393136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.585 qpair failed and we were unable to recover it. 00:27:26.585 [2024-07-24 18:22:19.393380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.585 [2024-07-24 18:22:19.393390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.585 qpair failed and we were unable to recover it. 00:27:26.585 [2024-07-24 18:22:19.393587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.585 [2024-07-24 18:22:19.393599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.585 qpair failed and we were unable to recover it. 
00:27:26.585 [2024-07-24 18:22:19.393785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.585 [2024-07-24 18:22:19.393794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.585 qpair failed and we were unable to recover it. 00:27:26.585 [2024-07-24 18:22:19.393979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.585 [2024-07-24 18:22:19.393989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.585 qpair failed and we were unable to recover it. 00:27:26.585 [2024-07-24 18:22:19.394140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.585 [2024-07-24 18:22:19.394149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.585 qpair failed and we were unable to recover it. 00:27:26.585 [2024-07-24 18:22:19.394314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.585 [2024-07-24 18:22:19.394323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.585 qpair failed and we were unable to recover it. 00:27:26.585 [2024-07-24 18:22:19.394499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.585 [2024-07-24 18:22:19.394508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.585 qpair failed and we were unable to recover it. 00:27:26.585 [2024-07-24 18:22:19.394662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.585 [2024-07-24 18:22:19.394671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.585 qpair failed and we were unable to recover it. 00:27:26.585 [2024-07-24 18:22:19.394874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.585 [2024-07-24 18:22:19.394883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.585 qpair failed and we were unable to recover it. 00:27:26.585 [2024-07-24 18:22:19.395099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.585 [2024-07-24 18:22:19.395109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.585 qpair failed and we were unable to recover it. 00:27:26.585 [2024-07-24 18:22:19.395182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.585 [2024-07-24 18:22:19.395192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.585 qpair failed and we were unable to recover it. 00:27:26.585 [2024-07-24 18:22:19.395415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.585 [2024-07-24 18:22:19.395424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.585 qpair failed and we were unable to recover it. 
00:27:26.585 [2024-07-24 18:22:19.395644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.585 [2024-07-24 18:22:19.395654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.585 qpair failed and we were unable to recover it. 00:27:26.585 [2024-07-24 18:22:19.395886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.585 [2024-07-24 18:22:19.395896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.585 qpair failed and we were unable to recover it. 00:27:26.585 [2024-07-24 18:22:19.396157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.585 [2024-07-24 18:22:19.396167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.585 qpair failed and we were unable to recover it. 00:27:26.585 [2024-07-24 18:22:19.396399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.585 [2024-07-24 18:22:19.396408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.585 qpair failed and we were unable to recover it. 00:27:26.585 [2024-07-24 18:22:19.396615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.585 [2024-07-24 18:22:19.396625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.585 qpair failed and we were unable to recover it. 00:27:26.585 [2024-07-24 18:22:19.396776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.585 [2024-07-24 18:22:19.396785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.585 qpair failed and we were unable to recover it. 00:27:26.585 [2024-07-24 18:22:19.397002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.585 [2024-07-24 18:22:19.397012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.585 qpair failed and we were unable to recover it. 00:27:26.585 [2024-07-24 18:22:19.397247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.585 [2024-07-24 18:22:19.397256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.585 qpair failed and we were unable to recover it. 00:27:26.585 [2024-07-24 18:22:19.397460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.585 [2024-07-24 18:22:19.397471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.585 qpair failed and we were unable to recover it. 00:27:26.585 [2024-07-24 18:22:19.397710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.586 [2024-07-24 18:22:19.397722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.586 qpair failed and we were unable to recover it. 
00:27:26.586 [2024-07-24 18:22:19.397951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.586 [2024-07-24 18:22:19.397962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.586 qpair failed and we were unable to recover it. 00:27:26.586 [2024-07-24 18:22:19.398121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.586 [2024-07-24 18:22:19.398131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.586 qpair failed and we were unable to recover it. 00:27:26.586 [2024-07-24 18:22:19.398389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.586 [2024-07-24 18:22:19.398400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.586 qpair failed and we were unable to recover it. 00:27:26.586 [2024-07-24 18:22:19.398630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.586 [2024-07-24 18:22:19.398642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.586 qpair failed and we were unable to recover it. 00:27:26.586 [2024-07-24 18:22:19.398745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.586 [2024-07-24 18:22:19.398755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.586 qpair failed and we were unable to recover it. 00:27:26.586 [2024-07-24 18:22:19.398904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.586 [2024-07-24 18:22:19.398915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.586 qpair failed and we were unable to recover it. 00:27:26.586 [2024-07-24 18:22:19.399150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.586 [2024-07-24 18:22:19.399162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.586 qpair failed and we were unable to recover it. 00:27:26.586 [2024-07-24 18:22:19.399393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.586 [2024-07-24 18:22:19.399404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.586 qpair failed and we were unable to recover it. 00:27:26.586 [2024-07-24 18:22:19.399591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.586 [2024-07-24 18:22:19.399603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.586 qpair failed and we were unable to recover it. 00:27:26.586 [2024-07-24 18:22:19.399810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.586 [2024-07-24 18:22:19.399821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.586 qpair failed and we were unable to recover it. 
00:27:26.586 [2024-07-24 18:22:19.400048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.586 [2024-07-24 18:22:19.400059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.586 qpair failed and we were unable to recover it. 00:27:26.586 [2024-07-24 18:22:19.400289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.586 [2024-07-24 18:22:19.400300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.586 qpair failed and we were unable to recover it. 00:27:26.586 [2024-07-24 18:22:19.400557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.586 [2024-07-24 18:22:19.400570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.586 qpair failed and we were unable to recover it. 00:27:26.586 [2024-07-24 18:22:19.400776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.586 [2024-07-24 18:22:19.400788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.586 qpair failed and we were unable to recover it. 00:27:26.586 [2024-07-24 18:22:19.400932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.586 [2024-07-24 18:22:19.400943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.586 qpair failed and we were unable to recover it. 00:27:26.586 [2024-07-24 18:22:19.401162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.586 [2024-07-24 18:22:19.401174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.586 qpair failed and we were unable to recover it. 00:27:26.586 [2024-07-24 18:22:19.401409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.586 [2024-07-24 18:22:19.401420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.586 qpair failed and we were unable to recover it. 00:27:26.586 [2024-07-24 18:22:19.401681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.586 [2024-07-24 18:22:19.401693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.586 qpair failed and we were unable to recover it. 00:27:26.586 [2024-07-24 18:22:19.401850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.586 [2024-07-24 18:22:19.401861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.586 qpair failed and we were unable to recover it. 00:27:26.586 [2024-07-24 18:22:19.402112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.586 [2024-07-24 18:22:19.402126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.586 qpair failed and we were unable to recover it. 
00:27:26.586 [2024-07-24 18:22:19.402306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.586 [2024-07-24 18:22:19.402317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.586 qpair failed and we were unable to recover it. 00:27:26.586 [2024-07-24 18:22:19.402404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.586 [2024-07-24 18:22:19.402415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.586 qpair failed and we were unable to recover it. 00:27:26.586 [2024-07-24 18:22:19.402557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.586 [2024-07-24 18:22:19.402568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.586 qpair failed and we were unable to recover it. 00:27:26.586 [2024-07-24 18:22:19.402809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.586 [2024-07-24 18:22:19.402821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.586 qpair failed and we were unable to recover it. 00:27:26.586 [2024-07-24 18:22:19.403054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.586 [2024-07-24 18:22:19.403065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.586 qpair failed and we were unable to recover it. 00:27:26.586 [2024-07-24 18:22:19.403323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.586 [2024-07-24 18:22:19.403335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.586 qpair failed and we were unable to recover it. 00:27:26.586 [2024-07-24 18:22:19.403509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.586 [2024-07-24 18:22:19.403520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.586 qpair failed and we were unable to recover it. 00:27:26.586 [2024-07-24 18:22:19.403725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.586 [2024-07-24 18:22:19.403735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.586 qpair failed and we were unable to recover it. 00:27:26.586 [2024-07-24 18:22:19.403916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.586 [2024-07-24 18:22:19.403927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.586 qpair failed and we were unable to recover it. 00:27:26.586 [2024-07-24 18:22:19.404076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.586 [2024-07-24 18:22:19.404088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.586 qpair failed and we were unable to recover it. 
00:27:26.586 [2024-07-24 18:22:19.404299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.586 [2024-07-24 18:22:19.404310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.586 qpair failed and we were unable to recover it.
[... the preceding three-line pattern (connect() failed, errno = 111 / sock connection error / qpair failed and we were unable to recover it) repeats for tqpair=0x7f783c000b90 with timestamps advancing from 18:22:19.404506 through 18:22:19.420481 ...]
00:27:26.589 [2024-07-24 18:22:19.420695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.589 [2024-07-24 18:22:19.420734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7834000b90 with addr=10.0.0.2, port=4420
00:27:26.589 qpair failed and we were unable to recover it.
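Aside on the error above: errno = 111 on Linux is ECONNREFUSED, meaning nothing was accepting TCP connections at 10.0.0.2:4420 when these qpairs dialed it, so each connect() attempt was refused outright. A minimal sketch, independent of SPDK and using a hypothetical unused local port, showing the same errno surface in Python:

    import errno
    import socket

    # Dial a port with no listener (4420 is assumed unused locally).
    # On Linux the refused TCP handshake raises ECONNREFUSED == 111,
    # the same value posix_sock_create reports in the log above.
    try:
        socket.create_connection(("127.0.0.1", 4420), timeout=1)
    except OSError as exc:
        print("connect() failed, errno =", exc.errno)   # 111 when refused
        print(exc.errno == errno.ECONNREFUSED)          # True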
[... the same pattern continues for tqpair=0x7f7834000b90 from 18:22:19.420911 through 18:22:19.421876, once for tqpair=0x7f783c000b90 at 18:22:19.422147, and for tqpair=0x15ddf30 from 18:22:19.422445 through 18:22:19.424132; every connect() fails with errno = 111 and no qpair recovers ...]
00:27:26.589 [2024-07-24 18:22:19.424390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.589 [2024-07-24 18:22:19.424401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.589 qpair failed and we were unable to recover it.
[... the same three-line pattern repeats for tqpair=0x7f783c000b90 with timestamps advancing from 18:22:19.424569 through 18:22:19.436656; every attempt fails with errno = 111 and the qpair cannot be recovered ...]
00:27:26.591 [2024-07-24 18:22:19.436887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.591 [2024-07-24 18:22:19.436897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.591 qpair failed and we were unable to recover it.
00:27:26.591 [2024-07-24 18:22:19.437060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.591 [2024-07-24 18:22:19.437069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.591 qpair failed and we were unable to recover it.
00:27:26.591 [2024-07-24 18:22:19.437085] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:27:26.591 [2024-07-24 18:22:19.437112] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:27:26.591 [2024-07-24 18:22:19.437119] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:27:26.591 [2024-07-24 18:22:19.437125] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:27:26.591 [2024-07-24 18:22:19.437130] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:27:26.591 [2024-07-24 18:22:19.437316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.591 [2024-07-24 18:22:19.437326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.591 qpair failed and we were unable to recover it.
00:27:26.591 [2024-07-24 18:22:19.437454] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:27:26.591 [2024-07-24 18:22:19.437590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.591 [2024-07-24 18:22:19.437600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.592 qpair failed and we were unable to recover it.
00:27:26.592 [2024-07-24 18:22:19.437543] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:27:26.592 [2024-07-24 18:22:19.437668] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:27:26.592 [2024-07-24 18:22:19.437669] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:27:26.592 [2024-07-24 18:22:19.437805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.592 [2024-07-24 18:22:19.437816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.592 qpair failed and we were unable to recover it.
00:27:26.592 [2024-07-24 18:22:19.437961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.592 [2024-07-24 18:22:19.437971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.592 qpair failed and we were unable to recover it.
00:27:26.592 [2024-07-24 18:22:19.438226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.592 [2024-07-24 18:22:19.438236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.592 qpair failed and we were unable to recover it.
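Aside on the app_setup_trace NOTICEs above: they give two ways to grab trace data from the running nvmf app, either run 'spdk_trace -s nvmf -i 0' while it is up, or preserve the shared-memory file it names. A minimal sketch of the second option in Python, assuming the file exists at the printed path and that /tmp is an acceptable (arbitrary) destination:

    import shutil
    from pathlib import Path

    # Shared-memory trace file named by the NOTICE above; present only
    # while (or shortly after) nvmf app instance 0 has been running.
    src = Path("/dev/shm/nvmf_trace.0")
    if src.exists():
        # Copy it somewhere persistent for offline analysis/debug.
        shutil.copy(src, "/tmp/nvmf_trace.0")
        print("saved trace snapshot to /tmp/nvmf_trace.0")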
00:27:26.592 [2024-07-24 18:22:19.444820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.592 [2024-07-24 18:22:19.444829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.592 qpair failed and we were unable to recover it.
00:27:26.592 [2024-07-24 18:22:19.445071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.592 [2024-07-24 18:22:19.445081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.592 qpair failed and we were unable to recover it.
00:27:26.592 [2024-07-24 18:22:19.445182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.592 [2024-07-24 18:22:19.445192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.592 qpair failed and we were unable to recover it.
00:27:26.592 [2024-07-24 18:22:19.445296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.593 [2024-07-24 18:22:19.445305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.593 qpair failed and we were unable to recover it.
00:27:26.593 [2024-07-24 18:22:19.445534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.593 [2024-07-24 18:22:19.445544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.593 qpair failed and we were unable to recover it.
00:27:26.593 [2024-07-24 18:22:19.445772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.593 [2024-07-24 18:22:19.445784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.593 qpair failed and we were unable to recover it.
00:27:26.593 [2024-07-24 18:22:19.446043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.593 [2024-07-24 18:22:19.446065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:26.593 qpair failed and we were unable to recover it.
00:27:26.593 [2024-07-24 18:22:19.446299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.593 [2024-07-24 18:22:19.446314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:26.593 qpair failed and we were unable to recover it.
00:27:26.593 [2024-07-24 18:22:19.446566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.593 [2024-07-24 18:22:19.446583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:26.593 qpair failed and we were unable to recover it.
00:27:26.593 [2024-07-24 18:22:19.446833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.593 [2024-07-24 18:22:19.446848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:26.593 qpair failed and we were unable to recover it.
00:27:26.593 [2024-07-24 18:22:19.451762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.593 [2024-07-24 18:22:19.451772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.593 qpair failed and we were unable to recover it.
00:27:26.593 [2024-07-24 18:22:19.451981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.593 [2024-07-24 18:22:19.451991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.593 qpair failed and we were unable to recover it.
00:27:26.593 [2024-07-24 18:22:19.452280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.593 [2024-07-24 18:22:19.452290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.593 qpair failed and we were unable to recover it.
00:27:26.593 A controller has encountered a failure and is being reset.
00:27:26.593 [2024-07-24 18:22:19.452443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.593 [2024-07-24 18:22:19.452471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7834000b90 with addr=10.0.0.2, port=4420
00:27:26.593 qpair failed and we were unable to recover it.
00:27:26.593 [2024-07-24 18:22:19.452753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.593 [2024-07-24 18:22:19.452772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:26.593 qpair failed and we were unable to recover it.
00:27:26.593 [2024-07-24 18:22:19.453047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.593 [2024-07-24 18:22:19.453061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:26.593 qpair failed and we were unable to recover it.
00:27:26.593 [2024-07-24 18:22:19.453307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.593 [2024-07-24 18:22:19.453323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:26.593 qpair failed and we were unable to recover it.
00:27:26.593 [2024-07-24 18:22:19.453476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.593 [2024-07-24 18:22:19.453495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:26.593 qpair failed and we were unable to recover it.
00:27:26.593 [2024-07-24 18:22:19.453708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.593 [2024-07-24 18:22:19.453723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:26.593 qpair failed and we were unable to recover it.
00:27:26.593 [2024-07-24 18:22:19.453981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.593 [2024-07-24 18:22:19.453997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:26.593 qpair failed and we were unable to recover it.
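Every failed attempt in this stretch reports errno = 111, which on Linux is ECONNREFUSED: the TCP connection to 10.0.0.2 on port 4420 (the IANA-assigned NVMe/TCP port) is being refused, typically because no listener is accepting on that address yet, so the host side keeps tearing down and re-dialing qpairs, a pattern that culminates in the controller reset notice above. A quick sanity check from a shell, as a hedged sketch (the ss filter assumes iproute2 is installed, and the listener check belongs on the target node):

    # confirm what errno 111 means on this system
    python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
    # check whether anything is actually listening on the NVMe/TCP port
    ss -ltn 'sport = :4420'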
00:27:26.597 [2024-07-24 18:22:19.481051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.597 [2024-07-24 18:22:19.481062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.597 qpair failed and we were unable to recover it. 00:27:26.597 [2024-07-24 18:22:19.481291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.597 [2024-07-24 18:22:19.481302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.597 qpair failed and we were unable to recover it. 00:27:26.597 [2024-07-24 18:22:19.481526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.597 [2024-07-24 18:22:19.481537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.597 qpair failed and we were unable to recover it. 00:27:26.597 [2024-07-24 18:22:19.481762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.597 [2024-07-24 18:22:19.481772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.597 qpair failed and we were unable to recover it. 00:27:26.597 [2024-07-24 18:22:19.481946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.597 [2024-07-24 18:22:19.481956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.597 qpair failed and we were unable to recover it. 00:27:26.597 [2024-07-24 18:22:19.482188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.597 [2024-07-24 18:22:19.482199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.597 qpair failed and we were unable to recover it. 00:27:26.597 [2024-07-24 18:22:19.482360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.597 [2024-07-24 18:22:19.482370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.597 qpair failed and we were unable to recover it. 00:27:26.597 [2024-07-24 18:22:19.482584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.597 [2024-07-24 18:22:19.482595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.597 qpair failed and we were unable to recover it. 00:27:26.597 [2024-07-24 18:22:19.482859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.597 [2024-07-24 18:22:19.482870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.597 qpair failed and we were unable to recover it. 00:27:26.597 [2024-07-24 18:22:19.483114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.597 [2024-07-24 18:22:19.483125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.597 qpair failed and we were unable to recover it. 
00:27:26.597 [2024-07-24 18:22:19.483339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.597 [2024-07-24 18:22:19.483351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.597 qpair failed and we were unable to recover it. 00:27:26.597 [2024-07-24 18:22:19.483629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.598 [2024-07-24 18:22:19.483640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.598 qpair failed and we were unable to recover it. 00:27:26.598 [2024-07-24 18:22:19.483798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.598 [2024-07-24 18:22:19.483810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.598 qpair failed and we were unable to recover it. 00:27:26.598 [2024-07-24 18:22:19.484037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.598 [2024-07-24 18:22:19.484048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.598 qpair failed and we were unable to recover it. 00:27:26.598 [2024-07-24 18:22:19.484206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.598 [2024-07-24 18:22:19.484218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.598 qpair failed and we were unable to recover it. 00:27:26.598 [2024-07-24 18:22:19.484357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.598 [2024-07-24 18:22:19.484368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.598 qpair failed and we were unable to recover it. 00:27:26.598 [2024-07-24 18:22:19.484523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.598 [2024-07-24 18:22:19.484533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.598 qpair failed and we were unable to recover it. 00:27:26.598 [2024-07-24 18:22:19.484620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.598 [2024-07-24 18:22:19.484630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.598 qpair failed and we were unable to recover it. 00:27:26.598 [2024-07-24 18:22:19.484841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.598 [2024-07-24 18:22:19.484852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.598 qpair failed and we were unable to recover it. 00:27:26.598 [2024-07-24 18:22:19.485106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.598 [2024-07-24 18:22:19.485117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.598 qpair failed and we were unable to recover it. 
00:27:26.598 [2024-07-24 18:22:19.485277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.598 [2024-07-24 18:22:19.485287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.598 qpair failed and we were unable to recover it. 00:27:26.598 [2024-07-24 18:22:19.485395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.598 [2024-07-24 18:22:19.485405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.598 qpair failed and we were unable to recover it. 00:27:26.598 [2024-07-24 18:22:19.485629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.598 [2024-07-24 18:22:19.485640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.598 qpair failed and we were unable to recover it. 00:27:26.598 [2024-07-24 18:22:19.485738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.598 [2024-07-24 18:22:19.485748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.598 qpair failed and we were unable to recover it. 00:27:26.598 [2024-07-24 18:22:19.485904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.598 [2024-07-24 18:22:19.485914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.598 qpair failed and we were unable to recover it. 00:27:26.598 [2024-07-24 18:22:19.486145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.598 [2024-07-24 18:22:19.486156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.598 qpair failed and we were unable to recover it. 00:27:26.598 [2024-07-24 18:22:19.486251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.598 [2024-07-24 18:22:19.486261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.598 qpair failed and we were unable to recover it. 00:27:26.598 [2024-07-24 18:22:19.486424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.598 [2024-07-24 18:22:19.486435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.598 qpair failed and we were unable to recover it. 00:27:26.598 [2024-07-24 18:22:19.486523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.598 [2024-07-24 18:22:19.486534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.598 qpair failed and we were unable to recover it. 00:27:26.598 [2024-07-24 18:22:19.486675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.598 [2024-07-24 18:22:19.486686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.598 qpair failed and we were unable to recover it. 
00:27:26.598 [2024-07-24 18:22:19.486829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.598 [2024-07-24 18:22:19.486839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.598 qpair failed and we were unable to recover it. 00:27:26.598 [2024-07-24 18:22:19.486976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.598 [2024-07-24 18:22:19.486986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.598 qpair failed and we were unable to recover it. 00:27:26.598 [2024-07-24 18:22:19.487207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.598 [2024-07-24 18:22:19.487218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.598 qpair failed and we were unable to recover it. 00:27:26.598 [2024-07-24 18:22:19.487432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.598 [2024-07-24 18:22:19.487442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.598 qpair failed and we were unable to recover it. 00:27:26.598 [2024-07-24 18:22:19.487598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.598 [2024-07-24 18:22:19.487608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.598 qpair failed and we were unable to recover it. 00:27:26.598 [2024-07-24 18:22:19.487848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.598 [2024-07-24 18:22:19.487859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.598 qpair failed and we were unable to recover it. 00:27:26.598 [2024-07-24 18:22:19.488083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.598 [2024-07-24 18:22:19.488097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.598 qpair failed and we were unable to recover it. 00:27:26.598 [2024-07-24 18:22:19.488347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.598 [2024-07-24 18:22:19.488357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.598 qpair failed and we were unable to recover it. 00:27:26.598 [2024-07-24 18:22:19.488601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.598 [2024-07-24 18:22:19.488612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.598 qpair failed and we were unable to recover it. 00:27:26.598 [2024-07-24 18:22:19.488770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.598 [2024-07-24 18:22:19.488780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.598 qpair failed and we were unable to recover it. 
00:27:26.598 [2024-07-24 18:22:19.489023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.598 [2024-07-24 18:22:19.489033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.598 qpair failed and we were unable to recover it. 00:27:26.598 [2024-07-24 18:22:19.489173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.598 [2024-07-24 18:22:19.489184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.598 qpair failed and we were unable to recover it. 00:27:26.598 [2024-07-24 18:22:19.489416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.598 [2024-07-24 18:22:19.489426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.598 qpair failed and we were unable to recover it. 00:27:26.598 [2024-07-24 18:22:19.489581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.598 [2024-07-24 18:22:19.489591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.598 qpair failed and we were unable to recover it. 00:27:26.598 [2024-07-24 18:22:19.489807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.598 [2024-07-24 18:22:19.489818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.598 qpair failed and we were unable to recover it. 00:27:26.598 [2024-07-24 18:22:19.489974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.598 [2024-07-24 18:22:19.489984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.598 qpair failed and we were unable to recover it. 00:27:26.598 [2024-07-24 18:22:19.490141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.598 [2024-07-24 18:22:19.490152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.598 qpair failed and we were unable to recover it. 00:27:26.598 [2024-07-24 18:22:19.490355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.598 [2024-07-24 18:22:19.490365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.598 qpair failed and we were unable to recover it. 00:27:26.598 [2024-07-24 18:22:19.490509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.598 [2024-07-24 18:22:19.490519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.598 qpair failed and we were unable to recover it. 00:27:26.598 [2024-07-24 18:22:19.490667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.599 [2024-07-24 18:22:19.490677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.599 qpair failed and we were unable to recover it. 
00:27:26.599 [2024-07-24 18:22:19.490900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.599 [2024-07-24 18:22:19.490911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.599 qpair failed and we were unable to recover it. 00:27:26.599 [2024-07-24 18:22:19.491144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.599 [2024-07-24 18:22:19.491155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.599 qpair failed and we were unable to recover it. 00:27:26.599 [2024-07-24 18:22:19.491362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.599 [2024-07-24 18:22:19.491372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.599 qpair failed and we were unable to recover it. 00:27:26.599 [2024-07-24 18:22:19.491581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.599 [2024-07-24 18:22:19.491591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.599 qpair failed and we were unable to recover it. 00:27:26.599 [2024-07-24 18:22:19.491821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.599 [2024-07-24 18:22:19.491831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.599 qpair failed and we were unable to recover it. 00:27:26.599 [2024-07-24 18:22:19.492032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.599 [2024-07-24 18:22:19.492042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.599 qpair failed and we were unable to recover it. 00:27:26.599 [2024-07-24 18:22:19.492269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.599 [2024-07-24 18:22:19.492280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.599 qpair failed and we were unable to recover it. 00:27:26.599 [2024-07-24 18:22:19.492510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.599 [2024-07-24 18:22:19.492520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.599 qpair failed and we were unable to recover it. 00:27:26.599 [2024-07-24 18:22:19.492673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.599 [2024-07-24 18:22:19.492683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.599 qpair failed and we were unable to recover it. 00:27:26.599 [2024-07-24 18:22:19.492904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.599 [2024-07-24 18:22:19.492915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.599 qpair failed and we were unable to recover it. 
00:27:26.599 [2024-07-24 18:22:19.493070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.599 [2024-07-24 18:22:19.493081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.599 qpair failed and we were unable to recover it. 00:27:26.599 [2024-07-24 18:22:19.493220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.599 [2024-07-24 18:22:19.493230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.599 qpair failed and we were unable to recover it. 00:27:26.599 [2024-07-24 18:22:19.493456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.599 [2024-07-24 18:22:19.493466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.599 qpair failed and we were unable to recover it. 00:27:26.599 [2024-07-24 18:22:19.493572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.599 [2024-07-24 18:22:19.493582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.599 qpair failed and we were unable to recover it. 00:27:26.599 [2024-07-24 18:22:19.493786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.599 [2024-07-24 18:22:19.493796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.599 qpair failed and we were unable to recover it. 00:27:26.599 [2024-07-24 18:22:19.494024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.599 [2024-07-24 18:22:19.494033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.599 qpair failed and we were unable to recover it. 00:27:26.599 [2024-07-24 18:22:19.494241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.599 [2024-07-24 18:22:19.494251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.599 qpair failed and we were unable to recover it. 00:27:26.599 [2024-07-24 18:22:19.494345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.599 [2024-07-24 18:22:19.494355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.599 qpair failed and we were unable to recover it. 00:27:26.599 [2024-07-24 18:22:19.494522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.599 [2024-07-24 18:22:19.494532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.599 qpair failed and we were unable to recover it. 00:27:26.599 [2024-07-24 18:22:19.494747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.599 [2024-07-24 18:22:19.494757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.599 qpair failed and we were unable to recover it. 
00:27:26.599 [2024-07-24 18:22:19.494906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.599 [2024-07-24 18:22:19.494916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.599 qpair failed and we were unable to recover it. 00:27:26.599 [2024-07-24 18:22:19.495148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.599 [2024-07-24 18:22:19.495157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.599 qpair failed and we were unable to recover it. 00:27:26.599 [2024-07-24 18:22:19.495382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.599 [2024-07-24 18:22:19.495392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.599 qpair failed and we were unable to recover it. 00:27:26.599 [2024-07-24 18:22:19.495595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.599 [2024-07-24 18:22:19.495605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.599 qpair failed and we were unable to recover it. 00:27:26.599 [2024-07-24 18:22:19.495837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.599 [2024-07-24 18:22:19.495848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.599 qpair failed and we were unable to recover it. 00:27:26.599 [2024-07-24 18:22:19.496005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.599 [2024-07-24 18:22:19.496015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.599 qpair failed and we were unable to recover it. 00:27:26.599 [2024-07-24 18:22:19.496119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.599 [2024-07-24 18:22:19.496130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.599 qpair failed and we were unable to recover it. 00:27:26.599 [2024-07-24 18:22:19.496350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.599 [2024-07-24 18:22:19.496360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.599 qpair failed and we were unable to recover it. 00:27:26.599 [2024-07-24 18:22:19.496535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.599 [2024-07-24 18:22:19.496545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.599 qpair failed and we were unable to recover it. 00:27:26.599 [2024-07-24 18:22:19.496757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.599 [2024-07-24 18:22:19.496767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.599 qpair failed and we were unable to recover it. 
00:27:26.599 [2024-07-24 18:22:19.496914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.599 [2024-07-24 18:22:19.496924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.600 qpair failed and we were unable to recover it. 00:27:26.600 [2024-07-24 18:22:19.497126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.600 [2024-07-24 18:22:19.497136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.600 qpair failed and we were unable to recover it. 00:27:26.600 [2024-07-24 18:22:19.497292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.600 [2024-07-24 18:22:19.497302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.600 qpair failed and we were unable to recover it. 00:27:26.600 [2024-07-24 18:22:19.497456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.600 [2024-07-24 18:22:19.497466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.600 qpair failed and we were unable to recover it. 00:27:26.600 [2024-07-24 18:22:19.497642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.600 [2024-07-24 18:22:19.497652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.600 qpair failed and we were unable to recover it. 00:27:26.600 [2024-07-24 18:22:19.497792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.600 [2024-07-24 18:22:19.497802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.600 qpair failed and we were unable to recover it. 00:27:26.600 [2024-07-24 18:22:19.497985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.600 [2024-07-24 18:22:19.497995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.600 qpair failed and we were unable to recover it. 00:27:26.600 [2024-07-24 18:22:19.498214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.600 [2024-07-24 18:22:19.498224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.600 qpair failed and we were unable to recover it. 00:27:26.600 [2024-07-24 18:22:19.498294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.600 [2024-07-24 18:22:19.498304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.600 qpair failed and we were unable to recover it. 00:27:26.600 [2024-07-24 18:22:19.498536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.600 [2024-07-24 18:22:19.498546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.600 qpair failed and we were unable to recover it. 
00:27:26.600 [2024-07-24 18:22:19.498798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.600 [2024-07-24 18:22:19.498808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.600 qpair failed and we were unable to recover it. 00:27:26.600 [2024-07-24 18:22:19.499039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.600 [2024-07-24 18:22:19.499049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.600 qpair failed and we were unable to recover it. 00:27:26.600 [2024-07-24 18:22:19.499281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.600 [2024-07-24 18:22:19.499290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.600 qpair failed and we were unable to recover it. 00:27:26.600 [2024-07-24 18:22:19.499542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.600 [2024-07-24 18:22:19.499552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.600 qpair failed and we were unable to recover it. 00:27:26.600 [2024-07-24 18:22:19.499786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.600 [2024-07-24 18:22:19.499796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.600 qpair failed and we were unable to recover it. 00:27:26.600 [2024-07-24 18:22:19.500053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.600 [2024-07-24 18:22:19.500063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.600 qpair failed and we were unable to recover it. 00:27:26.600 [2024-07-24 18:22:19.500216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.600 [2024-07-24 18:22:19.500226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.600 qpair failed and we were unable to recover it. 00:27:26.600 [2024-07-24 18:22:19.500376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.600 [2024-07-24 18:22:19.500386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.600 qpair failed and we were unable to recover it. 00:27:26.600 [2024-07-24 18:22:19.500601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.600 [2024-07-24 18:22:19.500611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.600 qpair failed and we were unable to recover it. 00:27:26.600 [2024-07-24 18:22:19.500788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.600 [2024-07-24 18:22:19.500798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.600 qpair failed and we were unable to recover it. 
00:27:26.600 [2024-07-24 18:22:19.501041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.600 [2024-07-24 18:22:19.501051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.600 qpair failed and we were unable to recover it. 00:27:26.600 [2024-07-24 18:22:19.501151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.600 [2024-07-24 18:22:19.501161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.600 qpair failed and we were unable to recover it. 00:27:26.600 [2024-07-24 18:22:19.501418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.600 [2024-07-24 18:22:19.501428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.600 qpair failed and we were unable to recover it. 00:27:26.600 [2024-07-24 18:22:19.501646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.600 [2024-07-24 18:22:19.501656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.600 qpair failed and we were unable to recover it. 00:27:26.600 [2024-07-24 18:22:19.501905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.600 [2024-07-24 18:22:19.501915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.600 qpair failed and we were unable to recover it. 00:27:26.600 [2024-07-24 18:22:19.502065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.600 [2024-07-24 18:22:19.502076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.600 qpair failed and we were unable to recover it. 00:27:26.600 [2024-07-24 18:22:19.502281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.600 [2024-07-24 18:22:19.502291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.600 qpair failed and we were unable to recover it. 00:27:26.600 [2024-07-24 18:22:19.502529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.600 [2024-07-24 18:22:19.502539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.600 qpair failed and we were unable to recover it. 00:27:26.600 [2024-07-24 18:22:19.502745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.600 [2024-07-24 18:22:19.502755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.600 qpair failed and we were unable to recover it. 00:27:26.600 [2024-07-24 18:22:19.502957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.600 [2024-07-24 18:22:19.502967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.600 qpair failed and we were unable to recover it. 
00:27:26.600 [2024-07-24 18:22:19.503204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.600 [2024-07-24 18:22:19.503214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.600 qpair failed and we were unable to recover it. 00:27:26.600 [2024-07-24 18:22:19.503402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.600 [2024-07-24 18:22:19.503412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.600 qpair failed and we were unable to recover it. 00:27:26.600 [2024-07-24 18:22:19.503641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.600 [2024-07-24 18:22:19.503651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.600 qpair failed and we were unable to recover it. 00:27:26.600 [2024-07-24 18:22:19.503874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.600 [2024-07-24 18:22:19.503884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.600 qpair failed and we were unable to recover it. 00:27:26.600 [2024-07-24 18:22:19.504087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.600 [2024-07-24 18:22:19.504097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.600 qpair failed and we were unable to recover it. 00:27:26.600 [2024-07-24 18:22:19.504312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.600 [2024-07-24 18:22:19.504322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.600 qpair failed and we were unable to recover it. 00:27:26.600 [2024-07-24 18:22:19.504471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.600 [2024-07-24 18:22:19.504481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.600 qpair failed and we were unable to recover it. 00:27:26.600 [2024-07-24 18:22:19.504709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.600 [2024-07-24 18:22:19.504720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.600 qpair failed and we were unable to recover it. 00:27:26.600 [2024-07-24 18:22:19.504895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.601 [2024-07-24 18:22:19.504905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.601 qpair failed and we were unable to recover it. 00:27:26.601 [2024-07-24 18:22:19.505063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.601 [2024-07-24 18:22:19.505072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.601 qpair failed and we were unable to recover it. 
00:27:26.601 [2024-07-24 18:22:19.505278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.601 [2024-07-24 18:22:19.505288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.601 qpair failed and we were unable to recover it. 00:27:26.601 [2024-07-24 18:22:19.505470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.601 [2024-07-24 18:22:19.505480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.601 qpair failed and we were unable to recover it. 00:27:26.601 [2024-07-24 18:22:19.505626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.601 [2024-07-24 18:22:19.505636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.601 qpair failed and we were unable to recover it. 00:27:26.601 [2024-07-24 18:22:19.505791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.601 [2024-07-24 18:22:19.505801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.601 qpair failed and we were unable to recover it. 00:27:26.601 [2024-07-24 18:22:19.506031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.601 [2024-07-24 18:22:19.506041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.601 qpair failed and we were unable to recover it. 00:27:26.601 [2024-07-24 18:22:19.506247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.601 [2024-07-24 18:22:19.506257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.601 qpair failed and we were unable to recover it. 00:27:26.601 [2024-07-24 18:22:19.506487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.601 [2024-07-24 18:22:19.506502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.601 qpair failed and we were unable to recover it. 00:27:26.601 [2024-07-24 18:22:19.506658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.601 [2024-07-24 18:22:19.506668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.601 qpair failed and we were unable to recover it. 00:27:26.601 [2024-07-24 18:22:19.506879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.601 [2024-07-24 18:22:19.506889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.601 qpair failed and we were unable to recover it. 00:27:26.601 [2024-07-24 18:22:19.507058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.601 [2024-07-24 18:22:19.507068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.601 qpair failed and we were unable to recover it. 
00:27:26.601 [2024-07-24 18:22:19.507242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.601 [2024-07-24 18:22:19.507252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.601 qpair failed and we were unable to recover it. 00:27:26.601 [2024-07-24 18:22:19.507399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.601 [2024-07-24 18:22:19.507409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.601 qpair failed and we were unable to recover it. 00:27:26.601 [2024-07-24 18:22:19.507638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.601 [2024-07-24 18:22:19.507649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.601 qpair failed and we were unable to recover it. 00:27:26.601 [2024-07-24 18:22:19.507806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.601 [2024-07-24 18:22:19.507816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.601 qpair failed and we were unable to recover it. 00:27:26.601 [2024-07-24 18:22:19.507958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.601 [2024-07-24 18:22:19.507967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.601 qpair failed and we were unable to recover it. 00:27:26.601 [2024-07-24 18:22:19.508064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.601 [2024-07-24 18:22:19.508073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.601 qpair failed and we were unable to recover it. 00:27:26.601 [2024-07-24 18:22:19.508281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.601 [2024-07-24 18:22:19.508291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.601 qpair failed and we were unable to recover it. 00:27:26.601 [2024-07-24 18:22:19.508445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.601 [2024-07-24 18:22:19.508455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.601 qpair failed and we were unable to recover it. 00:27:26.601 [2024-07-24 18:22:19.508711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.601 [2024-07-24 18:22:19.508721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.601 qpair failed and we were unable to recover it. 00:27:26.601 [2024-07-24 18:22:19.508953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.601 [2024-07-24 18:22:19.508963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.601 qpair failed and we were unable to recover it. 
00:27:26.604 [2024-07-24 18:22:19.535008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.604 [2024-07-24 18:22:19.535018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.604 qpair failed and we were unable to recover it.
00:27:26.604 [2024-07-24 18:22:19.535280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.604 [2024-07-24 18:22:19.535290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.604 qpair failed and we were unable to recover it.
00:27:26.604 [2024-07-24 18:22:19.535499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.604 [2024-07-24 18:22:19.535509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.604 qpair failed and we were unable to recover it.
00:27:26.604 [2024-07-24 18:22:19.535719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.604 [2024-07-24 18:22:19.535729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.604 qpair failed and we were unable to recover it.
00:27:26.604 [2024-07-24 18:22:19.535932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.604 [2024-07-24 18:22:19.535942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.604 qpair failed and we were unable to recover it.
00:27:26.604 [2024-07-24 18:22:19.536150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.604 [2024-07-24 18:22:19.536160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.604 qpair failed and we were unable to recover it.
00:27:26.604 [2024-07-24 18:22:19.536314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.604 [2024-07-24 18:22:19.536324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.604 qpair failed and we were unable to recover it.
00:27:26.604 [2024-07-24 18:22:19.536471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.604 [2024-07-24 18:22:19.536481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.604 qpair failed and we were unable to recover it.
00:27:26.604 [2024-07-24 18:22:19.536661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.604 [2024-07-24 18:22:19.536691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:26.604 qpair failed and we were unable to recover it.
00:27:26.604 [2024-07-24 18:22:19.536858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.604 [2024-07-24 18:22:19.536873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:26.604 qpair failed and we were unable to recover it.
00:27:26.605 [2024-07-24 18:22:19.545682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.605 [2024-07-24 18:22:19.545696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:26.605 qpair failed and we were unable to recover it.
00:27:26.605 [2024-07-24 18:22:19.545916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.605 [2024-07-24 18:22:19.545930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:26.605 qpair failed and we were unable to recover it.
00:27:26.606 [2024-07-24 18:22:19.546147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.606 [2024-07-24 18:22:19.546161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:26.606 qpair failed and we were unable to recover it.
00:27:26.606 [2024-07-24 18:22:19.546403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.606 [2024-07-24 18:22:19.546418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ddf30 with addr=10.0.0.2, port=4420
00:27:26.606 qpair failed and we were unable to recover it.
00:27:26.606 [2024-07-24 18:22:19.546552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.606 [2024-07-24 18:22:19.546591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7834000b90 with addr=10.0.0.2, port=4420
00:27:26.606 qpair failed and we were unable to recover it.
00:27:26.606 [2024-07-24 18:22:19.546807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.606 [2024-07-24 18:22:19.546844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420
00:27:26.606 qpair failed and we were unable to recover it.
00:27:26.606 [2024-07-24 18:22:19.547063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.606 [2024-07-24 18:22:19.547076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.606 qpair failed and we were unable to recover it.
00:27:26.606 [2024-07-24 18:22:19.547183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.606 [2024-07-24 18:22:19.547193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.606 qpair failed and we were unable to recover it.
00:27:26.606 [2024-07-24 18:22:19.547437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.606 [2024-07-24 18:22:19.547447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.606 qpair failed and we were unable to recover it.
00:27:26.606 [2024-07-24 18:22:19.547655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.606 [2024-07-24 18:22:19.547666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420
00:27:26.606 qpair failed and we were unable to recover it.
00:27:26.606 [2024-07-24 18:22:19.551962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.606 [2024-07-24 18:22:19.551971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.606 qpair failed and we were unable to recover it. 00:27:26.606 [2024-07-24 18:22:19.552138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.606 [2024-07-24 18:22:19.552148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.606 qpair failed and we were unable to recover it. 00:27:26.606 [2024-07-24 18:22:19.552254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.606 [2024-07-24 18:22:19.552264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.606 qpair failed and we were unable to recover it. 00:27:26.606 [2024-07-24 18:22:19.552498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.606 [2024-07-24 18:22:19.552508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.606 qpair failed and we were unable to recover it. 00:27:26.606 [2024-07-24 18:22:19.552605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.606 [2024-07-24 18:22:19.552615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.606 qpair failed and we were unable to recover it. 00:27:26.606 [2024-07-24 18:22:19.552762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.606 [2024-07-24 18:22:19.552772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.606 qpair failed and we were unable to recover it. 00:27:26.606 [2024-07-24 18:22:19.552941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.606 [2024-07-24 18:22:19.552950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.606 qpair failed and we were unable to recover it. 00:27:26.606 [2024-07-24 18:22:19.553107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.606 [2024-07-24 18:22:19.553117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.606 qpair failed and we were unable to recover it. 00:27:26.606 [2024-07-24 18:22:19.553380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.606 [2024-07-24 18:22:19.553390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.606 qpair failed and we were unable to recover it. 00:27:26.606 [2024-07-24 18:22:19.553627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.606 [2024-07-24 18:22:19.553637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.606 qpair failed and we were unable to recover it. 
00:27:26.607 [2024-07-24 18:22:19.553811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.607 [2024-07-24 18:22:19.553821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.607 qpair failed and we were unable to recover it. 00:27:26.607 [2024-07-24 18:22:19.554035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.607 [2024-07-24 18:22:19.554045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.607 qpair failed and we were unable to recover it. 00:27:26.607 [2024-07-24 18:22:19.554262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.607 [2024-07-24 18:22:19.554272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.607 qpair failed and we were unable to recover it. 00:27:26.607 [2024-07-24 18:22:19.554430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.607 [2024-07-24 18:22:19.554440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.607 qpair failed and we were unable to recover it. 00:27:26.607 [2024-07-24 18:22:19.554595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.607 [2024-07-24 18:22:19.554605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.607 qpair failed and we were unable to recover it. 00:27:26.607 [2024-07-24 18:22:19.554809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.607 [2024-07-24 18:22:19.554819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.607 qpair failed and we were unable to recover it. 00:27:26.607 [2024-07-24 18:22:19.555073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.607 [2024-07-24 18:22:19.555083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.607 qpair failed and we were unable to recover it. 00:27:26.607 [2024-07-24 18:22:19.555352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.607 [2024-07-24 18:22:19.555362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.607 qpair failed and we were unable to recover it. 00:27:26.607 [2024-07-24 18:22:19.555466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.607 [2024-07-24 18:22:19.555476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.607 qpair failed and we were unable to recover it. 00:27:26.607 [2024-07-24 18:22:19.555709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.607 [2024-07-24 18:22:19.555720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.607 qpair failed and we were unable to recover it. 
00:27:26.607 [2024-07-24 18:22:19.555858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.607 [2024-07-24 18:22:19.555868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.607 qpair failed and we were unable to recover it. 00:27:26.607 [2024-07-24 18:22:19.556118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.607 [2024-07-24 18:22:19.556128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.607 qpair failed and we were unable to recover it. 00:27:26.607 [2024-07-24 18:22:19.556371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.607 [2024-07-24 18:22:19.556381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.607 qpair failed and we were unable to recover it. 00:27:26.607 [2024-07-24 18:22:19.556614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.607 [2024-07-24 18:22:19.556624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.607 qpair failed and we were unable to recover it. 00:27:26.607 [2024-07-24 18:22:19.556886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.607 [2024-07-24 18:22:19.556896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.607 qpair failed and we were unable to recover it. 00:27:26.607 [2024-07-24 18:22:19.557047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.607 [2024-07-24 18:22:19.557057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.607 qpair failed and we were unable to recover it. 00:27:26.607 [2024-07-24 18:22:19.557264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.607 [2024-07-24 18:22:19.557274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.607 qpair failed and we were unable to recover it. 00:27:26.607 [2024-07-24 18:22:19.557425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.607 [2024-07-24 18:22:19.557436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.607 qpair failed and we were unable to recover it. 00:27:26.607 [2024-07-24 18:22:19.557597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.607 [2024-07-24 18:22:19.557607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.607 qpair failed and we were unable to recover it. 00:27:26.607 [2024-07-24 18:22:19.557837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.607 [2024-07-24 18:22:19.557846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.607 qpair failed and we were unable to recover it. 
00:27:26.607 [2024-07-24 18:22:19.558087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.607 [2024-07-24 18:22:19.558097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.607 qpair failed and we were unable to recover it. 00:27:26.607 [2024-07-24 18:22:19.558345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.607 [2024-07-24 18:22:19.558354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.607 qpair failed and we were unable to recover it. 00:27:26.607 [2024-07-24 18:22:19.558580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.607 [2024-07-24 18:22:19.558590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.607 qpair failed and we were unable to recover it. 00:27:26.607 [2024-07-24 18:22:19.558863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.607 [2024-07-24 18:22:19.558873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.607 qpair failed and we were unable to recover it. 00:27:26.607 [2024-07-24 18:22:19.559121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.607 [2024-07-24 18:22:19.559132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.607 qpair failed and we were unable to recover it. 00:27:26.607 [2024-07-24 18:22:19.559229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.607 [2024-07-24 18:22:19.559239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.607 qpair failed and we were unable to recover it. 00:27:26.607 [2024-07-24 18:22:19.559337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.607 [2024-07-24 18:22:19.559347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.607 qpair failed and we were unable to recover it. 00:27:26.607 [2024-07-24 18:22:19.559499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.607 [2024-07-24 18:22:19.559509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.607 qpair failed and we were unable to recover it. 00:27:26.607 [2024-07-24 18:22:19.559601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.607 [2024-07-24 18:22:19.559612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.607 qpair failed and we were unable to recover it. 00:27:26.607 [2024-07-24 18:22:19.559866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.607 [2024-07-24 18:22:19.559876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.607 qpair failed and we were unable to recover it. 
00:27:26.607 [2024-07-24 18:22:19.560039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.607 [2024-07-24 18:22:19.560049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.607 qpair failed and we were unable to recover it. 00:27:26.607 [2024-07-24 18:22:19.560257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.607 [2024-07-24 18:22:19.560267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.607 qpair failed and we were unable to recover it. 00:27:26.607 [2024-07-24 18:22:19.560520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.607 [2024-07-24 18:22:19.560530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.607 qpair failed and we were unable to recover it. 00:27:26.607 [2024-07-24 18:22:19.560624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.607 [2024-07-24 18:22:19.560634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.607 qpair failed and we were unable to recover it. 00:27:26.607 [2024-07-24 18:22:19.560881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.607 [2024-07-24 18:22:19.560891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.607 qpair failed and we were unable to recover it. 00:27:26.607 [2024-07-24 18:22:19.561039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.607 [2024-07-24 18:22:19.561048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.607 qpair failed and we were unable to recover it. 00:27:26.607 [2024-07-24 18:22:19.561251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.607 [2024-07-24 18:22:19.561261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.607 qpair failed and we were unable to recover it. 00:27:26.607 [2024-07-24 18:22:19.561431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.607 [2024-07-24 18:22:19.561441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.608 qpair failed and we were unable to recover it. 00:27:26.608 [2024-07-24 18:22:19.561621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.608 [2024-07-24 18:22:19.561631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.608 qpair failed and we were unable to recover it. 00:27:26.608 [2024-07-24 18:22:19.561783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.608 [2024-07-24 18:22:19.561793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.608 qpair failed and we were unable to recover it. 
00:27:26.608 [2024-07-24 18:22:19.561970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.608 [2024-07-24 18:22:19.561979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.608 qpair failed and we were unable to recover it. 00:27:26.608 [2024-07-24 18:22:19.562129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.608 [2024-07-24 18:22:19.562138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.608 qpair failed and we were unable to recover it. 00:27:26.608 [2024-07-24 18:22:19.562365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.608 [2024-07-24 18:22:19.562375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.608 qpair failed and we were unable to recover it. 00:27:26.608 [2024-07-24 18:22:19.562605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.608 [2024-07-24 18:22:19.562615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.608 qpair failed and we were unable to recover it. 00:27:26.608 [2024-07-24 18:22:19.562780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.608 [2024-07-24 18:22:19.562790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.608 qpair failed and we were unable to recover it. 00:27:26.608 [2024-07-24 18:22:19.562947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.608 [2024-07-24 18:22:19.562956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.608 qpair failed and we were unable to recover it. 00:27:26.608 [2024-07-24 18:22:19.563174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.608 [2024-07-24 18:22:19.563184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.608 qpair failed and we were unable to recover it. 00:27:26.608 [2024-07-24 18:22:19.563439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.608 [2024-07-24 18:22:19.563449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.608 qpair failed and we were unable to recover it. 00:27:26.608 [2024-07-24 18:22:19.563618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.608 [2024-07-24 18:22:19.563628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.608 qpair failed and we were unable to recover it. 00:27:26.608 [2024-07-24 18:22:19.563781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.608 [2024-07-24 18:22:19.563791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.608 qpair failed and we were unable to recover it. 
00:27:26.608 [2024-07-24 18:22:19.563958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.608 [2024-07-24 18:22:19.563967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.608 qpair failed and we were unable to recover it. 00:27:26.608 [2024-07-24 18:22:19.564171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.608 [2024-07-24 18:22:19.564181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.608 qpair failed and we were unable to recover it. 00:27:26.608 [2024-07-24 18:22:19.564409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.608 [2024-07-24 18:22:19.564419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.608 qpair failed and we were unable to recover it. 00:27:26.608 [2024-07-24 18:22:19.564520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.608 [2024-07-24 18:22:19.564530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.608 qpair failed and we were unable to recover it. 00:27:26.608 [2024-07-24 18:22:19.564666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.608 [2024-07-24 18:22:19.564676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.608 qpair failed and we were unable to recover it. 00:27:26.608 [2024-07-24 18:22:19.564829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.608 [2024-07-24 18:22:19.564838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.608 qpair failed and we were unable to recover it. 00:27:26.608 [2024-07-24 18:22:19.565013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.608 [2024-07-24 18:22:19.565023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.608 qpair failed and we were unable to recover it. 00:27:26.608 [2024-07-24 18:22:19.565118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.608 [2024-07-24 18:22:19.565129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.608 qpair failed and we were unable to recover it. 00:27:26.608 [2024-07-24 18:22:19.565359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.608 [2024-07-24 18:22:19.565369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.608 qpair failed and we were unable to recover it. 00:27:26.608 [2024-07-24 18:22:19.565574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.608 [2024-07-24 18:22:19.565584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.608 qpair failed and we were unable to recover it. 
00:27:26.608 [2024-07-24 18:22:19.565755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.608 [2024-07-24 18:22:19.565765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.608 qpair failed and we were unable to recover it. 00:27:26.608 [2024-07-24 18:22:19.565921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.608 [2024-07-24 18:22:19.565930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.608 qpair failed and we were unable to recover it. 00:27:26.608 [2024-07-24 18:22:19.566159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.608 [2024-07-24 18:22:19.566168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.608 qpair failed and we were unable to recover it. 00:27:26.608 [2024-07-24 18:22:19.566395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.608 [2024-07-24 18:22:19.566405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.608 qpair failed and we were unable to recover it. 00:27:26.608 [2024-07-24 18:22:19.566666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.608 [2024-07-24 18:22:19.566676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.608 qpair failed and we were unable to recover it. 00:27:26.608 [2024-07-24 18:22:19.566880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.608 [2024-07-24 18:22:19.566890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.608 qpair failed and we were unable to recover it. 00:27:26.608 [2024-07-24 18:22:19.567110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.608 [2024-07-24 18:22:19.567120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.608 qpair failed and we were unable to recover it. 00:27:26.608 [2024-07-24 18:22:19.567324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.608 [2024-07-24 18:22:19.567334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.608 qpair failed and we were unable to recover it. 00:27:26.608 [2024-07-24 18:22:19.567544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.608 [2024-07-24 18:22:19.567555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.608 qpair failed and we were unable to recover it. 00:27:26.608 [2024-07-24 18:22:19.567709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.608 [2024-07-24 18:22:19.567719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.609 qpair failed and we were unable to recover it. 
00:27:26.609 [2024-07-24 18:22:19.567901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.609 [2024-07-24 18:22:19.567911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.609 qpair failed and we were unable to recover it. 00:27:26.609 [2024-07-24 18:22:19.568104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.609 [2024-07-24 18:22:19.568114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.609 qpair failed and we were unable to recover it. 00:27:26.609 [2024-07-24 18:22:19.568276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.609 [2024-07-24 18:22:19.568285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.609 qpair failed and we were unable to recover it. 00:27:26.609 [2024-07-24 18:22:19.568488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.609 [2024-07-24 18:22:19.568501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.609 qpair failed and we were unable to recover it. 00:27:26.609 [2024-07-24 18:22:19.568674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.609 [2024-07-24 18:22:19.568684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.609 qpair failed and we were unable to recover it. 00:27:26.609 [2024-07-24 18:22:19.568840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.609 [2024-07-24 18:22:19.568850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.609 qpair failed and we were unable to recover it. 00:27:26.609 [2024-07-24 18:22:19.569024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.609 [2024-07-24 18:22:19.569033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.609 qpair failed and we were unable to recover it. 00:27:26.609 [2024-07-24 18:22:19.569310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.609 [2024-07-24 18:22:19.569320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.609 qpair failed and we were unable to recover it. 00:27:26.609 [2024-07-24 18:22:19.569508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.609 [2024-07-24 18:22:19.569519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.609 qpair failed and we were unable to recover it. 00:27:26.609 [2024-07-24 18:22:19.569662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.609 [2024-07-24 18:22:19.569672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.609 qpair failed and we were unable to recover it. 
00:27:26.609 [2024-07-24 18:22:19.569929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.609 [2024-07-24 18:22:19.569938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.609 qpair failed and we were unable to recover it. 00:27:26.609 [2024-07-24 18:22:19.570140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.609 [2024-07-24 18:22:19.570150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.609 qpair failed and we were unable to recover it. 00:27:26.609 [2024-07-24 18:22:19.570402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.609 [2024-07-24 18:22:19.570412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.609 qpair failed and we were unable to recover it. 00:27:26.609 [2024-07-24 18:22:19.570552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.609 [2024-07-24 18:22:19.570562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.609 qpair failed and we were unable to recover it. 00:27:26.609 [2024-07-24 18:22:19.570666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.609 [2024-07-24 18:22:19.570676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.609 qpair failed and we were unable to recover it. 00:27:26.609 [2024-07-24 18:22:19.570894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.609 [2024-07-24 18:22:19.570903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.609 qpair failed and we were unable to recover it. 00:27:26.609 [2024-07-24 18:22:19.571124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.609 [2024-07-24 18:22:19.571134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.609 qpair failed and we were unable to recover it. 00:27:26.609 [2024-07-24 18:22:19.571285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.609 [2024-07-24 18:22:19.571295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.609 qpair failed and we were unable to recover it. 00:27:26.609 [2024-07-24 18:22:19.571502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.609 [2024-07-24 18:22:19.571512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.609 qpair failed and we were unable to recover it. 00:27:26.609 [2024-07-24 18:22:19.571667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.609 [2024-07-24 18:22:19.571677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.609 qpair failed and we were unable to recover it. 
00:27:26.609 [2024-07-24 18:22:19.571906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.609 [2024-07-24 18:22:19.571916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.609 qpair failed and we were unable to recover it. 00:27:26.609 [2024-07-24 18:22:19.572141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.609 [2024-07-24 18:22:19.572151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.609 qpair failed and we were unable to recover it. 00:27:26.609 [2024-07-24 18:22:19.572300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.609 [2024-07-24 18:22:19.572310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.609 qpair failed and we were unable to recover it. 00:27:26.609 [2024-07-24 18:22:19.572558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.609 [2024-07-24 18:22:19.572568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.609 qpair failed and we were unable to recover it. 00:27:26.609 [2024-07-24 18:22:19.572741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.609 [2024-07-24 18:22:19.572751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.609 qpair failed and we were unable to recover it. 00:27:26.609 [2024-07-24 18:22:19.573001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.609 [2024-07-24 18:22:19.573011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.609 qpair failed and we were unable to recover it. 00:27:26.609 [2024-07-24 18:22:19.573107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.609 [2024-07-24 18:22:19.573117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.609 qpair failed and we were unable to recover it. 00:27:26.609 [2024-07-24 18:22:19.573369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.609 [2024-07-24 18:22:19.573380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.609 qpair failed and we were unable to recover it. 00:27:26.609 [2024-07-24 18:22:19.573576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.609 [2024-07-24 18:22:19.573586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.609 qpair failed and we were unable to recover it. 00:27:26.609 [2024-07-24 18:22:19.573742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.609 [2024-07-24 18:22:19.573752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.609 qpair failed and we were unable to recover it. 
00:27:26.609 [2024-07-24 18:22:19.573983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.609 [2024-07-24 18:22:19.573993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.609 qpair failed and we were unable to recover it. 00:27:26.609 [2024-07-24 18:22:19.574158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.609 [2024-07-24 18:22:19.574167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.609 qpair failed and we were unable to recover it. 00:27:26.609 [2024-07-24 18:22:19.574378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.609 [2024-07-24 18:22:19.574387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.609 qpair failed and we were unable to recover it. 00:27:26.609 [2024-07-24 18:22:19.574552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.609 [2024-07-24 18:22:19.574563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.609 qpair failed and we were unable to recover it. 00:27:26.609 [2024-07-24 18:22:19.574659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.609 [2024-07-24 18:22:19.574670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.609 qpair failed and we were unable to recover it. 00:27:26.609 [2024-07-24 18:22:19.574829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.609 [2024-07-24 18:22:19.574839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.609 qpair failed and we were unable to recover it. 00:27:26.609 [2024-07-24 18:22:19.575042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.609 [2024-07-24 18:22:19.575051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.609 qpair failed and we were unable to recover it. 00:27:26.609 [2024-07-24 18:22:19.575208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.610 [2024-07-24 18:22:19.575218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.610 qpair failed and we were unable to recover it. 00:27:26.610 [2024-07-24 18:22:19.575353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.610 [2024-07-24 18:22:19.575363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.610 qpair failed and we were unable to recover it. 00:27:26.610 [2024-07-24 18:22:19.575512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.610 [2024-07-24 18:22:19.575522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.610 qpair failed and we were unable to recover it. 
00:27:26.610 [2024-07-24 18:22:19.575678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.610 [2024-07-24 18:22:19.575688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.610 qpair failed and we were unable to recover it. 00:27:26.610 [2024-07-24 18:22:19.575846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.610 [2024-07-24 18:22:19.575856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.610 qpair failed and we were unable to recover it. 00:27:26.610 [2024-07-24 18:22:19.576012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.610 [2024-07-24 18:22:19.576021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.610 qpair failed and we were unable to recover it. 00:27:26.610 [2024-07-24 18:22:19.576252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.610 [2024-07-24 18:22:19.576262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.610 qpair failed and we were unable to recover it. 00:27:26.610 [2024-07-24 18:22:19.576350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.610 [2024-07-24 18:22:19.576360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.610 qpair failed and we were unable to recover it. 00:27:26.610 [2024-07-24 18:22:19.576540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.610 [2024-07-24 18:22:19.576550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.610 qpair failed and we were unable to recover it. 00:27:26.610 [2024-07-24 18:22:19.576702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.610 [2024-07-24 18:22:19.576712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.610 qpair failed and we were unable to recover it. 00:27:26.610 [2024-07-24 18:22:19.576965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.610 [2024-07-24 18:22:19.576975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.610 qpair failed and we were unable to recover it. 00:27:26.610 [2024-07-24 18:22:19.577125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.610 [2024-07-24 18:22:19.577134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.610 qpair failed and we were unable to recover it. 00:27:26.610 [2024-07-24 18:22:19.577355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.610 [2024-07-24 18:22:19.577364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.610 qpair failed and we were unable to recover it. 
00:27:26.610 [2024-07-24 18:22:19.577593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.610 [2024-07-24 18:22:19.577604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.610 qpair failed and we were unable to recover it. 00:27:26.610 [2024-07-24 18:22:19.577813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.610 [2024-07-24 18:22:19.577823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.610 qpair failed and we were unable to recover it. 00:27:26.610 [2024-07-24 18:22:19.577998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.610 [2024-07-24 18:22:19.578008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.610 qpair failed and we were unable to recover it. 00:27:26.610 [2024-07-24 18:22:19.578229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.610 [2024-07-24 18:22:19.578239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.610 qpair failed and we were unable to recover it. 00:27:26.610 [2024-07-24 18:22:19.578437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.610 [2024-07-24 18:22:19.578447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.610 qpair failed and we were unable to recover it. 00:27:26.610 [2024-07-24 18:22:19.578678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.610 [2024-07-24 18:22:19.578688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.610 qpair failed and we were unable to recover it. 00:27:26.610 [2024-07-24 18:22:19.578981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.610 [2024-07-24 18:22:19.578991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.610 qpair failed and we were unable to recover it. 00:27:26.610 [2024-07-24 18:22:19.579246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.610 [2024-07-24 18:22:19.579255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.610 qpair failed and we were unable to recover it. 00:27:26.610 [2024-07-24 18:22:19.579411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.610 [2024-07-24 18:22:19.579421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.610 qpair failed and we were unable to recover it. 00:27:26.610 [2024-07-24 18:22:19.579571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.610 [2024-07-24 18:22:19.579581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.610 qpair failed and we were unable to recover it. 
00:27:26.610 [2024-07-24 18:22:19.579680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.610 [2024-07-24 18:22:19.579690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.610 qpair failed and we were unable to recover it.
[... this posix_sock_create connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock sock connection error / "qpair failed and we were unable to recover it." triplet repeats for every reconnect attempt against tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420, from 18:22:19.579921 through 18:22:19.617481 (log timestamps 00:27:26.610-00:27:26.616) ...]
00:27:26.616 [2024-07-24 18:22:19.617591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.616 [2024-07-24 18:22:19.617614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:26.616 qpair failed and we were unable to recover it. 00:27:26.616 [2024-07-24 18:22:19.617787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.616 [2024-07-24 18:22:19.617803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:26.616 qpair failed and we were unable to recover it. 00:27:26.616 [2024-07-24 18:22:19.617897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.616 [2024-07-24 18:22:19.617912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:26.616 qpair failed and we were unable to recover it. 00:27:26.616 [2024-07-24 18:22:19.618177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.616 [2024-07-24 18:22:19.618192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:26.616 qpair failed and we were unable to recover it. 00:27:26.616 [2024-07-24 18:22:19.618345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.616 [2024-07-24 18:22:19.618361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:26.616 qpair failed and we were unable to recover it. 00:27:26.616 [2024-07-24 18:22:19.618603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.616 [2024-07-24 18:22:19.618619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:26.616 qpair failed and we were unable to recover it. 00:27:26.616 [2024-07-24 18:22:19.618860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.616 [2024-07-24 18:22:19.618875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:26.616 qpair failed and we were unable to recover it. 00:27:26.616 [2024-07-24 18:22:19.619123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.616 [2024-07-24 18:22:19.619139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:26.616 qpair failed and we were unable to recover it. 00:27:26.616 [2024-07-24 18:22:19.619376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.616 [2024-07-24 18:22:19.619389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.616 qpair failed and we were unable to recover it. 00:27:26.616 [2024-07-24 18:22:19.619528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.616 [2024-07-24 18:22:19.619539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.616 qpair failed and we were unable to recover it. 
00:27:26.616 [2024-07-24 18:22:19.619693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.616 [2024-07-24 18:22:19.619703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.616 qpair failed and we were unable to recover it. 00:27:26.616 [2024-07-24 18:22:19.619908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.616 [2024-07-24 18:22:19.619918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.616 qpair failed and we were unable to recover it. 00:27:26.616 [2024-07-24 18:22:19.620162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.616 [2024-07-24 18:22:19.620172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.616 qpair failed and we were unable to recover it. 00:27:26.616 [2024-07-24 18:22:19.620333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.616 [2024-07-24 18:22:19.620343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.616 qpair failed and we were unable to recover it. 00:27:26.616 [2024-07-24 18:22:19.620561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.616 [2024-07-24 18:22:19.620571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.616 qpair failed and we were unable to recover it. 00:27:26.616 [2024-07-24 18:22:19.620713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.616 [2024-07-24 18:22:19.620723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.616 qpair failed and we were unable to recover it. 00:27:26.616 [2024-07-24 18:22:19.620808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.616 [2024-07-24 18:22:19.620819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.616 qpair failed and we were unable to recover it. 00:27:26.616 [2024-07-24 18:22:19.621070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.616 [2024-07-24 18:22:19.621081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.616 qpair failed and we were unable to recover it. 00:27:26.616 [2024-07-24 18:22:19.621239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.616 [2024-07-24 18:22:19.621249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.616 qpair failed and we were unable to recover it. 00:27:26.616 [2024-07-24 18:22:19.621351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.616 [2024-07-24 18:22:19.621361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.616 qpair failed and we were unable to recover it. 
00:27:26.616 [2024-07-24 18:22:19.621501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.616 [2024-07-24 18:22:19.621512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.616 qpair failed and we were unable to recover it. 00:27:26.616 [2024-07-24 18:22:19.621667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.616 [2024-07-24 18:22:19.621677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.616 qpair failed and we were unable to recover it. 00:27:26.616 [2024-07-24 18:22:19.621764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.616 [2024-07-24 18:22:19.621774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.616 qpair failed and we were unable to recover it. 00:27:26.616 [2024-07-24 18:22:19.621932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.616 [2024-07-24 18:22:19.621942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.616 qpair failed and we were unable to recover it. 00:27:26.616 [2024-07-24 18:22:19.622047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.616 [2024-07-24 18:22:19.622058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.617 qpair failed and we were unable to recover it. 00:27:26.617 [2024-07-24 18:22:19.622154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.617 [2024-07-24 18:22:19.622164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.617 qpair failed and we were unable to recover it. 00:27:26.617 [2024-07-24 18:22:19.622265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.617 [2024-07-24 18:22:19.622275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.617 qpair failed and we were unable to recover it. 00:27:26.617 [2024-07-24 18:22:19.622418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.617 [2024-07-24 18:22:19.622428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.617 qpair failed and we were unable to recover it. 00:27:26.617 [2024-07-24 18:22:19.622659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.617 [2024-07-24 18:22:19.622670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.617 qpair failed and we were unable to recover it. 00:27:26.617 [2024-07-24 18:22:19.622756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.617 [2024-07-24 18:22:19.622767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.617 qpair failed and we were unable to recover it. 
00:27:26.617 [2024-07-24 18:22:19.622944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.617 [2024-07-24 18:22:19.622954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.617 qpair failed and we were unable to recover it. 00:27:26.617 [2024-07-24 18:22:19.623055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.617 [2024-07-24 18:22:19.623065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.617 qpair failed and we were unable to recover it. 00:27:26.617 [2024-07-24 18:22:19.623212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.617 [2024-07-24 18:22:19.623222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.617 qpair failed and we were unable to recover it. 00:27:26.880 [2024-07-24 18:22:19.623319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.880 [2024-07-24 18:22:19.623330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.880 qpair failed and we were unable to recover it. 00:27:26.880 [2024-07-24 18:22:19.623420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.880 [2024-07-24 18:22:19.623433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.880 qpair failed and we were unable to recover it. 00:27:26.880 [2024-07-24 18:22:19.623588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.880 [2024-07-24 18:22:19.623600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.880 qpair failed and we were unable to recover it. 00:27:26.880 [2024-07-24 18:22:19.623676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.880 [2024-07-24 18:22:19.623687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.880 qpair failed and we were unable to recover it. 00:27:26.880 [2024-07-24 18:22:19.623772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.880 [2024-07-24 18:22:19.623783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.880 qpair failed and we were unable to recover it. 00:27:26.880 [2024-07-24 18:22:19.623890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.880 [2024-07-24 18:22:19.623900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.880 qpair failed and we were unable to recover it. 00:27:26.880 [2024-07-24 18:22:19.624107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.880 [2024-07-24 18:22:19.624118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.880 qpair failed and we were unable to recover it. 
00:27:26.880 [2024-07-24 18:22:19.624272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.880 [2024-07-24 18:22:19.624282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.880 qpair failed and we were unable to recover it. 00:27:26.880 [2024-07-24 18:22:19.624355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.880 [2024-07-24 18:22:19.624365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.880 qpair failed and we were unable to recover it. 00:27:26.880 [2024-07-24 18:22:19.624532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.880 [2024-07-24 18:22:19.624542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.880 qpair failed and we were unable to recover it. 00:27:26.880 [2024-07-24 18:22:19.624638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.880 [2024-07-24 18:22:19.624648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.880 qpair failed and we were unable to recover it. 00:27:26.880 [2024-07-24 18:22:19.624787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.880 [2024-07-24 18:22:19.624798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.880 qpair failed and we were unable to recover it. 00:27:26.880 [2024-07-24 18:22:19.625019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.880 [2024-07-24 18:22:19.625029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.880 qpair failed and we were unable to recover it. 00:27:26.880 [2024-07-24 18:22:19.625196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.880 [2024-07-24 18:22:19.625207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.880 qpair failed and we were unable to recover it. 00:27:26.880 [2024-07-24 18:22:19.625280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.880 [2024-07-24 18:22:19.625290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.880 qpair failed and we were unable to recover it. 00:27:26.880 [2024-07-24 18:22:19.625389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.880 [2024-07-24 18:22:19.625400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.880 qpair failed and we were unable to recover it. 00:27:26.880 [2024-07-24 18:22:19.625496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.880 [2024-07-24 18:22:19.625507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.880 qpair failed and we were unable to recover it. 
00:27:26.880 [2024-07-24 18:22:19.625667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.880 [2024-07-24 18:22:19.625677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.880 qpair failed and we were unable to recover it. 00:27:26.880 [2024-07-24 18:22:19.625753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.880 [2024-07-24 18:22:19.625763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.880 qpair failed and we were unable to recover it. 00:27:26.880 [2024-07-24 18:22:19.625863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.880 [2024-07-24 18:22:19.625873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.880 qpair failed and we were unable to recover it. 00:27:26.880 [2024-07-24 18:22:19.625951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.880 [2024-07-24 18:22:19.625961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.880 qpair failed and we were unable to recover it. 00:27:26.881 [2024-07-24 18:22:19.626167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.881 [2024-07-24 18:22:19.626177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.881 qpair failed and we were unable to recover it. 00:27:26.881 [2024-07-24 18:22:19.626353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.881 [2024-07-24 18:22:19.626363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.881 qpair failed and we were unable to recover it. 00:27:26.881 [2024-07-24 18:22:19.626460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.881 [2024-07-24 18:22:19.626471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.881 qpair failed and we were unable to recover it. 00:27:26.881 [2024-07-24 18:22:19.626667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.881 [2024-07-24 18:22:19.626678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.881 qpair failed and we were unable to recover it. 00:27:26.881 [2024-07-24 18:22:19.626821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.881 [2024-07-24 18:22:19.626831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.881 qpair failed and we were unable to recover it. 00:27:26.881 [2024-07-24 18:22:19.626924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.881 [2024-07-24 18:22:19.626934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.881 qpair failed and we were unable to recover it. 
00:27:26.881 [2024-07-24 18:22:19.627031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.881 [2024-07-24 18:22:19.627041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.881 qpair failed and we were unable to recover it. 00:27:26.881 [2024-07-24 18:22:19.627182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.881 [2024-07-24 18:22:19.627193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.881 qpair failed and we were unable to recover it. 00:27:26.881 [2024-07-24 18:22:19.627356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.881 [2024-07-24 18:22:19.627367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.881 qpair failed and we were unable to recover it. 00:27:26.881 [2024-07-24 18:22:19.627445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.881 [2024-07-24 18:22:19.627456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.881 qpair failed and we were unable to recover it. 00:27:26.881 [2024-07-24 18:22:19.627559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.881 [2024-07-24 18:22:19.627570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.881 qpair failed and we were unable to recover it. 00:27:26.881 [2024-07-24 18:22:19.627723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.881 [2024-07-24 18:22:19.627734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.881 qpair failed and we were unable to recover it. 00:27:26.881 [2024-07-24 18:22:19.627829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.881 [2024-07-24 18:22:19.627839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.881 qpair failed and we were unable to recover it. 00:27:26.881 [2024-07-24 18:22:19.627918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.881 [2024-07-24 18:22:19.627928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.881 qpair failed and we were unable to recover it. 00:27:26.881 [2024-07-24 18:22:19.628024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.881 [2024-07-24 18:22:19.628034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.881 qpair failed and we were unable to recover it. 00:27:26.881 [2024-07-24 18:22:19.628162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.881 [2024-07-24 18:22:19.628173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.881 qpair failed and we were unable to recover it. 
00:27:26.881 [2024-07-24 18:22:19.628260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.881 [2024-07-24 18:22:19.628271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.881 qpair failed and we were unable to recover it. 00:27:26.881 [2024-07-24 18:22:19.628437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.881 [2024-07-24 18:22:19.628447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.881 qpair failed and we were unable to recover it. 00:27:26.881 [2024-07-24 18:22:19.628610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.881 [2024-07-24 18:22:19.628621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.881 qpair failed and we were unable to recover it. 00:27:26.881 [2024-07-24 18:22:19.628717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.881 [2024-07-24 18:22:19.628728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.881 qpair failed and we were unable to recover it. 00:27:26.881 [2024-07-24 18:22:19.628956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.881 [2024-07-24 18:22:19.628969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.881 qpair failed and we were unable to recover it. 00:27:26.881 [2024-07-24 18:22:19.629065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.881 [2024-07-24 18:22:19.629075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.881 qpair failed and we were unable to recover it. 00:27:26.881 [2024-07-24 18:22:19.629228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.881 [2024-07-24 18:22:19.629238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.881 qpair failed and we were unable to recover it. 00:27:26.881 [2024-07-24 18:22:19.629388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.881 [2024-07-24 18:22:19.629398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.881 qpair failed and we were unable to recover it. 00:27:26.881 [2024-07-24 18:22:19.629513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.881 [2024-07-24 18:22:19.629524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.881 qpair failed and we were unable to recover it. 00:27:26.881 [2024-07-24 18:22:19.629680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.881 [2024-07-24 18:22:19.629691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.881 qpair failed and we were unable to recover it. 
00:27:26.881 [2024-07-24 18:22:19.629864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.881 [2024-07-24 18:22:19.629874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.881 qpair failed and we were unable to recover it. 00:27:26.881 [2024-07-24 18:22:19.629952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.881 [2024-07-24 18:22:19.629963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.881 qpair failed and we were unable to recover it. 00:27:26.881 [2024-07-24 18:22:19.630110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.881 [2024-07-24 18:22:19.630119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.881 qpair failed and we were unable to recover it. 00:27:26.881 [2024-07-24 18:22:19.630271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.881 [2024-07-24 18:22:19.630281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.881 qpair failed and we were unable to recover it. 00:27:26.881 [2024-07-24 18:22:19.630437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.881 [2024-07-24 18:22:19.630448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.881 qpair failed and we were unable to recover it. 00:27:26.881 [2024-07-24 18:22:19.630583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.881 [2024-07-24 18:22:19.630594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.881 qpair failed and we were unable to recover it. 00:27:26.881 [2024-07-24 18:22:19.630730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.881 [2024-07-24 18:22:19.630740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.881 qpair failed and we were unable to recover it. 00:27:26.881 [2024-07-24 18:22:19.630846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.881 [2024-07-24 18:22:19.630855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.881 qpair failed and we were unable to recover it. 00:27:26.881 [2024-07-24 18:22:19.630943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.881 [2024-07-24 18:22:19.630953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.881 qpair failed and we were unable to recover it. 00:27:26.881 [2024-07-24 18:22:19.631101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.881 [2024-07-24 18:22:19.631112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.881 qpair failed and we were unable to recover it. 
00:27:26.881 [2024-07-24 18:22:19.631266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.881 [2024-07-24 18:22:19.631276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.881 qpair failed and we were unable to recover it. 00:27:26.881 [2024-07-24 18:22:19.631358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.882 [2024-07-24 18:22:19.631368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.882 qpair failed and we were unable to recover it. 00:27:26.882 [2024-07-24 18:22:19.631459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.882 [2024-07-24 18:22:19.631469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.882 qpair failed and we were unable to recover it. 00:27:26.882 [2024-07-24 18:22:19.631630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.882 [2024-07-24 18:22:19.631640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.882 qpair failed and we were unable to recover it. 00:27:26.882 [2024-07-24 18:22:19.631801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.882 [2024-07-24 18:22:19.631812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.882 qpair failed and we were unable to recover it. 00:27:26.882 [2024-07-24 18:22:19.631982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.882 [2024-07-24 18:22:19.631991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.882 qpair failed and we were unable to recover it. 00:27:26.882 [2024-07-24 18:22:19.632146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.882 [2024-07-24 18:22:19.632156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.882 qpair failed and we were unable to recover it. 00:27:26.882 [2024-07-24 18:22:19.632310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.882 [2024-07-24 18:22:19.632320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.882 qpair failed and we were unable to recover it. 00:27:26.882 [2024-07-24 18:22:19.632415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.882 [2024-07-24 18:22:19.632425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.882 qpair failed and we were unable to recover it. 00:27:26.882 [2024-07-24 18:22:19.632512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.882 [2024-07-24 18:22:19.632522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.882 qpair failed and we were unable to recover it. 
00:27:26.882 [2024-07-24 18:22:19.632593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.882 [2024-07-24 18:22:19.632603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.882 qpair failed and we were unable to recover it. 00:27:26.882 [2024-07-24 18:22:19.632704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.882 [2024-07-24 18:22:19.632714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.882 qpair failed and we were unable to recover it. 00:27:26.882 [2024-07-24 18:22:19.632814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.882 [2024-07-24 18:22:19.632824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.882 qpair failed and we were unable to recover it. 00:27:26.882 [2024-07-24 18:22:19.633002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.882 [2024-07-24 18:22:19.633025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7834000b90 with addr=10.0.0.2, port=4420 00:27:26.882 qpair failed and we were unable to recover it. 00:27:26.882 [2024-07-24 18:22:19.633114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.882 [2024-07-24 18:22:19.633131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:26.882 qpair failed and we were unable to recover it. 00:27:26.882 [2024-07-24 18:22:19.633232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.882 [2024-07-24 18:22:19.633247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:26.882 qpair failed and we were unable to recover it. 00:27:26.882 [2024-07-24 18:22:19.633343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.882 [2024-07-24 18:22:19.633359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f7844000b90 with addr=10.0.0.2, port=4420 00:27:26.882 qpair failed and we were unable to recover it. 00:27:26.882 [2024-07-24 18:22:19.633428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.882 [2024-07-24 18:22:19.633440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.882 qpair failed and we were unable to recover it. 00:27:26.882 [2024-07-24 18:22:19.633544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.882 [2024-07-24 18:22:19.633554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.882 qpair failed and we were unable to recover it. 00:27:26.882 [2024-07-24 18:22:19.633784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.882 [2024-07-24 18:22:19.633794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.882 qpair failed and we were unable to recover it. 
00:27:26.882 [2024-07-24 18:22:19.633880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.882 [2024-07-24 18:22:19.633890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.882 qpair failed and we were unable to recover it. 00:27:26.882 [2024-07-24 18:22:19.633947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.882 [2024-07-24 18:22:19.633957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.882 qpair failed and we were unable to recover it. 00:27:26.882 [2024-07-24 18:22:19.634093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.882 [2024-07-24 18:22:19.634104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.882 qpair failed and we were unable to recover it. 00:27:26.882 [2024-07-24 18:22:19.634246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.882 [2024-07-24 18:22:19.634256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.882 qpair failed and we were unable to recover it. 00:27:26.882 [2024-07-24 18:22:19.634353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.882 [2024-07-24 18:22:19.634365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.882 qpair failed and we were unable to recover it. 00:27:26.882 [2024-07-24 18:22:19.634587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.882 [2024-07-24 18:22:19.634597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.882 qpair failed and we were unable to recover it. 00:27:26.882 [2024-07-24 18:22:19.634691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.882 [2024-07-24 18:22:19.634701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.882 qpair failed and we were unable to recover it. 00:27:26.882 [2024-07-24 18:22:19.634883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.882 [2024-07-24 18:22:19.634893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.882 qpair failed and we were unable to recover it. 00:27:26.882 [2024-07-24 18:22:19.635049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.882 [2024-07-24 18:22:19.635059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.882 qpair failed and we were unable to recover it. 00:27:26.882 [2024-07-24 18:22:19.635198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.882 [2024-07-24 18:22:19.635208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.882 qpair failed and we were unable to recover it. 
00:27:26.882 [2024-07-24 18:22:19.635285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.882 [2024-07-24 18:22:19.635295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.882 qpair failed and we were unable to recover it. 00:27:26.882 [2024-07-24 18:22:19.635436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.882 [2024-07-24 18:22:19.635446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.882 qpair failed and we were unable to recover it. 00:27:26.882 [2024-07-24 18:22:19.635604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.882 [2024-07-24 18:22:19.635615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.882 qpair failed and we were unable to recover it. 00:27:26.882 [2024-07-24 18:22:19.635767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.882 [2024-07-24 18:22:19.635776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.882 qpair failed and we were unable to recover it. 00:27:26.882 [2024-07-24 18:22:19.635911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.882 [2024-07-24 18:22:19.635921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.882 qpair failed and we were unable to recover it. 00:27:26.882 [2024-07-24 18:22:19.636001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.882 [2024-07-24 18:22:19.636012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.882 qpair failed and we were unable to recover it. 00:27:26.882 [2024-07-24 18:22:19.636167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.882 [2024-07-24 18:22:19.636177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.882 qpair failed and we were unable to recover it. 00:27:26.882 [2024-07-24 18:22:19.636352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.882 [2024-07-24 18:22:19.636362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.882 qpair failed and we were unable to recover it. 00:27:26.882 [2024-07-24 18:22:19.636507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.882 [2024-07-24 18:22:19.636517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.883 qpair failed and we were unable to recover it. 00:27:26.883 [2024-07-24 18:22:19.636602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.883 [2024-07-24 18:22:19.636612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.883 qpair failed and we were unable to recover it. 
00:27:26.883 [2024-07-24 18:22:19.636754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.883 [2024-07-24 18:22:19.636763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.883 qpair failed and we were unable to recover it. 00:27:26.883 [2024-07-24 18:22:19.636862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.883 [2024-07-24 18:22:19.636872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.883 qpair failed and we were unable to recover it. 00:27:26.883 [2024-07-24 18:22:19.637034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.883 [2024-07-24 18:22:19.637044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.883 qpair failed and we were unable to recover it. 00:27:26.883 [2024-07-24 18:22:19.637188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.883 [2024-07-24 18:22:19.637197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.883 qpair failed and we were unable to recover it. 00:27:26.883 [2024-07-24 18:22:19.637340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.883 [2024-07-24 18:22:19.637350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.883 qpair failed and we were unable to recover it. 00:27:26.883 [2024-07-24 18:22:19.637499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.883 [2024-07-24 18:22:19.637510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.883 qpair failed and we were unable to recover it. 00:27:26.883 [2024-07-24 18:22:19.637760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.883 [2024-07-24 18:22:19.637770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.883 qpair failed and we were unable to recover it. 00:27:26.883 [2024-07-24 18:22:19.637941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.883 [2024-07-24 18:22:19.637951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.883 qpair failed and we were unable to recover it. 00:27:26.883 [2024-07-24 18:22:19.638101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.883 [2024-07-24 18:22:19.638112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.883 qpair failed and we were unable to recover it. 00:27:26.883 [2024-07-24 18:22:19.638207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.883 [2024-07-24 18:22:19.638216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.883 qpair failed and we were unable to recover it. 
00:27:26.883 [2024-07-24 18:22:19.638377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.883 [2024-07-24 18:22:19.638387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f783c000b90 with addr=10.0.0.2, port=4420 00:27:26.883 qpair failed and we were unable to recover it.
[The three-line sequence above (a posix_sock_create connect() failure with errno = 111, the matching nvme_tcp_qpair_connect_sock error for tqpair=0x7f783c000b90 against 10.0.0.2:4420, and "qpair failed and we were unable to recover it.") repeats roughly 130 times in total with advancing timestamps from 18:22:19.638 through 18:22:19.659; the duplicate repetitions are elided here.]
00:27:26.887 [2024-07-24 18:22:19.659807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.887 [2024-07-24 18:22:19.659885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15ebff0 with addr=10.0.0.2, port=4420 [2024-07-24 18:22:19.659916] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ebff0 is same with the state(5) to be set 00:27:26.887 [2024-07-24 18:22:19.659955] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ebff0 (9): Bad file descriptor 00:27:26.887 [2024-07-24 18:22:19.659982] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:26.887 [2024-07-24 18:22:19.660002] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:26.887 [2024-07-24 18:22:19.660025] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:26.887 Unable to reset the controller.
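For context on the storm above: errno = 111 on Linux is ECONNREFUSED, meaning the host's connect() reached 10.0.0.2 but nothing was accepting on port 4420 while the target was held down, which is precisely the condition this disconnect test provokes; the retry loop then runs until nvme_ctrlr gives up ("Unable to reset the controller."). A minimal sketch of probing the listener by hand, outside the harness (the /dev/tcp probe and the trailing rpc.py call are illustrative assumptions, not commands from the traced script):

    #!/usr/bin/env bash
    # errno 111 (ECONNREFUSED): the address is reachable but no socket is
    # listening on the port, so the TCP SYN is answered with a reset.
    if (exec 3<>/dev/tcp/10.0.0.2/4420) 2>/dev/null; then
        echo "listener up: something is accepting on 10.0.0.2:4420"
    else
        echo "connection refused: connect() would fail with errno = 111"
    fi
    # Once nvmf_tgt is back up, its subsystems and listeners can be listed
    # over RPC (path assumes an SPDK checkout like the workspace in this run):
    # ./scripts/rpc.py nvmf_get_subsystems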
00:27:27.146 18:22:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:27.146 18:22:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:27:27.146 18:22:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:27.146 18:22:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:27.146 18:22:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:27.146 18:22:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:27.146 18:22:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:27.146 18:22:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.146 18:22:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:27.146 Malloc0 00:27:27.146 18:22:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.146 18:22:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:27:27.146 18:22:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.146 18:22:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:27.146 [2024-07-24 18:22:20.150638] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:27.146 18:22:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.146 18:22:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:27.146 18:22:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.146 18:22:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:27.146 18:22:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.146 18:22:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:27.146 18:22:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.146 18:22:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:27.146 18:22:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:27:27.146 18:22:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:27.146 18:22:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.146 18:22:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:27.146 [2024-07-24 18:22:20.175569] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:27.146 18:22:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.146 18:22:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:27.146 18:22:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.146 18:22:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:27.146 18:22:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.146 18:22:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3563162 00:27:27.714 Controller properly reset. 00:27:32.983 Initializing NVMe Controllers 00:27:32.983 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:32.983 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:32.983 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:27:32.983 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:27:32.983 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:27:32.983 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:27:32.983 Initialization complete. Launching workers.
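For reference, the target-side bring-up traced above reduces to one app start plus six RPCs; rpc_cmd in the trace is the autotest wrapper that forwards these to the target's RPC socket, equivalent to scripts/rpc.py calls. A consolidated sketch, where the binary path, core mask, and sleep are assumptions about a local SPDK checkout rather than values from this run (the RPC names and arguments are verbatim from the lines above):

    #!/usr/bin/env bash
    set -e
    # Start the NVMe-oF target (assumed path and mask; the harness starts it elsewhere).
    ./build/bin/nvmf_tgt -m 0x3 &
    sleep 2   # crude stand-in for the harness's waitforlisten
    # 64 MB malloc bdev with 512-byte blocks, used as the subsystem's namespace.
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_transport -t tcp -o
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420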
00:27:32.983 Starting thread on core 1 00:27:32.983 Starting thread on core 2 00:27:32.983 Starting thread on core 3 00:27:32.983 Starting thread on core 0 00:27:32.983 18:22:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:27:32.983 00:27:32.983 real 0m11.137s 00:27:32.983 user 0m36.883s 00:27:32.983 sys 0m5.678s 00:27:32.983 18:22:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:32.983 18:22:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:32.983 ************************************ 00:27:32.983 END TEST nvmf_target_disconnect_tc2 00:27:32.983 ************************************ 00:27:32.983 18:22:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:27:32.983 18:22:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:27:32.983 18:22:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:27:32.983 18:22:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:32.983 18:22:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:27:32.983 18:22:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:32.983 18:22:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:27:32.983 18:22:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:32.983 18:22:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:32.983 rmmod nvme_tcp 00:27:32.983 rmmod nvme_fabrics 00:27:32.983 rmmod nvme_keyring 00:27:32.983 18:22:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:32.983 18:22:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:27:32.983 18:22:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:27:32.983 18:22:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 3563796 ']' 00:27:32.983 18:22:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 3563796 00:27:32.983 18:22:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # '[' -z 3563796 ']' 00:27:32.983 18:22:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # kill -0 3563796 00:27:32.983 18:22:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # uname 00:27:32.984 18:22:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:32.984 18:22:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3563796 00:27:32.984 18:22:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_4 00:27:32.984 18:22:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_4 = sudo ']' 00:27:32.984 18:22:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3563796' 00:27:32.984 killing process with pid 3563796
00:27:32.984 18:22:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@969 -- # kill 3563796 00:27:32.984 18:22:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@974 -- # wait 3563796 00:27:32.984 18:22:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:32.984 18:22:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:32.984 18:22:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:32.984 18:22:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:32.984 18:22:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:32.984 18:22:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:32.984 18:22:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:32.984 18:22:25 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:34.888 18:22:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:34.888 00:27:34.888 real 0m19.335s 00:27:34.888 user 1m3.256s 00:27:34.888 sys 0m10.344s 00:27:34.888 18:22:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:34.888 18:22:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:34.888 ************************************ 00:27:34.888 END TEST nvmf_target_disconnect 00:27:34.888 ************************************ 00:27:34.888 18:22:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:27:34.888 00:27:34.888 real 5m49.101s 00:27:34.888 user 11m3.752s 00:27:34.888 sys 1m50.347s 00:27:34.888 18:22:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:34.888 18:22:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.888 ************************************ 00:27:34.888 END TEST nvmf_host 00:27:34.888 ************************************ 00:27:34.888 00:27:34.888 real 20m57.476s 00:27:34.888 user 45m17.178s 00:27:34.888 sys 6m26.133s 00:27:34.888 18:22:27 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:34.888 18:22:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:34.888 ************************************ 00:27:34.888 END TEST nvmf_tcp 00:27:34.888 ************************************ 00:27:34.888 18:22:27 -- spdk/autotest.sh@292 -- # [[ 0 -eq 0 ]] 00:27:34.888 18:22:27 -- spdk/autotest.sh@293 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:27:34.888 18:22:27 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:34.888 18:22:27 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:34.888 18:22:27 -- common/autotest_common.sh@10 -- # set +x 00:27:34.888 ************************************ 00:27:34.888 START TEST spdkcli_nvmf_tcp 00:27:34.888 ************************************ 00:27:34.888 18:22:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:27:35.147 * Looking for test storage...
00:27:35.147 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:27:35.147 18:22:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:27:35.147 18:22:28 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:27:35.147 18:22:28 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:27:35.147 18:22:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:35.147 18:22:28 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:27:35.147 18:22:28 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:35.147 18:22:28 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:35.147 18:22:28 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:35.147 18:22:28 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:35.147 18:22:28 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:35.147 18:22:28 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:35.147 18:22:28 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:35.147 18:22:28 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:35.147 18:22:28 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:35.147 18:22:28 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:35.147 18:22:28 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:27:35.147 18:22:28 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:27:35.147 18:22:28 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:35.147 18:22:28 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:35.147 18:22:28 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:35.147 18:22:28 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:35.147 18:22:28 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:35.147 18:22:28 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:35.147 18:22:28 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:35.147 18:22:28 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:35.147 18:22:28 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:35.147 18:22:28 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:35.147 18:22:28 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:35.147 18:22:28 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:27:35.147 18:22:28 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:35.147 18:22:28 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:27:35.147 18:22:28 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:35.147 18:22:28 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:35.147 18:22:28 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:35.147 18:22:28 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:35.147 18:22:28 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:35.147 18:22:28 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:35.147 18:22:28 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:35.147 18:22:28 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:35.147 18:22:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:27:35.147 18:22:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:27:35.147 18:22:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:27:35.147 18:22:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:27:35.147 18:22:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:35.147 18:22:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:35.147 18:22:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:27:35.147 18:22:28 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3565388 00:27:35.147 18:22:28 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 3565388 00:27:35.147 18:22:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # '[' -z 3565388 ']' 00:27:35.147 18:22:28 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:27:35.147 18:22:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:35.147 18:22:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # local max_retries=100
00:27:35.147 18:22:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:35.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:35.147 18:22:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:35.147 18:22:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:35.147 [2024-07-24 18:22:28.114807] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 00:27:35.147 [2024-07-24 18:22:28.114856] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3565388 ] 00:27:35.147 EAL: No free 2048 kB hugepages reported on node 1 00:27:35.147 [2024-07-24 18:22:28.169404] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:35.405 [2024-07-24 18:22:28.242852] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:35.405 [2024-07-24 18:22:28.242854] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:35.971 18:22:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:35.971 18:22:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # return 0 00:27:35.971 18:22:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:27:35.971 18:22:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:35.971 18:22:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:35.971 18:22:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:27:35.971 18:22:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:27:35.971 18:22:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:27:35.971 18:22:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:35.971 18:22:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:35.971 18:22:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:27:35.971 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:27:35.971 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:27:35.971 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:27:35.971 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:27:35.971 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:27:35.971 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:27:35.971 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:27:35.971 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:27:35.971 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:27:35.971 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:27:35.971 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:35.971 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:27:35.971
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:27:35.971 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:35.971 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:27:35.971 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:27:35.971 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:27:35.971 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:27:35.971 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:35.971 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:27:35.971 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:27:35.971 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:27:35.971 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:27:35.971 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:35.971 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:27:35.971 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:27:35.971 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:27:35.971 ' 00:27:38.504 [2024-07-24 18:22:31.309699] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:39.440 [2024-07-24 18:22:32.485650] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:27:41.972 [2024-07-24 18:22:34.648206] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:27:43.876 [2024-07-24 18:22:36.505941] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:27:45.251 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:27:45.251 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:27:45.251 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:27:45.251 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:27:45.251 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:27:45.251 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:27:45.251 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:27:45.251 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:27:45.251 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:27:45.251 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:27:45.251 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:45.251 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:45.251 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:27:45.251 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:45.251 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:45.251 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:27:45.251 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:45.251 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:27:45.251 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:27:45.251 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:45.251 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:27:45.251 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:27:45.251 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:27:45.251 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:27:45.251 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:45.251 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:27:45.251 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:27:45.251 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:27:45.251 18:22:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:27:45.251 18:22:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:45.251 18:22:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:45.251 18:22:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:27:45.251 18:22:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:45.251 18:22:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:45.251 18:22:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:27:45.251 18:22:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:27:45.510 18:22:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:27:45.510 18:22:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:27:45.510 18:22:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:27:45.510 18:22:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:45.510 18:22:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:45.510 18:22:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:27:45.510 18:22:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:45.510 18:22:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:45.510 18:22:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:27:45.510 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:27:45.510 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:27:45.510 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:27:45.510 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:27:45.510 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:27:45.510 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:27:45.510 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:27:45.510 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:27:45.510 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:27:45.510 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:27:45.510 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:27:45.510 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:27:45.510 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:27:45.510 ' 00:27:50.778 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:27:50.778 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:27:50.778 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:27:50.778 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:27:50.778 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:27:50.778 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:27:50.778 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:27:50.778 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:27:50.778 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:27:50.778 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:27:50.778 Executing command: ['/bdevs/malloc delete Malloc4', 
'Malloc4', False] 00:27:50.778 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:27:50.778 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:27:50.778 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:27:50.778 18:22:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:27:50.778 18:22:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:50.778 18:22:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:50.778 18:22:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 3565388 00:27:50.778 18:22:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 3565388 ']' 00:27:50.778 18:22:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 3565388 00:27:50.778 18:22:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # uname 00:27:50.778 18:22:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:50.778 18:22:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3565388 00:27:50.778 18:22:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:50.778 18:22:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:50.778 18:22:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3565388' 00:27:50.778 killing process with pid 3565388 00:27:50.778 18:22:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@969 -- # kill 3565388 00:27:50.778 18:22:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@974 -- # wait 3565388 00:27:50.778 18:22:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:27:50.778 18:22:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:27:50.778 18:22:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 3565388 ']' 00:27:50.778 18:22:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 3565388 00:27:50.778 18:22:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 3565388 ']' 00:27:50.778 18:22:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 3565388 00:27:50.778 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3565388) - No such process 00:27:50.778 18:22:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@977 -- # echo 'Process with pid 3565388 is not found' 00:27:50.778 Process with pid 3565388 is not found 00:27:50.778 18:22:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:27:50.778 18:22:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:27:50.778 18:22:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:27:50.778 00:27:50.778 real 0m15.878s 00:27:50.778 user 0m32.981s 00:27:50.778 sys 0m0.739s 00:27:50.778 18:22:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:50.778 18:22:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:50.778 ************************************ 00:27:50.778 END TEST spdkcli_nvmf_tcp 00:27:50.778 ************************************ 00:27:50.778 18:22:43 -- spdk/autotest.sh@294 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:27:50.778 18:22:43 -- 
common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:50.778 18:22:43 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:50.778 18:22:43 -- common/autotest_common.sh@10 -- # set +x 00:27:51.038 ************************************ 00:27:51.038 START TEST nvmf_identify_passthru 00:27:51.038 ************************************ 00:27:51.038 18:22:43 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:27:51.038 * Looking for test storage... 00:27:51.038 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:51.038 18:22:43 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:51.038 18:22:43 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:27:51.038 18:22:43 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:51.038 18:22:43 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:51.038 18:22:43 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:51.038 18:22:43 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:51.038 18:22:43 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:51.038 18:22:43 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:51.038 18:22:43 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:51.038 18:22:43 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:51.038 18:22:43 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:51.038 18:22:43 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:51.038 18:22:43 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:27:51.038 18:22:43 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:27:51.038 18:22:43 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:51.038 18:22:43 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:51.038 18:22:43 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:51.038 18:22:43 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:51.038 18:22:43 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:51.038 18:22:43 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:51.038 18:22:43 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:51.038 18:22:43 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:51.038 18:22:43 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:51.038 18:22:43 nvmf_identify_passthru -- paths/export.sh@3 -- 
# PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:51.038 18:22:43 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:51.038 18:22:43 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:27:51.038 18:22:43 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:51.038 18:22:43 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:27:51.038 18:22:43 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:51.038 18:22:43 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:51.038 18:22:43 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:51.038 18:22:43 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:51.038 18:22:43 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:51.038 18:22:43 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:51.038 18:22:43 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:51.038 18:22:43 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:51.038 18:22:43 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:51.038 18:22:43 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:51.038 18:22:43 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:51.038 18:22:43 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:51.038 18:22:43 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:51.038 18:22:43 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:51.038 18:22:43 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:51.038 18:22:43 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:27:51.038 18:22:43 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:51.038 18:22:43 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:27:51.038 18:22:43 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:51.038 18:22:43 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:51.038 18:22:43 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:51.038 18:22:43 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:51.038 18:22:43 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:51.038 18:22:43 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:51.038 18:22:43 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:51.038 18:22:43 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:51.038 18:22:43 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:51.038 18:22:43 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:51.038 18:22:43 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:27:51.038 18:22:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:56.312 18:22:49 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:56.312 18:22:49 nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:27:56.312 18:22:49 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:56.312 18:22:49 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:56.312 18:22:49 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:56.312 18:22:49 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:56.312 18:22:49 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 
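The device scan that follows works from tables of known NIC device IDs (0x1592/0x159b for Intel E810, 0x37d2 for X722, plus a set of Mellanox ConnectX IDs) and resolves every matched PCI function to its kernel net device through sysfs, exactly as the pci_net_devs glob below shows. A condensed, standalone sketch of that lookup, assuming the standard Linux sysfs layout and using the two E810 functions this host reports further down:

# Hypothetical standalone version of the harness's NIC discovery; the PCI
# addresses are the ones reported by this machine later in the log.
net_devs=()
for pci in 0000:86:00.0 0000:86:00.1; do
    # Each PCI network function exposes its net device(s) under sysfs.
    for path in "/sys/bus/pci/devices/$pci/net/"*; do
        [[ -e "$path" ]] || continue
        echo "Found net devices under $pci: ${path##*/}"
        net_devs+=("${path##*/}")
    done
done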
00:27:56.313 18:22:49 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:27:56.313 18:22:49 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:56.313 18:22:49 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:27:56.313 18:22:49 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:27:56.313 18:22:49 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:27:56.313 18:22:49 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:27:56.313 18:22:49 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:27:56.313 18:22:49 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:27:56.313 18:22:49 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:56.313 18:22:49 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:56.313 18:22:49 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:56.313 18:22:49 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:56.313 18:22:49 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:56.313 18:22:49 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:56.313 18:22:49 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:56.313 18:22:49 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:56.313 18:22:49 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:56.313 18:22:49 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:56.313 18:22:49 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:56.313 18:22:49 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:56.313 18:22:49 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:56.313 18:22:49 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:56.313 18:22:49 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:56.313 18:22:49 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:56.313 18:22:49 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:56.313 18:22:49 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:56.313 18:22:49 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:56.313 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:56.313 18:22:49 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:56.313 18:22:49 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:56.313 18:22:49 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:56.313 18:22:49 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:56.313 18:22:49 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:56.313 18:22:49 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:56.313 18:22:49 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:56.313 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:56.313 18:22:49 nvmf_identify_passthru -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:56.313 18:22:49 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:56.313 18:22:49 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:56.313 18:22:49 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:56.313 18:22:49 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:56.313 18:22:49 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:56.313 18:22:49 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:56.313 18:22:49 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:56.313 18:22:49 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:56.313 18:22:49 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:56.313 18:22:49 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:56.313 18:22:49 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:56.313 18:22:49 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:56.313 18:22:49 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:56.313 18:22:49 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:56.313 18:22:49 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:56.313 Found net devices under 0000:86:00.0: cvl_0_0 00:27:56.313 18:22:49 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:56.313 18:22:49 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:56.313 18:22:49 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:56.313 18:22:49 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:56.313 18:22:49 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:56.313 18:22:49 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:56.313 18:22:49 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:56.313 18:22:49 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:56.313 18:22:49 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:56.313 Found net devices under 0000:86:00.1: cvl_0_1 00:27:56.313 18:22:49 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:56.313 18:22:49 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:56.313 18:22:49 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:27:56.313 18:22:49 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:56.313 18:22:49 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:56.313 18:22:49 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:56.313 18:22:49 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:56.313 18:22:49 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:56.313 18:22:49 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:56.313 18:22:49 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:56.313 18:22:49 nvmf_identify_passthru -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:56.313 18:22:49 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:56.313 18:22:49 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:56.313 18:22:49 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:56.313 18:22:49 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:56.313 18:22:49 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:56.313 18:22:49 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:56.313 18:22:49 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:56.313 18:22:49 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:56.313 18:22:49 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:56.313 18:22:49 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:56.313 18:22:49 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:56.313 18:22:49 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:56.573 18:22:49 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:56.573 18:22:49 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:56.573 18:22:49 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:56.573 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:56.573 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:27:56.573 00:27:56.573 --- 10.0.0.2 ping statistics --- 00:27:56.573 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:56.573 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:27:56.573 18:22:49 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:56.573 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:56.573 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.081 ms 00:27:56.573 00:27:56.573 --- 10.0.0.1 ping statistics --- 00:27:56.573 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:56.573 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:27:56.573 18:22:49 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:56.573 18:22:49 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:27:56.573 18:22:49 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:56.573 18:22:49 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:56.573 18:22:49 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:56.573 18:22:49 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:56.573 18:22:49 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:56.573 18:22:49 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:56.573 18:22:49 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:56.573 18:22:49 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:27:56.573 18:22:49 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:56.573 18:22:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:56.573 18:22:49 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:27:56.573 18:22:49 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:27:56.573 18:22:49 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:27:56.573 18:22:49 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:27:56.573 18:22:49 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:27:56.573 18:22:49 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:27:56.573 18:22:49 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:27:56.573 18:22:49 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:27:56.573 18:22:49 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:27:56.573 18:22:49 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:27:56.573 18:22:49 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:27:56.573 18:22:49 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5f:00.0 00:27:56.573 18:22:49 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:5f:00.0 00:27:56.573 18:22:49 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5f:00.0 00:27:56.573 18:22:49 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5f:00.0 ']' 00:27:56.573 18:22:49 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:27:56.573 18:22:49 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5f:00.0' -i 0 00:27:56.573 18:22:49 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:27:56.573 EAL: No free 2048 kB hugepages reported on node 1 00:28:01.847 
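The identify step here reads the controller data of the local PCIe drive; the same extraction is repeated later against the NVMe/TCP endpoint so the two serial and model strings can be compared to prove passthru works. A minimal sketch of the extraction, using the bdf just discovered (spdk_nvme_identify prints "Serial Number: <value>" and "Model Number: <value>" lines, so awk's third field is the value):

# Sketch of the serial/model extraction the test performs over PCIe.
bdf=0000:5f:00.0
identify=./build/bin/spdk_nvme_identify
serial=$("$identify" -r "trtype:PCIe traddr:$bdf" -i 0 | grep 'Serial Number:' | awk '{print $3}')
model=$("$identify" -r "trtype:PCIe traddr:$bdf" -i 0 | grep 'Model Number:' | awk '{print $3}')
echo "serial=$serial model=$model"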
18:22:54 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLN025500DK1P6AGN 00:28:01.847 18:22:54 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5f:00.0' -i 0 00:28:01.847 18:22:54 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:28:01.847 18:22:54 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:28:01.847 EAL: No free 2048 kB hugepages reported on node 1 00:28:06.035 18:22:59 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:28:06.035 18:22:59 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:28:06.035 18:22:59 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:06.035 18:22:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:06.293 18:22:59 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:28:06.293 18:22:59 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:06.293 18:22:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:06.293 18:22:59 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=3572427 00:28:06.293 18:22:59 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:28:06.293 18:22:59 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:06.293 18:22:59 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 3572427 00:28:06.293 18:22:59 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # '[' -z 3572427 ']' 00:28:06.293 18:22:59 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:06.293 18:22:59 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:06.293 18:22:59 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:06.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:06.293 18:22:59 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:06.293 18:22:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:06.293 [2024-07-24 18:22:59.191176] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 00:28:06.293 [2024-07-24 18:22:59.191223] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:06.293 EAL: No free 2048 kB hugepages reported on node 1 00:28:06.293 [2024-07-24 18:22:59.248483] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:06.293 [2024-07-24 18:22:59.328155] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:06.294 [2024-07-24 18:22:59.328196] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
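With the target parked at --wait-for-rpc, the harness drives the remaining setup entirely over the /var/tmp/spdk.sock JSON-RPC socket via rpc_cmd. Outside the harness, the same sequence could be issued by hand with scripts/rpc.py; a sketch of the calls that follow in this log, assuming a local spdk checkout and the same netns as the target:

# Hand-driven equivalent of the RPC sequence below.
rpc=./scripts/rpc.py
$rpc nvmf_set_config --passthru-identify-ctrlr   # enable custom identify handler
$rpc framework_start_init                        # leave the --wait-for-rpc state
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5f:00.0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420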
00:28:06.294 [2024-07-24 18:22:59.328203] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:06.294 [2024-07-24 18:22:59.328209] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:06.294 [2024-07-24 18:22:59.328214] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:06.294 [2024-07-24 18:22:59.328255] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:06.294 [2024-07-24 18:22:59.328350] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:06.294 [2024-07-24 18:22:59.328436] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:06.294 [2024-07-24 18:22:59.328437] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:07.229 18:22:59 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:07.229 18:22:59 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # return 0 00:28:07.229 18:22:59 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:28:07.229 18:22:59 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.229 18:22:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:07.229 INFO: Log level set to 20 00:28:07.229 INFO: Requests: 00:28:07.229 { 00:28:07.229 "jsonrpc": "2.0", 00:28:07.229 "method": "nvmf_set_config", 00:28:07.229 "id": 1, 00:28:07.229 "params": { 00:28:07.229 "admin_cmd_passthru": { 00:28:07.229 "identify_ctrlr": true 00:28:07.229 } 00:28:07.229 } 00:28:07.229 } 00:28:07.229 00:28:07.229 INFO: response: 00:28:07.229 { 00:28:07.229 "jsonrpc": "2.0", 00:28:07.229 "id": 1, 00:28:07.229 "result": true 00:28:07.229 } 00:28:07.229 00:28:07.229 18:23:00 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.229 18:23:00 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:28:07.229 18:23:00 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.229 18:23:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:07.229 INFO: Setting log level to 20 00:28:07.229 INFO: Setting log level to 20 00:28:07.229 INFO: Log level set to 20 00:28:07.229 INFO: Log level set to 20 00:28:07.229 INFO: Requests: 00:28:07.229 { 00:28:07.229 "jsonrpc": "2.0", 00:28:07.229 "method": "framework_start_init", 00:28:07.229 "id": 1 00:28:07.229 } 00:28:07.229 00:28:07.229 INFO: Requests: 00:28:07.229 { 00:28:07.229 "jsonrpc": "2.0", 00:28:07.229 "method": "framework_start_init", 00:28:07.229 "id": 1 00:28:07.229 } 00:28:07.229 00:28:07.229 [2024-07-24 18:23:00.090356] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:28:07.229 INFO: response: 00:28:07.229 { 00:28:07.229 "jsonrpc": "2.0", 00:28:07.229 "id": 1, 00:28:07.229 "result": true 00:28:07.229 } 00:28:07.229 00:28:07.229 INFO: response: 00:28:07.229 { 00:28:07.229 "jsonrpc": "2.0", 00:28:07.229 "id": 1, 00:28:07.229 "result": true 00:28:07.229 } 00:28:07.229 00:28:07.229 18:23:00 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.229 18:23:00 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:07.229 18:23:00 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.229 18:23:00 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:28:07.229 INFO: Setting log level to 40 00:28:07.229 INFO: Setting log level to 40 00:28:07.229 INFO: Setting log level to 40 00:28:07.229 [2024-07-24 18:23:00.099903] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:07.229 18:23:00 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.229 18:23:00 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:28:07.229 18:23:00 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:07.229 18:23:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:07.229 18:23:00 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5f:00.0 00:28:07.229 18:23:00 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.229 18:23:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:10.522 Nvme0n1 00:28:10.522 18:23:02 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.522 18:23:02 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:28:10.522 18:23:02 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.522 18:23:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:10.522 18:23:02 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.522 18:23:02 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:28:10.522 18:23:02 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.522 18:23:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:10.522 18:23:02 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.522 18:23:02 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:10.522 18:23:02 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.522 18:23:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:10.522 [2024-07-24 18:23:02.992366] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:10.522 18:23:02 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.522 18:23:02 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:28:10.522 18:23:02 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.522 18:23:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:10.522 [ 00:28:10.522 { 00:28:10.522 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:10.522 "subtype": "Discovery", 00:28:10.522 "listen_addresses": [], 00:28:10.522 "allow_any_host": true, 00:28:10.522 "hosts": [] 00:28:10.522 }, 00:28:10.522 { 00:28:10.522 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:10.522 "subtype": "NVMe", 00:28:10.522 "listen_addresses": [ 00:28:10.522 { 00:28:10.522 "trtype": "TCP", 00:28:10.522 "adrfam": "IPv4", 00:28:10.522 "traddr": "10.0.0.2", 00:28:10.522 "trsvcid": "4420" 00:28:10.522 } 00:28:10.522 ], 00:28:10.522 "allow_any_host": true, 00:28:10.522 "hosts": [], 00:28:10.522 "serial_number": 
"SPDK00000000000001", 00:28:10.522 "model_number": "SPDK bdev Controller", 00:28:10.522 "max_namespaces": 1, 00:28:10.522 "min_cntlid": 1, 00:28:10.522 "max_cntlid": 65519, 00:28:10.522 "namespaces": [ 00:28:10.522 { 00:28:10.522 "nsid": 1, 00:28:10.522 "bdev_name": "Nvme0n1", 00:28:10.522 "name": "Nvme0n1", 00:28:10.522 "nguid": "11179CEB0B44447EAB49E869F12618F6", 00:28:10.522 "uuid": "11179ceb-0b44-447e-ab49-e869f12618f6" 00:28:10.522 } 00:28:10.522 ] 00:28:10.522 } 00:28:10.522 ] 00:28:10.522 18:23:03 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.522 18:23:03 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:10.522 18:23:03 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:28:10.522 18:23:03 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:28:10.522 EAL: No free 2048 kB hugepages reported on node 1 00:28:10.522 18:23:03 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLN025500DK1P6AGN 00:28:10.522 18:23:03 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:28:10.522 18:23:03 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:10.522 18:23:03 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:28:10.522 EAL: No free 2048 kB hugepages reported on node 1 00:28:10.522 18:23:03 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:28:10.522 18:23:03 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLN025500DK1P6AGN '!=' PHLN025500DK1P6AGN ']' 00:28:10.522 18:23:03 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:28:10.522 18:23:03 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:10.522 18:23:03 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.522 18:23:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:10.522 18:23:03 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.522 18:23:03 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:28:10.522 18:23:03 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:28:10.522 18:23:03 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:10.522 18:23:03 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:28:10.522 18:23:03 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:10.522 18:23:03 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:28:10.522 18:23:03 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:10.522 18:23:03 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:10.522 rmmod nvme_tcp 00:28:10.522 rmmod nvme_fabrics 00:28:10.522 rmmod nvme_keyring 00:28:10.522 18:23:03 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:10.784 18:23:03 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:28:10.784 18:23:03 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:28:10.784 18:23:03 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 3572427 ']' 00:28:10.784 18:23:03 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 3572427 00:28:10.784 18:23:03 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # '[' -z 3572427 ']' 00:28:10.784 18:23:03 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # kill -0 3572427 00:28:10.784 18:23:03 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # uname 00:28:10.784 18:23:03 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:10.784 18:23:03 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3572427 00:28:10.784 18:23:03 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:10.784 18:23:03 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:10.784 18:23:03 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3572427' 00:28:10.784 killing process with pid 3572427 00:28:10.784 18:23:03 nvmf_identify_passthru -- common/autotest_common.sh@969 -- # kill 3572427 00:28:10.784 18:23:03 nvmf_identify_passthru -- common/autotest_common.sh@974 -- # wait 3572427 00:28:12.688 18:23:05 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:12.688 18:23:05 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:12.688 18:23:05 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:12.688 18:23:05 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:12.688 18:23:05 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:12.688 18:23:05 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:12.688 18:23:05 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:12.688 18:23:05 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:15.223 18:23:07 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:15.223 00:28:15.223 real 0m23.911s 00:28:15.223 user 0m34.085s 00:28:15.223 sys 0m5.083s 00:28:15.223 18:23:07 nvmf_identify_passthru -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:15.223 18:23:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:15.223 ************************************ 00:28:15.223 END TEST nvmf_identify_passthru 00:28:15.223 ************************************ 00:28:15.223 18:23:07 -- spdk/autotest.sh@296 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:28:15.223 18:23:07 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:15.223 18:23:07 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:15.223 18:23:07 -- common/autotest_common.sh@10 -- # set +x 00:28:15.223 ************************************ 00:28:15.223 START TEST nvmf_dif 00:28:15.223 ************************************ 00:28:15.223 18:23:07 nvmf_dif -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:28:15.223 * Looking for test storage... 
00:28:15.223 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:15.223 18:23:07 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:15.223 18:23:07 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:28:15.223 18:23:07 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:15.223 18:23:07 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:15.223 18:23:07 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:15.223 18:23:07 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:15.223 18:23:07 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:15.223 18:23:07 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:15.223 18:23:07 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:15.223 18:23:07 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:15.223 18:23:07 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:15.223 18:23:07 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:15.223 18:23:07 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:28:15.223 18:23:07 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:28:15.223 18:23:07 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:15.223 18:23:07 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:15.223 18:23:07 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:15.223 18:23:07 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:15.223 18:23:07 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:15.223 18:23:07 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:15.223 18:23:07 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:15.223 18:23:07 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:15.223 18:23:07 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:15.223 18:23:07 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:15.223 18:23:07 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:15.223 18:23:07 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:28:15.223 18:23:07 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:15.223 18:23:07 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:28:15.223 18:23:07 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:15.223 18:23:07 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:15.223 18:23:07 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:15.223 18:23:07 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:15.223 18:23:07 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:15.223 18:23:07 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:15.223 18:23:07 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:15.223 18:23:07 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:15.223 18:23:07 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:28:15.223 18:23:07 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:28:15.223 18:23:07 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:28:15.223 18:23:07 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:28:15.223 18:23:07 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:28:15.223 18:23:07 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:15.223 18:23:07 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:15.223 18:23:07 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:15.223 18:23:07 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:15.223 18:23:07 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:15.223 18:23:07 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:15.223 18:23:07 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:15.223 18:23:07 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:15.223 18:23:07 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:15.223 18:23:07 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:15.223 18:23:07 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:28:15.223 18:23:07 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:20.518 18:23:13 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:20.518 18:23:13 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:28:20.518 18:23:13 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:20.518 18:23:13 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:20.518 18:23:13 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:20.518 18:23:13 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:20.518 18:23:13 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:20.518 18:23:13 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:28:20.518 18:23:13 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:20.518 18:23:13 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:28:20.518 18:23:13 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:28:20.518 18:23:13 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:28:20.518 18:23:13 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:28:20.518 18:23:13 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:28:20.518 18:23:13 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:28:20.518 18:23:13 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:20.518 18:23:13 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:20.518 18:23:13 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:20.518 18:23:13 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:20.518 18:23:13 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:20.518 18:23:13 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:20.518 18:23:13 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:20.518 18:23:13 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:20.518 18:23:13 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:20.518 18:23:13 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:20.518 18:23:13 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:20.518 18:23:13 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:20.518 18:23:13 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:20.518 18:23:13 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:20.518 18:23:13 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:20.518 18:23:13 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:20.518 18:23:13 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:20.518 18:23:13 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:20.518 18:23:13 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:20.518 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:20.518 18:23:13 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:20.518 18:23:13 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:20.518 18:23:13 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:20.518 18:23:13 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:20.518 18:23:13 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:20.518 18:23:13 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:20.518 18:23:13 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:20.518 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:20.518 18:23:13 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:20.518 18:23:13 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:20.518 18:23:13 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:20.518 18:23:13 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:20.518 18:23:13 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:20.518 18:23:13 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:20.518 18:23:13 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:20.518 18:23:13 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:20.518 18:23:13 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:20.518 18:23:13 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:20.518 18:23:13 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:20.518 18:23:13 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
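The block above is nvmf/common.sh walking the PCI bus: it collects candidate NICs by vendor:device ID (Intel E810 ports are 0x8086:0x1592/0x159b, X722 is 0x37d2, plus several Mellanox ConnectX IDs), then resolves each matching PCI address to its kernel net device through sysfs, as in the two "Found net devices under 0000:86:00.x" records that follow. A rough equivalent using standard tools — the real script reads a pre-built pci_bus_cache map rather than invoking lspci:

  # Enumerate E810 ports by vendor:device ID and map each to its net device.
  for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
    for dev in /sys/bus/pci/devices/$pci/net/*; do
      echo "Found net device under $pci: ${dev##*/}"
    done
  done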
00:28:20.518 18:23:13 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:20.518 18:23:13 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:20.518 18:23:13 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:20.518 18:23:13 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:20.518 Found net devices under 0000:86:00.0: cvl_0_0 00:28:20.518 18:23:13 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:20.518 18:23:13 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:20.518 18:23:13 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:20.518 18:23:13 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:20.518 18:23:13 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:20.518 18:23:13 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:20.518 18:23:13 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:20.518 18:23:13 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:20.518 18:23:13 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:20.518 Found net devices under 0000:86:00.1: cvl_0_1 00:28:20.518 18:23:13 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:20.518 18:23:13 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:20.518 18:23:13 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:28:20.518 18:23:13 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:20.518 18:23:13 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:20.518 18:23:13 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:20.518 18:23:13 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:20.518 18:23:13 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:20.518 18:23:13 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:20.518 18:23:13 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:20.518 18:23:13 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:20.518 18:23:13 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:20.518 18:23:13 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:20.518 18:23:13 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:20.518 18:23:13 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:20.518 18:23:13 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:20.518 18:23:13 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:20.518 18:23:13 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:20.518 18:23:13 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:20.518 18:23:13 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:20.518 18:23:13 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:20.518 18:23:13 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:20.518 18:23:13 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:20.518 18:23:13 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:20.518 18:23:13 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:20.518 18:23:13 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:20.519 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:20.519 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:28:20.519 00:28:20.519 --- 10.0.0.2 ping statistics --- 00:28:20.519 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:20.519 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:28:20.519 18:23:13 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:20.519 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:20.519 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:28:20.519 00:28:20.519 --- 10.0.0.1 ping statistics --- 00:28:20.519 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:20.519 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:28:20.519 18:23:13 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:20.519 18:23:13 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:28:20.519 18:23:13 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:28:20.519 18:23:13 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:23.047 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:28:23.047 0000:5f:00.0 (8086 0a54): Already using the vfio-pci driver 00:28:23.047 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:28:23.047 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:28:23.047 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:28:23.047 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:28:23.047 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:28:23.047 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:28:23.047 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:28:23.047 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:28:23.047 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:28:23.047 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:28:23.047 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:28:23.047 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:28:23.047 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:28:23.047 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:28:23.047 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:28:23.047 18:23:15 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:23.047 18:23:15 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:23.047 18:23:15 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:23.047 18:23:15 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:23.047 18:23:15 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:23.047 18:23:15 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:23.047 18:23:15 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:28:23.047 18:23:15 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:28:23.047 18:23:15 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:23.047 18:23:15 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:23.047 18:23:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:23.047 18:23:15 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=3578126 00:28:23.047 18:23:15 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 3578126 00:28:23.047 18:23:15 nvmf_dif -- nvmf/common.sh@480 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:28:23.047 18:23:15 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 3578126 ']' 00:28:23.047 18:23:15 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:23.047 18:23:15 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:23.047 18:23:15 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:23.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:23.047 18:23:15 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:23.047 18:23:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:23.047 [2024-07-24 18:23:15.891847] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 00:28:23.047 [2024-07-24 18:23:15.891889] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:23.047 EAL: No free 2048 kB hugepages reported on node 1 00:28:23.047 [2024-07-24 18:23:15.949317] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:23.047 [2024-07-24 18:23:16.027606] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:23.047 [2024-07-24 18:23:16.027640] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:23.047 [2024-07-24 18:23:16.027647] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:23.047 [2024-07-24 18:23:16.027656] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:23.047 [2024-07-24 18:23:16.027660] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
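At this point the harness has split the two E810 ports into a point-to-point topology: cvl_0_0 was moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2) while cvl_0_1 stayed in the default namespace as the initiator (10.0.0.1), and nvmf_tgt was launched inside the namespace (pid 3578126 above). A sketch of that startup sequence under those assumptions — the spdk_get_version poll loop stands in for the harness's own wait-for-listen helper, and paths are abbreviated relative to the spdk checkout:

  # Start the target inside the netns and wait for its RPC socket to answer.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
  until ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
    sleep 0.5
  done
  # dif.sh then creates the TCP transport with DIF insert/strip enabled,
  # exactly as traced above:
  ./scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip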
00:28:23.047 [2024-07-24 18:23:16.027677] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:24.011 18:23:16 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:24.011 18:23:16 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:28:24.011 18:23:16 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:24.011 18:23:16 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:24.011 18:23:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:24.011 18:23:16 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:24.011 18:23:16 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:28:24.011 18:23:16 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:28:24.011 18:23:16 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.011 18:23:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:24.011 [2024-07-24 18:23:16.737534] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:24.011 18:23:16 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.011 18:23:16 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:28:24.011 18:23:16 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:24.011 18:23:16 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:24.011 18:23:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:24.011 ************************************ 00:28:24.011 START TEST fio_dif_1_default 00:28:24.011 ************************************ 00:28:24.011 18:23:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:28:24.011 18:23:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:28:24.011 18:23:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:28:24.011 18:23:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:28:24.011 18:23:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:28:24.011 18:23:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:28:24.011 18:23:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:28:24.011 18:23:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.011 18:23:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:24.011 bdev_null0 00:28:24.011 18:23:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.011 18:23:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:24.011 18:23:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.011 18:23:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:24.011 18:23:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.011 18:23:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:24.011 18:23:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.011 18:23:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:24.011 18:23:16 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.011 18:23:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:24.011 18:23:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.011 18:23:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:24.011 [2024-07-24 18:23:16.805828] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:24.011 18:23:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.011 18:23:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:28:24.011 18:23:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:28:24.011 18:23:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:28:24.011 18:23:16 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:28:24.011 18:23:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:24.011 18:23:16 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:28:24.011 18:23:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:24.011 18:23:16 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:24.011 18:23:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:28:24.011 18:23:16 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:24.011 { 00:28:24.011 "params": { 00:28:24.011 "name": "Nvme$subsystem", 00:28:24.011 "trtype": "$TEST_TRANSPORT", 00:28:24.011 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:24.011 "adrfam": "ipv4", 00:28:24.011 "trsvcid": "$NVMF_PORT", 00:28:24.011 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:24.011 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:24.011 "hdgst": ${hdgst:-false}, 00:28:24.011 "ddgst": ${ddgst:-false} 00:28:24.011 }, 00:28:24.011 "method": "bdev_nvme_attach_controller" 00:28:24.011 } 00:28:24.011 EOF 00:28:24.011 )") 00:28:24.011 18:23:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:24.011 18:23:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:28:24.011 18:23:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:24.011 18:23:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:28:24.011 18:23:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:24.011 18:23:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:24.011 18:23:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:28:24.011 18:23:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:24.011 18:23:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:24.011 18:23:16 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:28:24.011 18:23:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:24.011 18:23:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:28:24.011 18:23:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:28:24.011 18:23:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:24.011 18:23:16 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:28:24.011 18:23:16 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:28:24.011 18:23:16 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:28:24.011 18:23:16 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:24.011 "params": { 00:28:24.011 "name": "Nvme0", 00:28:24.011 "trtype": "tcp", 00:28:24.011 "traddr": "10.0.0.2", 00:28:24.011 "adrfam": "ipv4", 00:28:24.011 "trsvcid": "4420", 00:28:24.011 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:24.012 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:24.012 "hdgst": false, 00:28:24.012 "ddgst": false 00:28:24.012 }, 00:28:24.012 "method": "bdev_nvme_attach_controller" 00:28:24.012 }' 00:28:24.012 18:23:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:24.012 18:23:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:24.012 18:23:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:24.012 18:23:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:24.012 18:23:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:24.012 18:23:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:24.012 18:23:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:24.012 18:23:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:24.012 18:23:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:24.012 18:23:16 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:24.277 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:28:24.277 fio-3.35 00:28:24.277 Starting 1 thread 00:28:24.277 EAL: No free 2048 kB hugepages reported on node 1 00:28:36.477 00:28:36.477 filename0: (groupid=0, jobs=1): err= 0: pid=3578504: Wed Jul 24 18:23:27 2024 00:28:36.477 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10014msec) 00:28:36.477 slat (nsec): min=5743, max=25527, avg=6119.87, stdev=1277.14 00:28:36.477 clat (usec): min=40806, max=47598, avg=41022.39, stdev=437.62 00:28:36.477 lat (usec): min=40811, max=47624, avg=41028.51, stdev=438.10 00:28:36.477 clat percentiles (usec): 00:28:36.477 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:28:36.477 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:28:36.477 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:28:36.477 | 99.00th=[42206], 99.50th=[42206], 99.90th=[47449], 99.95th=[47449], 00:28:36.477 | 99.99th=[47449] 00:28:36.477 bw ( KiB/s): min= 384, max= 416, per=99.52%, avg=388.80, stdev=11.72, samples=20 00:28:36.477 iops : min= 96, max= 104, 
avg=97.20, stdev= 2.93, samples=20 00:28:36.477 lat (msec) : 50=100.00% 00:28:36.477 cpu : usr=95.04%, sys=4.71%, ctx=9, majf=0, minf=232 00:28:36.477 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:36.477 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:36.477 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:36.477 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:36.477 latency : target=0, window=0, percentile=100.00%, depth=4 00:28:36.477 00:28:36.477 Run status group 0 (all jobs): 00:28:36.477 READ: bw=390KiB/s (399kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=3904KiB (3998kB), run=10014-10014msec 00:28:36.477 18:23:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:28:36.477 18:23:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:28:36.477 18:23:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:28:36.477 18:23:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:36.477 18:23:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:28:36.477 18:23:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:36.477 18:23:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.477 18:23:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:36.477 18:23:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.477 18:23:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:36.477 18:23:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.477 18:23:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:36.477 18:23:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.477 00:28:36.477 real 0m11.002s 00:28:36.477 user 0m16.168s 00:28:36.477 sys 0m0.764s 00:28:36.477 18:23:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:36.477 18:23:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:36.477 ************************************ 00:28:36.477 END TEST fio_dif_1_default 00:28:36.477 ************************************ 00:28:36.477 18:23:27 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:28:36.477 18:23:27 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:36.477 18:23:27 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:36.477 18:23:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:36.477 ************************************ 00:28:36.477 START TEST fio_dif_1_multi_subsystems 00:28:36.477 ************************************ 00:28:36.477 18:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:28:36.477 18:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:28:36.477 18:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:28:36.477 18:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:28:36.477 18:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:28:36.477 18:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # 
create_subsystem 0 00:28:36.477 18:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:28:36.477 18:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:28:36.477 18:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.477 18:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:36.477 bdev_null0 00:28:36.477 18:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.477 18:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:36.477 18:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.477 18:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:36.477 18:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.477 18:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:36.477 18:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.477 18:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:36.477 18:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.477 18:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:36.477 18:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.477 18:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:36.477 [2024-07-24 18:23:27.880062] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:36.477 18:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.477 18:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:28:36.477 18:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:28:36.477 18:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:28:36.477 18:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:28:36.477 18:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.477 18:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:36.477 bdev_null1 00:28:36.477 18:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.477 18:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:28:36.477 18:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.477 18:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:36.477 18:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.477 18:23:27 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:28:36.477 18:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.477 18:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:36.477 18:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.477 18:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:36.477 18:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.477 18:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:36.477 18:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.477 18:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:28:36.477 18:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:28:36.477 18:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:28:36.477 18:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:28:36.477 18:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:28:36.477 18:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:36.477 18:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:36.477 18:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:36.477 18:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:28:36.477 18:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:36.477 { 00:28:36.477 "params": { 00:28:36.477 "name": "Nvme$subsystem", 00:28:36.477 "trtype": "$TEST_TRANSPORT", 00:28:36.477 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:36.477 "adrfam": "ipv4", 00:28:36.477 "trsvcid": "$NVMF_PORT", 00:28:36.477 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:36.477 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:36.477 "hdgst": ${hdgst:-false}, 00:28:36.478 "ddgst": ${ddgst:-false} 00:28:36.478 }, 00:28:36.478 "method": "bdev_nvme_attach_controller" 00:28:36.478 } 00:28:36.478 EOF 00:28:36.478 )") 00:28:36.478 18:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:36.478 18:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:28:36.478 18:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:36.478 18:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:28:36.478 18:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:36.478 18:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:36.478 18:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1341 -- # shift 00:28:36.478 18:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:36.478 18:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:36.478 18:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:28:36.478 18:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:28:36.478 18:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:28:36.478 18:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:28:36.478 18:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:36.478 18:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:28:36.478 18:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:36.478 18:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:36.478 18:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:36.478 { 00:28:36.478 "params": { 00:28:36.478 "name": "Nvme$subsystem", 00:28:36.478 "trtype": "$TEST_TRANSPORT", 00:28:36.478 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:36.478 "adrfam": "ipv4", 00:28:36.478 "trsvcid": "$NVMF_PORT", 00:28:36.478 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:36.478 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:36.478 "hdgst": ${hdgst:-false}, 00:28:36.478 "ddgst": ${ddgst:-false} 00:28:36.478 }, 00:28:36.478 "method": "bdev_nvme_attach_controller" 00:28:36.478 } 00:28:36.478 EOF 00:28:36.478 )") 00:28:36.478 18:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:28:36.478 18:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:28:36.478 18:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:28:36.478 18:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
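The fio invocation being assembled here receives two inputs over anonymous file descriptors: /dev/fd/61 carries the fio job file from gen_fio_conf, and /dev/fd/62 carries the SPDK JSON config emitted by gen_nvmf_target_json (printed by the printf '%s\n' record just below, with one bdev_nvme_attach_controller entry per subsystem). The two subsystems it attaches to were created with the RPCs traced earlier in this test; condensed into a loop, that setup is:

  # One null bdev + subsystem + TCP listener per fio file (cnode0 and cnode1).
  for i in 0 1; do
    rpc.py bdev_null_create bdev_null$i 64 512 --md-size 16 --dif-type 1
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i --serial-number 53313233-$i --allow-any-host
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i bdev_null$i
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
  done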
00:28:36.478 18:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:28:36.478 18:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:36.478 "params": { 00:28:36.478 "name": "Nvme0", 00:28:36.478 "trtype": "tcp", 00:28:36.478 "traddr": "10.0.0.2", 00:28:36.478 "adrfam": "ipv4", 00:28:36.478 "trsvcid": "4420", 00:28:36.478 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:36.478 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:36.478 "hdgst": false, 00:28:36.478 "ddgst": false 00:28:36.478 }, 00:28:36.478 "method": "bdev_nvme_attach_controller" 00:28:36.478 },{ 00:28:36.478 "params": { 00:28:36.478 "name": "Nvme1", 00:28:36.478 "trtype": "tcp", 00:28:36.478 "traddr": "10.0.0.2", 00:28:36.478 "adrfam": "ipv4", 00:28:36.478 "trsvcid": "4420", 00:28:36.478 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:36.478 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:36.478 "hdgst": false, 00:28:36.478 "ddgst": false 00:28:36.478 }, 00:28:36.478 "method": "bdev_nvme_attach_controller" 00:28:36.478 }' 00:28:36.478 18:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:36.478 18:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:36.478 18:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:36.478 18:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:36.478 18:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:36.478 18:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:36.478 18:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:36.478 18:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:36.478 18:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:36.478 18:23:27 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:36.478 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:28:36.478 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:28:36.478 fio-3.35 00:28:36.478 Starting 2 threads 00:28:36.478 EAL: No free 2048 kB hugepages reported on node 1 00:28:46.459 00:28:46.459 filename0: (groupid=0, jobs=1): err= 0: pid=3580473: Wed Jul 24 18:23:38 2024 00:28:46.459 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10011msec) 00:28:46.459 slat (nsec): min=5795, max=25825, avg=7592.60, stdev=2616.04 00:28:46.459 clat (usec): min=40796, max=42756, avg=41005.28, stdev=184.34 00:28:46.459 lat (usec): min=40802, max=42782, avg=41012.88, stdev=184.74 00:28:46.459 clat percentiles (usec): 00:28:46.459 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:28:46.459 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:28:46.459 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:28:46.459 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:28:46.459 | 99.99th=[42730] 
00:28:46.459 bw ( KiB/s): min= 384, max= 416, per=33.77%, avg=388.80, stdev=11.72, samples=20 00:28:46.459 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:28:46.459 lat (msec) : 50=100.00% 00:28:46.459 cpu : usr=98.06%, sys=1.69%, ctx=10, majf=0, minf=181 00:28:46.459 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:46.459 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:46.459 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:46.459 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:46.459 latency : target=0, window=0, percentile=100.00%, depth=4 00:28:46.459 filename1: (groupid=0, jobs=1): err= 0: pid=3580474: Wed Jul 24 18:23:38 2024 00:28:46.459 read: IOPS=190, BW=760KiB/s (778kB/s)(7632KiB/10041msec) 00:28:46.459 slat (nsec): min=5819, max=25300, avg=6972.16, stdev=2027.26 00:28:46.459 clat (usec): min=447, max=42935, avg=21030.45, stdev=20477.34 00:28:46.459 lat (usec): min=453, max=42961, avg=21037.42, stdev=20476.69 00:28:46.459 clat percentiles (usec): 00:28:46.459 | 1.00th=[ 461], 5.00th=[ 465], 10.00th=[ 469], 20.00th=[ 478], 00:28:46.459 | 30.00th=[ 490], 40.00th=[ 594], 50.00th=[41157], 60.00th=[41157], 00:28:46.459 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:28:46.459 | 99.00th=[41681], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:28:46.459 | 99.99th=[42730] 00:28:46.459 bw ( KiB/s): min= 704, max= 768, per=66.24%, avg=761.60, stdev=19.70, samples=20 00:28:46.459 iops : min= 176, max= 192, avg=190.40, stdev= 4.92, samples=20 00:28:46.459 lat (usec) : 500=33.91%, 750=15.88%, 1000=0.10% 00:28:46.459 lat (msec) : 50=50.10% 00:28:46.459 cpu : usr=97.63%, sys=2.11%, ctx=13, majf=0, minf=61 00:28:46.459 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:46.459 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:46.459 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:46.459 issued rwts: total=1908,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:46.459 latency : target=0, window=0, percentile=100.00%, depth=4 00:28:46.459 00:28:46.459 Run status group 0 (all jobs): 00:28:46.459 READ: bw=1149KiB/s (1176kB/s), 390KiB/s-760KiB/s (399kB/s-778kB/s), io=11.3MiB (11.8MB), run=10011-10041msec 00:28:46.459 18:23:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:28:46.459 18:23:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:28:46.459 18:23:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:28:46.459 18:23:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:46.459 18:23:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:28:46.459 18:23:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:46.459 18:23:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:46.459 18:23:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:46.459 18:23:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:46.459 18:23:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:46.459 18:23:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:28:46.459 18:23:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:46.459 18:23:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:46.459 18:23:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:28:46.459 18:23:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:46.459 18:23:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:28:46.459 18:23:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:46.459 18:23:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:46.459 18:23:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:46.459 18:23:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:46.459 18:23:39 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:46.459 18:23:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:46.459 18:23:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:46.459 18:23:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:46.459 00:28:46.459 real 0m11.270s 00:28:46.459 user 0m26.226s 00:28:46.459 sys 0m0.689s 00:28:46.459 18:23:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:46.459 18:23:39 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:46.459 ************************************ 00:28:46.459 END TEST fio_dif_1_multi_subsystems 00:28:46.459 ************************************ 00:28:46.459 18:23:39 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:28:46.459 18:23:39 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:46.459 18:23:39 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:46.459 18:23:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:46.459 ************************************ 00:28:46.459 START TEST fio_dif_rand_params 00:28:46.459 ************************************ 00:28:46.459 18:23:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:28:46.459 18:23:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:28:46.459 18:23:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:28:46.459 18:23:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:28:46.459 18:23:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:28:46.459 18:23:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:28:46.459 18:23:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:28:46.459 18:23:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:28:46.459 18:23:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:28:46.459 18:23:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:28:46.459 18:23:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:46.459 18:23:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:28:46.459 18:23:39 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:28:46.459 18:23:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:28:46.459 18:23:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:46.459 18:23:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:46.459 bdev_null0 00:28:46.459 18:23:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:46.459 18:23:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:46.459 18:23:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:46.459 18:23:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:46.459 18:23:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:46.459 18:23:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:46.459 18:23:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:46.459 18:23:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:46.459 18:23:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:46.459 18:23:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:46.459 18:23:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:46.459 18:23:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:46.459 [2024-07-24 18:23:39.218871] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:46.459 18:23:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:46.459 18:23:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:28:46.459 18:23:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:28:46.459 18:23:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:28:46.459 18:23:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:28:46.459 18:23:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:46.459 18:23:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:28:46.459 18:23:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:46.459 18:23:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:46.460 18:23:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:28:46.460 18:23:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:46.460 { 00:28:46.460 "params": { 00:28:46.460 "name": "Nvme$subsystem", 00:28:46.460 "trtype": "$TEST_TRANSPORT", 00:28:46.460 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:46.460 "adrfam": "ipv4", 00:28:46.460 "trsvcid": "$NVMF_PORT", 00:28:46.460 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:46.460 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:28:46.460 "hdgst": ${hdgst:-false}, 00:28:46.460 "ddgst": ${ddgst:-false} 00:28:46.460 }, 00:28:46.460 "method": "bdev_nvme_attach_controller" 00:28:46.460 } 00:28:46.460 EOF 00:28:46.460 )") 00:28:46.460 18:23:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:46.460 18:23:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:28:46.460 18:23:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:46.460 18:23:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:28:46.460 18:23:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:46.460 18:23:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:46.460 18:23:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:28:46.460 18:23:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:46.460 18:23:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:46.460 18:23:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:46.460 18:23:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:28:46.460 18:23:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:46.460 18:23:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:46.460 18:23:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:28:46.460 18:23:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:46.460 18:23:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:28:46.460 18:23:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:28:46.460 18:23:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:46.460 "params": { 00:28:46.460 "name": "Nvme0", 00:28:46.460 "trtype": "tcp", 00:28:46.460 "traddr": "10.0.0.2", 00:28:46.460 "adrfam": "ipv4", 00:28:46.460 "trsvcid": "4420", 00:28:46.460 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:46.460 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:46.460 "hdgst": false, 00:28:46.460 "ddgst": false 00:28:46.460 }, 00:28:46.460 "method": "bdev_nvme_attach_controller" 00:28:46.460 }' 00:28:46.460 18:23:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:46.460 18:23:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:46.460 18:23:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:46.460 18:23:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:46.460 18:23:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:46.460 18:23:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:46.460 18:23:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:46.460 18:23:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:46.460 18:23:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:46.460 18:23:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:46.717 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:28:46.718 ... 
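The repeated ldd/grep/awk probes at autotest_common.sh@1345 above are the harness checking whether build/fio/spdk_bdev was linked against an ASan runtime: if it was, that runtime must appear in LD_PRELOAD ahead of the plugin itself, or fio would typically abort with ASan's "runtime does not come first in the initial library list" error. Condensed into a standalone sketch (same commands as the trace; the loop body is paraphrased from the traced helper, not quoted verbatim):

plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
asan_lib=
# Probe the plugin's linked libraries for either ASan runtime flavor,
# mirroring the sanitizers=('libasan' 'libclang_rt.asan') array above.
for sanitizer in libasan libclang_rt.asan; do
    asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
    [[ -n "$asan_lib" ]] && break
done
# An empty asan_lib (as in this non-sanitizer run) degenerates to
# LD_PRELOAD=' <plugin>', exactly the value printed at @1352 above.
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 /dev/fd/61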
00:28:46.718 fio-3.35 00:28:46.718 Starting 3 threads 00:28:46.718 EAL: No free 2048 kB hugepages reported on node 1 00:28:53.274 00:28:53.274 filename0: (groupid=0, jobs=1): err= 0: pid=3582435: Wed Jul 24 18:23:45 2024 00:28:53.274 read: IOPS=321, BW=40.2MiB/s (42.2MB/s)(201MiB/5004msec) 00:28:53.274 slat (nsec): min=6090, max=46010, avg=22164.97, stdev=9816.33 00:28:53.274 clat (usec): min=3333, max=51601, avg=9298.03, stdev=7567.87 00:28:53.274 lat (usec): min=3341, max=51622, avg=9320.20, stdev=7568.35 00:28:53.274 clat percentiles (usec): 00:28:53.274 | 1.00th=[ 3720], 5.00th=[ 3884], 10.00th=[ 4490], 20.00th=[ 6128], 00:28:53.274 | 30.00th=[ 7373], 40.00th=[ 7963], 50.00th=[ 8455], 60.00th=[ 8848], 00:28:53.274 | 70.00th=[ 9241], 80.00th=[ 9634], 90.00th=[10421], 95.00th=[11469], 00:28:53.274 | 99.00th=[49021], 99.50th=[50070], 99.90th=[50594], 99.95th=[51643], 00:28:53.274 | 99.99th=[51643] 00:28:53.274 bw ( KiB/s): min=35328, max=44032, per=34.05%, avg=40675.56, stdev=3375.52, samples=9 00:28:53.274 iops : min= 276, max= 344, avg=317.78, stdev=26.37, samples=9 00:28:53.274 lat (msec) : 4=5.90%, 10=80.19%, 20=10.56%, 50=2.80%, 100=0.56% 00:28:53.274 cpu : usr=96.06%, sys=3.58%, ctx=13, majf=0, minf=123 00:28:53.274 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:53.274 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:53.274 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:53.274 issued rwts: total=1610,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:53.274 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:53.274 filename0: (groupid=0, jobs=1): err= 0: pid=3582436: Wed Jul 24 18:23:45 2024 00:28:53.274 read: IOPS=299, BW=37.4MiB/s (39.2MB/s)(187MiB/5003msec) 00:28:53.274 slat (nsec): min=6055, max=46477, avg=15551.34, stdev=6636.77 00:28:53.274 clat (usec): min=3391, max=53077, avg=10012.69, stdev=7872.35 00:28:53.274 lat (usec): min=3400, max=53102, avg=10028.24, stdev=7872.63 00:28:53.274 clat percentiles (usec): 00:28:53.274 | 1.00th=[ 3818], 5.00th=[ 4948], 10.00th=[ 5997], 20.00th=[ 6783], 00:28:53.274 | 30.00th=[ 7767], 40.00th=[ 8291], 50.00th=[ 8717], 60.00th=[ 9241], 00:28:53.274 | 70.00th=[ 9634], 80.00th=[10290], 90.00th=[11207], 95.00th=[12780], 00:28:53.274 | 99.00th=[49546], 99.50th=[50070], 99.90th=[53216], 99.95th=[53216], 00:28:53.274 | 99.99th=[53216] 00:28:53.274 bw ( KiB/s): min=32768, max=41728, per=32.04%, avg=38265.11, stdev=3318.45, samples=9 00:28:53.274 iops : min= 256, max= 326, avg=298.89, stdev=26.02, samples=9 00:28:53.274 lat (msec) : 4=2.07%, 10=74.73%, 20=19.39%, 50=3.01%, 100=0.80% 00:28:53.274 cpu : usr=97.30%, sys=2.34%, ctx=23, majf=0, minf=127 00:28:53.274 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:53.274 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:53.274 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:53.274 issued rwts: total=1496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:53.274 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:53.274 filename0: (groupid=0, jobs=1): err= 0: pid=3582437: Wed Jul 24 18:23:45 2024 00:28:53.274 read: IOPS=313, BW=39.2MiB/s (41.1MB/s)(196MiB/5011msec) 00:28:53.274 slat (nsec): min=6039, max=46553, avg=15627.09, stdev=6896.29 00:28:53.274 clat (usec): min=3545, max=51747, avg=9556.14, stdev=7333.02 00:28:53.274 lat (usec): min=3553, max=51760, avg=9571.77, stdev=7332.89 00:28:53.274 clat percentiles 
(usec): 00:28:53.274 | 1.00th=[ 3785], 5.00th=[ 4555], 10.00th=[ 5735], 20.00th=[ 6587], 00:28:53.274 | 30.00th=[ 7570], 40.00th=[ 8160], 50.00th=[ 8586], 60.00th=[ 8848], 00:28:53.274 | 70.00th=[ 9372], 80.00th=[10028], 90.00th=[10814], 95.00th=[11863], 00:28:53.274 | 99.00th=[49546], 99.50th=[50594], 99.90th=[51643], 99.95th=[51643], 00:28:53.274 | 99.99th=[51643] 00:28:53.274 bw ( KiB/s): min=27136, max=45568, per=33.59%, avg=40115.20, stdev=5795.20, samples=10 00:28:53.274 iops : min= 212, max= 356, avg=313.40, stdev=45.27, samples=10 00:28:53.274 lat (msec) : 4=2.10%, 10=77.90%, 20=16.75%, 50=2.55%, 100=0.70% 00:28:53.274 cpu : usr=97.03%, sys=2.61%, ctx=11, majf=0, minf=94 00:28:53.274 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:53.274 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:53.274 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:53.274 issued rwts: total=1570,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:53.274 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:53.274 00:28:53.274 Run status group 0 (all jobs): 00:28:53.274 READ: bw=117MiB/s (122MB/s), 37.4MiB/s-40.2MiB/s (39.2MB/s-42.2MB/s), io=585MiB (613MB), run=5003-5011msec 00:28:53.274 18:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:28:53.274 18:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:28:53.274 18:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:53.274 18:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:53.274 18:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:28:53.274 18:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:53.274 18:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:53.274 18:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:53.274 18:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:53.274 18:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:53.274 18:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:53.274 18:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:53.274 18:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:53.274 18:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:28:53.274 18:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:28:53.274 18:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:28:53.274 18:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:28:53.274 18:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:28:53.274 18:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:28:53.274 18:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:28:53.274 18:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:28:53.274 18:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:53.274 18:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:28:53.274 18:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local 
sub_id=0 00:28:53.274 18:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:28:53.274 18:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:53.274 18:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:53.274 bdev_null0 00:28:53.274 18:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:53.274 18:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:53.274 18:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:53.274 18:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:53.274 18:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:53.274 18:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:53.274 18:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:53.274 18:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:53.274 18:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:53.274 18:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:53.274 18:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:53.274 18:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:53.274 [2024-07-24 18:23:45.383680] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:53.274 18:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:53.274 18:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:53.274 18:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:28:53.274 18:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:28:53.274 18:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:28:53.274 18:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:53.274 18:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:53.274 bdev_null1 00:28:53.274 18:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:53.275 18:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:28:53.275 18:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:53.275 18:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:53.275 18:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:53.275 18:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:28:53.275 18:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:53.275 18:23:45 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:28:53.275 18:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:53.275 18:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:53.275 18:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:53.275 18:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:53.275 18:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:53.275 18:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:53.275 18:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:28:53.275 18:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:28:53.275 18:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:28:53.275 18:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:53.275 18:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:53.275 bdev_null2 00:28:53.275 18:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:53.275 18:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:28:53.275 18:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:53.275 18:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:53.275 18:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:53.275 18:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:28:53.275 18:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:53.275 18:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:53.275 18:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:53.275 18:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:28:53.275 18:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:53.275 18:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:53.275 18:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:53.275 18:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:28:53.275 18:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:28:53.275 18:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:28:53.275 18:23:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:28:53.275 18:23:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:28:53.275 18:23:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:53.275 18:23:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:53.275 { 00:28:53.275 "params": { 00:28:53.275 "name": "Nvme$subsystem", 00:28:53.275 "trtype": 
"$TEST_TRANSPORT", 00:28:53.275 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:53.275 "adrfam": "ipv4", 00:28:53.275 "trsvcid": "$NVMF_PORT", 00:28:53.275 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:53.275 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:53.275 "hdgst": ${hdgst:-false}, 00:28:53.275 "ddgst": ${ddgst:-false} 00:28:53.275 }, 00:28:53.275 "method": "bdev_nvme_attach_controller" 00:28:53.275 } 00:28:53.275 EOF 00:28:53.275 )") 00:28:53.275 18:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:28:53.275 18:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:28:53.275 18:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:53.275 18:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:28:53.275 18:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:53.275 18:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:53.275 18:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:53.275 18:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:53.275 18:23:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:53.275 18:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:53.275 18:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:28:53.275 18:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:53.275 18:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:53.275 18:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:28:53.275 18:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:53.275 18:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:28:53.275 18:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:53.275 18:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:28:53.275 18:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:53.275 18:23:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:53.275 18:23:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:53.275 { 00:28:53.275 "params": { 00:28:53.275 "name": "Nvme$subsystem", 00:28:53.275 "trtype": "$TEST_TRANSPORT", 00:28:53.275 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:53.275 "adrfam": "ipv4", 00:28:53.275 "trsvcid": "$NVMF_PORT", 00:28:53.275 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:53.275 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:53.275 "hdgst": ${hdgst:-false}, 00:28:53.275 "ddgst": ${ddgst:-false} 00:28:53.275 }, 00:28:53.275 "method": "bdev_nvme_attach_controller" 00:28:53.275 } 00:28:53.275 EOF 00:28:53.275 )") 00:28:53.275 18:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:28:53.275 18:23:45 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:53.275 18:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:28:53.275 18:23:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:53.275 18:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:28:53.275 18:23:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:53.275 18:23:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:53.275 18:23:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:53.275 { 00:28:53.275 "params": { 00:28:53.275 "name": "Nvme$subsystem", 00:28:53.275 "trtype": "$TEST_TRANSPORT", 00:28:53.275 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:53.275 "adrfam": "ipv4", 00:28:53.275 "trsvcid": "$NVMF_PORT", 00:28:53.275 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:53.275 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:53.275 "hdgst": ${hdgst:-false}, 00:28:53.275 "ddgst": ${ddgst:-false} 00:28:53.275 }, 00:28:53.275 "method": "bdev_nvme_attach_controller" 00:28:53.275 } 00:28:53.275 EOF 00:28:53.275 )") 00:28:53.275 18:23:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:53.275 18:23:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:28:53.275 18:23:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:28:53.275 18:23:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:53.275 "params": { 00:28:53.275 "name": "Nvme0", 00:28:53.275 "trtype": "tcp", 00:28:53.275 "traddr": "10.0.0.2", 00:28:53.275 "adrfam": "ipv4", 00:28:53.275 "trsvcid": "4420", 00:28:53.275 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:53.275 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:53.275 "hdgst": false, 00:28:53.275 "ddgst": false 00:28:53.275 }, 00:28:53.275 "method": "bdev_nvme_attach_controller" 00:28:53.275 },{ 00:28:53.275 "params": { 00:28:53.275 "name": "Nvme1", 00:28:53.275 "trtype": "tcp", 00:28:53.275 "traddr": "10.0.0.2", 00:28:53.275 "adrfam": "ipv4", 00:28:53.275 "trsvcid": "4420", 00:28:53.275 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:53.275 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:53.275 "hdgst": false, 00:28:53.275 "ddgst": false 00:28:53.275 }, 00:28:53.275 "method": "bdev_nvme_attach_controller" 00:28:53.275 },{ 00:28:53.275 "params": { 00:28:53.275 "name": "Nvme2", 00:28:53.275 "trtype": "tcp", 00:28:53.275 "traddr": "10.0.0.2", 00:28:53.275 "adrfam": "ipv4", 00:28:53.275 "trsvcid": "4420", 00:28:53.275 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:53.275 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:53.275 "hdgst": false, 00:28:53.275 "ddgst": false 00:28:53.275 }, 00:28:53.275 "method": "bdev_nvme_attach_controller" 00:28:53.275 }' 00:28:53.275 18:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:53.275 18:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:53.275 18:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:53.275 18:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:53.276 18:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:53.276 18:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:53.276 
18:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:53.276 18:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:53.276 18:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:53.276 18:23:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:53.276 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:53.276 ... 00:28:53.276 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:53.276 ... 00:28:53.276 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:53.276 ... 00:28:53.276 fio-3.35 00:28:53.276 Starting 24 threads 00:28:53.276 EAL: No free 2048 kB hugepages reported on node 1 00:29:05.499 00:29:05.499 filename0: (groupid=0, jobs=1): err= 0: pid=3583488: Wed Jul 24 18:23:56 2024 00:29:05.499 read: IOPS=536, BW=2147KiB/s (2198kB/s)(21.0MiB/10018msec) 00:29:05.499 slat (nsec): min=8111, max=98690, avg=33476.68, stdev=13916.20 00:29:05.499 clat (usec): min=12502, max=31284, avg=29510.39, stdev=1103.69 00:29:05.499 lat (usec): min=12511, max=31361, avg=29543.86, stdev=1104.84 00:29:05.499 clat percentiles (usec): 00:29:05.499 | 1.00th=[28967], 5.00th=[29230], 10.00th=[29230], 20.00th=[29492], 00:29:05.499 | 30.00th=[29492], 40.00th=[29492], 50.00th=[29492], 60.00th=[29754], 00:29:05.499 | 70.00th=[29754], 80.00th=[29754], 90.00th=[29754], 95.00th=[30016], 00:29:05.499 | 99.00th=[30278], 99.50th=[30802], 99.90th=[31065], 99.95th=[31065], 00:29:05.499 | 99.99th=[31327] 00:29:05.499 bw ( KiB/s): min= 2048, max= 2176, per=4.17%, avg=2144.00, stdev=56.87, samples=20 00:29:05.499 iops : min= 512, max= 544, avg=536.00, stdev=14.22, samples=20 00:29:05.499 lat (msec) : 20=0.60%, 50=99.40% 00:29:05.499 cpu : usr=98.82%, sys=0.76%, ctx=9, majf=0, minf=35 00:29:05.499 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:29:05.499 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.499 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.499 issued rwts: total=5376,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:05.499 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:05.499 filename0: (groupid=0, jobs=1): err= 0: pid=3583489: Wed Jul 24 18:23:56 2024 00:29:05.499 read: IOPS=534, BW=2137KiB/s (2189kB/s)(20.9MiB/10001msec) 00:29:05.499 slat (nsec): min=8623, max=92510, avg=33797.17, stdev=14179.66 00:29:05.499 clat (usec): min=28231, max=47071, avg=29631.90, stdev=979.44 00:29:05.499 lat (usec): min=28294, max=47087, avg=29665.70, stdev=978.53 00:29:05.499 clat percentiles (usec): 00:29:05.499 | 1.00th=[29230], 5.00th=[29230], 10.00th=[29230], 20.00th=[29492], 00:29:05.499 | 30.00th=[29492], 40.00th=[29492], 50.00th=[29492], 60.00th=[29754], 00:29:05.499 | 70.00th=[29754], 80.00th=[29754], 90.00th=[29754], 95.00th=[30016], 00:29:05.499 | 99.00th=[30278], 99.50th=[31065], 99.90th=[46924], 99.95th=[46924], 00:29:05.499 | 99.99th=[46924] 00:29:05.499 bw ( KiB/s): min= 2048, max= 2176, per=4.16%, avg=2135.58, stdev=61.13, samples=19 00:29:05.499 iops : min= 512, max= 544, avg=533.89, stdev=15.28, samples=19 00:29:05.499 lat 
(msec) : 50=100.00% 00:29:05.499 cpu : usr=98.80%, sys=0.80%, ctx=14, majf=0, minf=31 00:29:05.499 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:29:05.499 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.499 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.499 issued rwts: total=5344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:05.499 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:05.499 filename0: (groupid=0, jobs=1): err= 0: pid=3583490: Wed Jul 24 18:23:56 2024 00:29:05.499 read: IOPS=535, BW=2142KiB/s (2194kB/s)(20.9MiB/10005msec) 00:29:05.499 slat (usec): min=7, max=114, avg=37.70, stdev=17.41 00:29:05.499 clat (usec): min=11947, max=63175, avg=29517.99, stdev=1744.34 00:29:05.499 lat (usec): min=11962, max=63204, avg=29555.69, stdev=1744.66 00:29:05.499 clat percentiles (usec): 00:29:05.499 | 1.00th=[28705], 5.00th=[28967], 10.00th=[29230], 20.00th=[29230], 00:29:05.499 | 30.00th=[29492], 40.00th=[29492], 50.00th=[29492], 60.00th=[29492], 00:29:05.499 | 70.00th=[29754], 80.00th=[29754], 90.00th=[29754], 95.00th=[30016], 00:29:05.499 | 99.00th=[30540], 99.50th=[35390], 99.90th=[49546], 99.95th=[50070], 00:29:05.499 | 99.99th=[63177] 00:29:05.499 bw ( KiB/s): min= 2032, max= 2176, per=4.16%, avg=2134.74, stdev=62.50, samples=19 00:29:05.499 iops : min= 508, max= 544, avg=533.68, stdev=15.62, samples=19 00:29:05.499 lat (msec) : 20=0.60%, 50=99.37%, 100=0.04% 00:29:05.499 cpu : usr=98.77%, sys=0.82%, ctx=16, majf=0, minf=50 00:29:05.499 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:29:05.499 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.499 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.499 issued rwts: total=5358,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:05.499 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:05.499 filename0: (groupid=0, jobs=1): err= 0: pid=3583491: Wed Jul 24 18:23:56 2024 00:29:05.499 read: IOPS=533, BW=2135KiB/s (2186kB/s)(20.9MiB/10012msec) 00:29:05.499 slat (nsec): min=4700, max=42804, avg=18591.94, stdev=7004.77 00:29:05.499 clat (usec): min=15719, max=50283, avg=29828.36, stdev=1293.16 00:29:05.499 lat (usec): min=15728, max=50297, avg=29846.95, stdev=1292.30 00:29:05.499 clat percentiles (usec): 00:29:05.499 | 1.00th=[29492], 5.00th=[29492], 10.00th=[29492], 20.00th=[29754], 00:29:05.499 | 30.00th=[29754], 40.00th=[29754], 50.00th=[29754], 60.00th=[29754], 00:29:05.500 | 70.00th=[29754], 80.00th=[29754], 90.00th=[30016], 95.00th=[30016], 00:29:05.500 | 99.00th=[30540], 99.50th=[31327], 99.90th=[50070], 99.95th=[50070], 00:29:05.500 | 99.99th=[50070] 00:29:05.500 bw ( KiB/s): min= 2032, max= 2176, per=4.16%, avg=2134.74, stdev=53.95, samples=19 00:29:05.500 iops : min= 508, max= 544, avg=533.68, stdev=13.49, samples=19 00:29:05.500 lat (msec) : 20=0.07%, 50=99.63%, 100=0.30% 00:29:05.500 cpu : usr=98.90%, sys=0.70%, ctx=11, majf=0, minf=49 00:29:05.500 IO depths : 1=0.2%, 2=6.5%, 4=25.0%, 8=56.0%, 16=12.3%, 32=0.0%, >=64=0.0% 00:29:05.500 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.500 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.500 issued rwts: total=5344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:05.500 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:05.500 filename0: (groupid=0, jobs=1): err= 0: pid=3583492: Wed Jul 24 18:23:56 2024 
00:29:05.500 read: IOPS=534, BW=2137KiB/s (2188kB/s)(20.9MiB/10005msec) 00:29:05.500 slat (nsec): min=5777, max=44112, avg=21631.38, stdev=6162.20 00:29:05.500 clat (usec): min=15586, max=69150, avg=29768.91, stdev=1585.66 00:29:05.500 lat (usec): min=15601, max=69164, avg=29790.54, stdev=1585.28 00:29:05.500 clat percentiles (usec): 00:29:05.500 | 1.00th=[29492], 5.00th=[29492], 10.00th=[29492], 20.00th=[29492], 00:29:05.500 | 30.00th=[29492], 40.00th=[29754], 50.00th=[29754], 60.00th=[29754], 00:29:05.500 | 70.00th=[29754], 80.00th=[29754], 90.00th=[30016], 95.00th=[30016], 00:29:05.500 | 99.00th=[30278], 99.50th=[31327], 99.90th=[50070], 99.95th=[50070], 00:29:05.500 | 99.99th=[68682] 00:29:05.500 bw ( KiB/s): min= 2048, max= 2176, per=4.16%, avg=2135.58, stdev=61.13, samples=19 00:29:05.500 iops : min= 512, max= 544, avg=533.89, stdev=15.28, samples=19 00:29:05.500 lat (msec) : 20=0.22%, 50=99.48%, 100=0.30% 00:29:05.500 cpu : usr=98.66%, sys=0.93%, ctx=14, majf=0, minf=61 00:29:05.500 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:29:05.500 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.500 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.500 issued rwts: total=5344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:05.500 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:05.500 filename0: (groupid=0, jobs=1): err= 0: pid=3583493: Wed Jul 24 18:23:56 2024 00:29:05.500 read: IOPS=534, BW=2137KiB/s (2188kB/s)(20.9MiB/10005msec) 00:29:05.500 slat (nsec): min=5227, max=45674, avg=22238.89, stdev=6139.47 00:29:05.500 clat (usec): min=29203, max=50147, avg=29759.19, stdev=1128.80 00:29:05.500 lat (usec): min=29220, max=50163, avg=29781.43, stdev=1128.07 00:29:05.500 clat percentiles (usec): 00:29:05.500 | 1.00th=[29492], 5.00th=[29492], 10.00th=[29492], 20.00th=[29492], 00:29:05.500 | 30.00th=[29492], 40.00th=[29754], 50.00th=[29754], 60.00th=[29754], 00:29:05.500 | 70.00th=[29754], 80.00th=[29754], 90.00th=[30016], 95.00th=[30016], 00:29:05.500 | 99.00th=[30278], 99.50th=[31327], 99.90th=[50070], 99.95th=[50070], 00:29:05.500 | 99.99th=[50070] 00:29:05.500 bw ( KiB/s): min= 2048, max= 2176, per=4.16%, avg=2135.58, stdev=61.13, samples=19 00:29:05.500 iops : min= 512, max= 544, avg=533.89, stdev=15.28, samples=19 00:29:05.500 lat (msec) : 50=99.70%, 100=0.30% 00:29:05.500 cpu : usr=98.81%, sys=0.79%, ctx=12, majf=0, minf=59 00:29:05.500 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:29:05.500 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.500 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.500 issued rwts: total=5344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:05.500 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:05.500 filename0: (groupid=0, jobs=1): err= 0: pid=3583494: Wed Jul 24 18:23:56 2024 00:29:05.500 read: IOPS=541, BW=2164KiB/s (2216kB/s)(21.2MiB/10024msec) 00:29:05.500 slat (usec): min=3, max=100, avg=32.91, stdev=14.76 00:29:05.500 clat (usec): min=2572, max=35318, avg=29286.30, stdev=2653.25 00:29:05.500 lat (usec): min=2581, max=35357, avg=29319.21, stdev=2655.64 00:29:05.500 clat percentiles (usec): 00:29:05.500 | 1.00th=[12518], 5.00th=[29230], 10.00th=[29230], 20.00th=[29492], 00:29:05.500 | 30.00th=[29492], 40.00th=[29492], 50.00th=[29492], 60.00th=[29754], 00:29:05.500 | 70.00th=[29754], 80.00th=[29754], 90.00th=[29754], 95.00th=[30016], 00:29:05.500 | 
99.00th=[30278], 99.50th=[30540], 99.90th=[31327], 99.95th=[31327], 00:29:05.500 | 99.99th=[35390] 00:29:05.500 bw ( KiB/s): min= 2048, max= 2560, per=4.21%, avg=2163.20, stdev=109.09, samples=20 00:29:05.500 iops : min= 512, max= 640, avg=540.80, stdev=27.27, samples=20 00:29:05.500 lat (msec) : 4=0.42%, 10=0.46%, 20=0.59%, 50=98.53% 00:29:05.500 cpu : usr=98.80%, sys=0.80%, ctx=15, majf=0, minf=49 00:29:05.500 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:29:05.500 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.500 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.500 issued rwts: total=5424,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:05.500 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:05.500 filename0: (groupid=0, jobs=1): err= 0: pid=3583495: Wed Jul 24 18:23:56 2024 00:29:05.500 read: IOPS=536, BW=2145KiB/s (2197kB/s)(21.0MiB/10023msec) 00:29:05.500 slat (usec): min=7, max=100, avg=33.42, stdev=15.25 00:29:05.500 clat (usec): min=12639, max=36537, avg=29542.93, stdev=1088.22 00:29:05.500 lat (usec): min=12660, max=36559, avg=29576.35, stdev=1088.36 00:29:05.500 clat percentiles (usec): 00:29:05.500 | 1.00th=[28967], 5.00th=[29230], 10.00th=[29230], 20.00th=[29492], 00:29:05.500 | 30.00th=[29492], 40.00th=[29492], 50.00th=[29492], 60.00th=[29754], 00:29:05.500 | 70.00th=[29754], 80.00th=[29754], 90.00th=[29754], 95.00th=[30016], 00:29:05.500 | 99.00th=[30540], 99.50th=[31065], 99.90th=[35914], 99.95th=[35914], 00:29:05.500 | 99.99th=[36439] 00:29:05.500 bw ( KiB/s): min= 2048, max= 2180, per=4.17%, avg=2144.20, stdev=56.99, samples=20 00:29:05.500 iops : min= 512, max= 545, avg=536.05, stdev=14.25, samples=20 00:29:05.500 lat (msec) : 20=0.60%, 50=99.40% 00:29:05.500 cpu : usr=98.19%, sys=1.41%, ctx=35, majf=0, minf=65 00:29:05.500 IO depths : 1=6.0%, 2=12.3%, 4=24.9%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:29:05.500 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.500 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.500 issued rwts: total=5376,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:05.500 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:05.500 filename1: (groupid=0, jobs=1): err= 0: pid=3583496: Wed Jul 24 18:23:56 2024 00:29:05.500 read: IOPS=535, BW=2143KiB/s (2194kB/s)(20.9MiB/10005msec) 00:29:05.500 slat (nsec): min=6404, max=97261, avg=30568.77, stdev=14888.51 00:29:05.500 clat (usec): min=5626, max=49950, avg=29607.85, stdev=1685.46 00:29:05.500 lat (usec): min=5634, max=49968, avg=29638.42, stdev=1685.47 00:29:05.500 clat percentiles (usec): 00:29:05.500 | 1.00th=[29230], 5.00th=[29230], 10.00th=[29492], 20.00th=[29492], 00:29:05.500 | 30.00th=[29492], 40.00th=[29492], 50.00th=[29492], 60.00th=[29754], 00:29:05.500 | 70.00th=[29754], 80.00th=[29754], 90.00th=[29754], 95.00th=[30016], 00:29:05.500 | 99.00th=[30540], 99.50th=[31327], 99.90th=[50070], 99.95th=[50070], 00:29:05.500 | 99.99th=[50070] 00:29:05.500 bw ( KiB/s): min= 2048, max= 2304, per=4.17%, avg=2143.20, stdev=64.52, samples=20 00:29:05.500 iops : min= 512, max= 576, avg=535.80, stdev=16.13, samples=20 00:29:05.500 lat (msec) : 10=0.04%, 20=0.56%, 50=99.40% 00:29:05.500 cpu : usr=98.81%, sys=0.80%, ctx=18, majf=0, minf=70 00:29:05.500 IO depths : 1=1.1%, 2=7.4%, 4=25.0%, 8=55.1%, 16=11.4%, 32=0.0%, >=64=0.0% 00:29:05.500 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.500 complete : 
0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.500 issued rwts: total=5360,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:05.500 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:05.500 filename1: (groupid=0, jobs=1): err= 0: pid=3583497: Wed Jul 24 18:23:56 2024 00:29:05.500 read: IOPS=535, BW=2143KiB/s (2194kB/s)(20.9MiB/10007msec) 00:29:05.500 slat (nsec): min=5784, max=39923, avg=19535.86, stdev=5025.63 00:29:05.500 clat (usec): min=13650, max=46006, avg=29696.18, stdev=1322.94 00:29:05.500 lat (usec): min=13665, max=46023, avg=29715.72, stdev=1322.66 00:29:05.500 clat percentiles (usec): 00:29:05.500 | 1.00th=[29230], 5.00th=[29492], 10.00th=[29492], 20.00th=[29492], 00:29:05.500 | 30.00th=[29492], 40.00th=[29754], 50.00th=[29754], 60.00th=[29754], 00:29:05.500 | 70.00th=[29754], 80.00th=[29754], 90.00th=[30016], 95.00th=[30016], 00:29:05.500 | 99.00th=[30540], 99.50th=[31327], 99.90th=[45876], 99.95th=[45876], 00:29:05.500 | 99.99th=[45876] 00:29:05.500 bw ( KiB/s): min= 2048, max= 2176, per=4.16%, avg=2135.58, stdev=61.13, samples=19 00:29:05.500 iops : min= 512, max= 544, avg=533.89, stdev=15.28, samples=19 00:29:05.500 lat (msec) : 20=0.30%, 50=99.70% 00:29:05.500 cpu : usr=98.72%, sys=0.89%, ctx=13, majf=0, minf=55 00:29:05.500 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:29:05.500 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.500 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.500 issued rwts: total=5360,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:05.500 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:05.500 filename1: (groupid=0, jobs=1): err= 0: pid=3583498: Wed Jul 24 18:23:56 2024 00:29:05.500 read: IOPS=535, BW=2141KiB/s (2192kB/s)(20.9MiB/10014msec) 00:29:05.500 slat (nsec): min=7385, max=38106, avg=16790.06, stdev=5574.06 00:29:05.500 clat (usec): min=22172, max=39782, avg=29753.34, stdev=725.56 00:29:05.500 lat (usec): min=22201, max=39820, avg=29770.13, stdev=725.24 00:29:05.500 clat percentiles (usec): 00:29:05.500 | 1.00th=[29230], 5.00th=[29492], 10.00th=[29492], 20.00th=[29492], 00:29:05.500 | 30.00th=[29754], 40.00th=[29754], 50.00th=[29754], 60.00th=[29754], 00:29:05.500 | 70.00th=[29754], 80.00th=[29754], 90.00th=[30016], 95.00th=[30016], 00:29:05.501 | 99.00th=[30540], 99.50th=[31327], 99.90th=[39584], 99.95th=[39584], 00:29:05.501 | 99.99th=[39584] 00:29:05.501 bw ( KiB/s): min= 2048, max= 2176, per=4.16%, avg=2137.60, stdev=60.18, samples=20 00:29:05.501 iops : min= 512, max= 544, avg=534.40, stdev=15.05, samples=20 00:29:05.501 lat (msec) : 50=100.00% 00:29:05.501 cpu : usr=98.18%, sys=1.43%, ctx=17, majf=0, minf=52 00:29:05.501 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:29:05.501 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.501 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.501 issued rwts: total=5360,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:05.501 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:05.501 filename1: (groupid=0, jobs=1): err= 0: pid=3583499: Wed Jul 24 18:23:56 2024 00:29:05.501 read: IOPS=540, BW=2163KiB/s (2215kB/s)(21.1MiB/10006msec) 00:29:05.501 slat (nsec): min=6436, max=98887, avg=29320.72, stdev=14467.34 00:29:05.501 clat (usec): min=12025, max=50603, avg=29313.33, stdev=2618.90 00:29:05.501 lat (usec): min=12041, max=50620, avg=29342.66, stdev=2620.80 00:29:05.501 
clat percentiles (usec): 00:29:05.501 | 1.00th=[18482], 5.00th=[26084], 10.00th=[29230], 20.00th=[29492], 00:29:05.501 | 30.00th=[29492], 40.00th=[29492], 50.00th=[29492], 60.00th=[29754], 00:29:05.501 | 70.00th=[29754], 80.00th=[29754], 90.00th=[29754], 95.00th=[30016], 00:29:05.501 | 99.00th=[36963], 99.50th=[42730], 99.90th=[50594], 99.95th=[50594], 00:29:05.501 | 99.99th=[50594] 00:29:05.501 bw ( KiB/s): min= 1923, max= 2368, per=4.20%, avg=2156.79, stdev=104.49, samples=19 00:29:05.501 iops : min= 480, max= 592, avg=539.16, stdev=26.22, samples=19 00:29:05.501 lat (msec) : 20=2.96%, 50=96.75%, 100=0.30% 00:29:05.501 cpu : usr=98.82%, sys=0.77%, ctx=14, majf=0, minf=36 00:29:05.501 IO depths : 1=5.4%, 2=11.0%, 4=22.6%, 8=53.6%, 16=7.4%, 32=0.0%, >=64=0.0% 00:29:05.501 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.501 complete : 0=0.0%, 4=93.5%, 8=1.0%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.501 issued rwts: total=5410,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:05.501 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:05.501 filename1: (groupid=0, jobs=1): err= 0: pid=3583500: Wed Jul 24 18:23:56 2024 00:29:05.501 read: IOPS=535, BW=2140KiB/s (2192kB/s)(20.9MiB/10017msec) 00:29:05.501 slat (nsec): min=7562, max=46637, avg=21950.27, stdev=6171.31 00:29:05.501 clat (usec): min=15608, max=48563, avg=29685.90, stdev=1173.45 00:29:05.501 lat (usec): min=15623, max=48579, avg=29707.86, stdev=1173.52 00:29:05.501 clat percentiles (usec): 00:29:05.501 | 1.00th=[29492], 5.00th=[29492], 10.00th=[29492], 20.00th=[29492], 00:29:05.501 | 30.00th=[29492], 40.00th=[29754], 50.00th=[29754], 60.00th=[29754], 00:29:05.501 | 70.00th=[29754], 80.00th=[29754], 90.00th=[30016], 95.00th=[30016], 00:29:05.501 | 99.00th=[30278], 99.50th=[31327], 99.90th=[43254], 99.95th=[48497], 00:29:05.501 | 99.99th=[48497] 00:29:05.501 bw ( KiB/s): min= 2048, max= 2192, per=4.16%, avg=2137.15, stdev=60.14, samples=20 00:29:05.501 iops : min= 512, max= 548, avg=534.25, stdev=15.02, samples=20 00:29:05.501 lat (msec) : 20=0.37%, 50=99.63% 00:29:05.501 cpu : usr=98.87%, sys=0.74%, ctx=19, majf=0, minf=52 00:29:05.501 IO depths : 1=6.0%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.5%, 32=0.0%, >=64=0.0% 00:29:05.501 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.501 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.501 issued rwts: total=5360,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:05.501 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:05.501 filename1: (groupid=0, jobs=1): err= 0: pid=3583501: Wed Jul 24 18:23:56 2024 00:29:05.501 read: IOPS=534, BW=2138KiB/s (2189kB/s)(20.9MiB/10018msec) 00:29:05.501 slat (nsec): min=4077, max=97868, avg=32841.59, stdev=13737.32 00:29:05.501 clat (usec): min=17341, max=45992, avg=29614.13, stdev=1063.63 00:29:05.501 lat (usec): min=17362, max=46005, avg=29646.97, stdev=1063.09 00:29:05.501 clat percentiles (usec): 00:29:05.501 | 1.00th=[29230], 5.00th=[29230], 10.00th=[29230], 20.00th=[29492], 00:29:05.501 | 30.00th=[29492], 40.00th=[29492], 50.00th=[29492], 60.00th=[29754], 00:29:05.501 | 70.00th=[29754], 80.00th=[29754], 90.00th=[29754], 95.00th=[30016], 00:29:05.501 | 99.00th=[30278], 99.50th=[31065], 99.90th=[45876], 99.95th=[45876], 00:29:05.501 | 99.99th=[45876] 00:29:05.501 bw ( KiB/s): min= 2048, max= 2176, per=4.16%, avg=2137.60, stdev=60.18, samples=20 00:29:05.501 iops : min= 512, max= 544, avg=534.40, stdev=15.05, samples=20 00:29:05.501 lat (msec) : 
20=0.19%, 50=99.81% 00:29:05.501 cpu : usr=98.94%, sys=0.67%, ctx=9, majf=0, minf=48 00:29:05.501 IO depths : 1=6.3%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:29:05.501 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.501 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.501 issued rwts: total=5354,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:05.501 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:05.501 filename1: (groupid=0, jobs=1): err= 0: pid=3583502: Wed Jul 24 18:23:56 2024 00:29:05.501 read: IOPS=535, BW=2143KiB/s (2194kB/s)(20.9MiB/10007msec) 00:29:05.501 slat (nsec): min=6493, max=99940, avg=30738.34, stdev=13333.24 00:29:05.501 clat (usec): min=11976, max=51794, avg=29571.48, stdev=1691.15 00:29:05.501 lat (usec): min=11992, max=51811, avg=29602.22, stdev=1691.38 00:29:05.501 clat percentiles (usec): 00:29:05.501 | 1.00th=[29230], 5.00th=[29230], 10.00th=[29230], 20.00th=[29492], 00:29:05.501 | 30.00th=[29492], 40.00th=[29492], 50.00th=[29492], 60.00th=[29754], 00:29:05.501 | 70.00th=[29754], 80.00th=[29754], 90.00th=[29754], 95.00th=[30016], 00:29:05.501 | 99.00th=[30278], 99.50th=[31065], 99.90th=[51643], 99.95th=[51643], 00:29:05.501 | 99.99th=[51643] 00:29:05.501 bw ( KiB/s): min= 1920, max= 2176, per=4.16%, avg=2135.58, stdev=74.55, samples=19 00:29:05.501 iops : min= 480, max= 544, avg=533.89, stdev=18.64, samples=19 00:29:05.501 lat (msec) : 20=0.60%, 50=99.10%, 100=0.30% 00:29:05.501 cpu : usr=98.47%, sys=1.14%, ctx=15, majf=0, minf=48 00:29:05.501 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:29:05.501 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.501 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.501 issued rwts: total=5360,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:05.501 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:05.501 filename1: (groupid=0, jobs=1): err= 0: pid=3583503: Wed Jul 24 18:23:56 2024 00:29:05.501 read: IOPS=534, BW=2136KiB/s (2188kB/s)(20.9MiB/10006msec) 00:29:05.501 slat (nsec): min=3820, max=42982, avg=22118.97, stdev=6319.02 00:29:05.501 clat (usec): min=15876, max=69395, avg=29768.32, stdev=1372.01 00:29:05.501 lat (usec): min=15885, max=69407, avg=29790.44, stdev=1371.07 00:29:05.501 clat percentiles (usec): 00:29:05.501 | 1.00th=[29492], 5.00th=[29492], 10.00th=[29492], 20.00th=[29492], 00:29:05.501 | 30.00th=[29492], 40.00th=[29754], 50.00th=[29754], 60.00th=[29754], 00:29:05.501 | 70.00th=[29754], 80.00th=[29754], 90.00th=[30016], 95.00th=[30016], 00:29:05.501 | 99.00th=[30278], 99.50th=[31327], 99.90th=[51119], 99.95th=[51119], 00:29:05.501 | 99.99th=[69731] 00:29:05.501 bw ( KiB/s): min= 1923, max= 2176, per=4.15%, avg=2131.35, stdev=74.71, samples=20 00:29:05.501 iops : min= 480, max= 544, avg=532.80, stdev=18.79, samples=20 00:29:05.501 lat (msec) : 20=0.04%, 50=99.66%, 100=0.30% 00:29:05.501 cpu : usr=98.70%, sys=0.91%, ctx=12, majf=0, minf=58 00:29:05.501 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:29:05.501 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.501 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.501 issued rwts: total=5344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:05.501 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:05.501 filename2: (groupid=0, jobs=1): err= 0: pid=3583504: Wed Jul 24 18:23:56 2024 
00:29:05.501 read: IOPS=535, BW=2143KiB/s (2194kB/s)(20.9MiB/10007msec) 00:29:05.501 slat (nsec): min=5591, max=39008, avg=18862.94, stdev=5494.83 00:29:05.501 clat (usec): min=13665, max=45770, avg=29694.75, stdev=1314.74 00:29:05.501 lat (usec): min=13673, max=45785, avg=29713.61, stdev=1314.54 00:29:05.501 clat percentiles (usec): 00:29:05.501 | 1.00th=[29230], 5.00th=[29492], 10.00th=[29492], 20.00th=[29492], 00:29:05.501 | 30.00th=[29492], 40.00th=[29754], 50.00th=[29754], 60.00th=[29754], 00:29:05.501 | 70.00th=[29754], 80.00th=[29754], 90.00th=[30016], 95.00th=[30016], 00:29:05.501 | 99.00th=[30540], 99.50th=[31327], 99.90th=[45876], 99.95th=[45876], 00:29:05.501 | 99.99th=[45876] 00:29:05.501 bw ( KiB/s): min= 2048, max= 2176, per=4.16%, avg=2135.58, stdev=61.13, samples=19 00:29:05.501 iops : min= 512, max= 544, avg=533.89, stdev=15.28, samples=19 00:29:05.501 lat (msec) : 20=0.30%, 50=99.70% 00:29:05.501 cpu : usr=98.72%, sys=0.89%, ctx=14, majf=0, minf=35 00:29:05.501 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:29:05.501 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.501 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.501 issued rwts: total=5360,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:05.502 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:05.502 filename2: (groupid=0, jobs=1): err= 0: pid=3583505: Wed Jul 24 18:23:56 2024 00:29:05.502 read: IOPS=535, BW=2141KiB/s (2192kB/s)(20.9MiB/10014msec) 00:29:05.502 slat (nsec): min=7589, max=41393, avg=18040.43, stdev=5434.36 00:29:05.502 clat (usec): min=20715, max=45951, avg=29740.77, stdev=777.07 00:29:05.502 lat (usec): min=20725, max=45989, avg=29758.81, stdev=776.75 00:29:05.502 clat percentiles (usec): 00:29:05.502 | 1.00th=[29230], 5.00th=[29492], 10.00th=[29492], 20.00th=[29492], 00:29:05.502 | 30.00th=[29754], 40.00th=[29754], 50.00th=[29754], 60.00th=[29754], 00:29:05.502 | 70.00th=[29754], 80.00th=[29754], 90.00th=[30016], 95.00th=[30016], 00:29:05.502 | 99.00th=[30540], 99.50th=[31327], 99.90th=[39584], 99.95th=[39584], 00:29:05.502 | 99.99th=[45876] 00:29:05.502 bw ( KiB/s): min= 2048, max= 2176, per=4.16%, avg=2137.60, stdev=60.18, samples=20 00:29:05.502 iops : min= 512, max= 544, avg=534.40, stdev=15.05, samples=20 00:29:05.502 lat (msec) : 50=100.00% 00:29:05.502 cpu : usr=98.75%, sys=0.85%, ctx=13, majf=0, minf=35 00:29:05.502 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:29:05.502 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.502 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.502 issued rwts: total=5360,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:05.502 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:05.502 filename2: (groupid=0, jobs=1): err= 0: pid=3583506: Wed Jul 24 18:23:56 2024 00:29:05.502 read: IOPS=534, BW=2137KiB/s (2189kB/s)(20.9MiB/10001msec) 00:29:05.502 slat (nsec): min=5018, max=94644, avg=33091.54, stdev=13624.52 00:29:05.502 clat (usec): min=23479, max=52151, avg=29640.24, stdev=995.76 00:29:05.502 lat (usec): min=23488, max=52166, avg=29673.33, stdev=994.89 00:29:05.502 clat percentiles (usec): 00:29:05.502 | 1.00th=[29230], 5.00th=[29230], 10.00th=[29230], 20.00th=[29492], 00:29:05.502 | 30.00th=[29492], 40.00th=[29492], 50.00th=[29492], 60.00th=[29754], 00:29:05.502 | 70.00th=[29754], 80.00th=[29754], 90.00th=[29754], 95.00th=[30016], 00:29:05.502 | 99.00th=[30278], 
99.50th=[31065], 99.90th=[46400], 99.95th=[46400], 00:29:05.502 | 99.99th=[52167] 00:29:05.502 bw ( KiB/s): min= 2048, max= 2176, per=4.16%, avg=2135.58, stdev=61.13, samples=19 00:29:05.502 iops : min= 512, max= 544, avg=533.89, stdev=15.28, samples=19 00:29:05.502 lat (msec) : 50=99.96%, 100=0.04% 00:29:05.502 cpu : usr=98.80%, sys=0.81%, ctx=13, majf=0, minf=52 00:29:05.502 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:29:05.502 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.502 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.502 issued rwts: total=5344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:05.502 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:05.502 filename2: (groupid=0, jobs=1): err= 0: pid=3583507: Wed Jul 24 18:23:56 2024 00:29:05.502 read: IOPS=536, BW=2146KiB/s (2198kB/s)(21.0MiB/10020msec) 00:29:05.502 slat (nsec): min=7805, max=94852, avg=33055.04, stdev=14366.64 00:29:05.502 clat (usec): min=12523, max=35958, avg=29532.01, stdev=1131.18 00:29:05.502 lat (usec): min=12538, max=35983, avg=29565.06, stdev=1131.69 00:29:05.502 clat percentiles (usec): 00:29:05.502 | 1.00th=[28967], 5.00th=[29230], 10.00th=[29230], 20.00th=[29492], 00:29:05.502 | 30.00th=[29492], 40.00th=[29492], 50.00th=[29492], 60.00th=[29754], 00:29:05.502 | 70.00th=[29754], 80.00th=[29754], 90.00th=[29754], 95.00th=[30016], 00:29:05.502 | 99.00th=[30278], 99.50th=[31065], 99.90th=[32637], 99.95th=[32900], 00:29:05.502 | 99.99th=[35914] 00:29:05.502 bw ( KiB/s): min= 2048, max= 2176, per=4.17%, avg=2144.00, stdev=56.87, samples=20 00:29:05.502 iops : min= 512, max= 544, avg=536.00, stdev=14.22, samples=20 00:29:05.502 lat (msec) : 20=0.60%, 50=99.40% 00:29:05.502 cpu : usr=98.85%, sys=0.77%, ctx=12, majf=0, minf=40 00:29:05.502 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:29:05.502 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.502 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.502 issued rwts: total=5376,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:05.502 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:05.502 filename2: (groupid=0, jobs=1): err= 0: pid=3583508: Wed Jul 24 18:23:56 2024 00:29:05.502 read: IOPS=535, BW=2141KiB/s (2192kB/s)(20.9MiB/10016msec) 00:29:05.502 slat (nsec): min=7338, max=41677, avg=19350.56, stdev=5037.64 00:29:05.502 clat (usec): min=13655, max=60550, avg=29715.71, stdev=1613.22 00:29:05.502 lat (usec): min=13671, max=60570, avg=29735.06, stdev=1612.99 00:29:05.502 clat percentiles (usec): 00:29:05.502 | 1.00th=[29230], 5.00th=[29492], 10.00th=[29492], 20.00th=[29492], 00:29:05.502 | 30.00th=[29492], 40.00th=[29754], 50.00th=[29754], 60.00th=[29754], 00:29:05.502 | 70.00th=[29754], 80.00th=[29754], 90.00th=[30016], 95.00th=[30016], 00:29:05.502 | 99.00th=[30540], 99.50th=[31327], 99.90th=[51643], 99.95th=[52167], 00:29:05.502 | 99.99th=[60556] 00:29:05.502 bw ( KiB/s): min= 1920, max= 2176, per=4.16%, avg=2137.60, stdev=73.12, samples=20 00:29:05.502 iops : min= 480, max= 544, avg=534.40, stdev=18.28, samples=20 00:29:05.502 lat (msec) : 20=0.30%, 50=99.40%, 100=0.30% 00:29:05.502 cpu : usr=98.92%, sys=0.70%, ctx=9, majf=0, minf=35 00:29:05.502 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:29:05.502 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.502 complete : 0=0.0%, 4=94.1%, 8=0.0%, 
16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.502 issued rwts: total=5360,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:05.502 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:05.502 filename2: (groupid=0, jobs=1): err= 0: pid=3583509: Wed Jul 24 18:23:56 2024 00:29:05.502 read: IOPS=540, BW=2163KiB/s (2215kB/s)(21.2MiB/10029msec) 00:29:05.502 slat (nsec): min=7556, max=65845, avg=20219.58, stdev=10931.10 00:29:05.502 clat (usec): min=2578, max=36097, avg=29429.79, stdev=2718.77 00:29:05.502 lat (usec): min=2588, max=36123, avg=29450.01, stdev=2719.44 00:29:05.502 clat percentiles (usec): 00:29:05.502 | 1.00th=[12649], 5.00th=[29492], 10.00th=[29492], 20.00th=[29492], 00:29:05.502 | 30.00th=[29754], 40.00th=[29754], 50.00th=[29754], 60.00th=[29754], 00:29:05.502 | 70.00th=[29754], 80.00th=[29754], 90.00th=[30016], 95.00th=[30016], 00:29:05.502 | 99.00th=[30540], 99.50th=[31327], 99.90th=[35914], 99.95th=[35914], 00:29:05.502 | 99.99th=[35914] 00:29:05.502 bw ( KiB/s): min= 2048, max= 2565, per=4.21%, avg=2163.45, stdev=106.56, samples=20 00:29:05.502 iops : min= 512, max= 641, avg=540.85, stdev=26.59, samples=20 00:29:05.502 lat (msec) : 4=0.59%, 10=0.29%, 20=0.59%, 50=98.53% 00:29:05.502 cpu : usr=98.81%, sys=0.74%, ctx=74, majf=0, minf=85 00:29:05.502 IO depths : 1=1.2%, 2=7.3%, 4=24.7%, 8=55.5%, 16=11.3%, 32=0.0%, >=64=0.0% 00:29:05.502 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.502 complete : 0=0.0%, 4=94.3%, 8=0.1%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.502 issued rwts: total=5424,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:05.502 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:05.502 filename2: (groupid=0, jobs=1): err= 0: pid=3583510: Wed Jul 24 18:23:56 2024 00:29:05.502 read: IOPS=535, BW=2142KiB/s (2194kB/s)(20.9MiB/10004msec) 00:29:05.502 slat (nsec): min=7317, max=88065, avg=21394.38, stdev=8427.86 00:29:05.502 clat (usec): min=7975, max=55648, avg=29707.10, stdev=2003.86 00:29:05.502 lat (usec): min=7990, max=55694, avg=29728.49, stdev=2004.44 00:29:05.502 clat percentiles (usec): 00:29:05.502 | 1.00th=[29230], 5.00th=[29492], 10.00th=[29492], 20.00th=[29492], 00:29:05.502 | 30.00th=[29754], 40.00th=[29754], 50.00th=[29754], 60.00th=[29754], 00:29:05.502 | 70.00th=[29754], 80.00th=[29754], 90.00th=[30016], 95.00th=[30016], 00:29:05.502 | 99.00th=[30278], 99.50th=[31327], 99.90th=[55313], 99.95th=[55837], 00:29:05.502 | 99.99th=[55837] 00:29:05.502 bw ( KiB/s): min= 1923, max= 2192, per=4.16%, avg=2137.75, stdev=67.46, samples=20 00:29:05.502 iops : min= 480, max= 548, avg=534.40, stdev=16.99, samples=20 00:29:05.502 lat (msec) : 10=0.26%, 20=0.30%, 50=99.14%, 100=0.30% 00:29:05.502 cpu : usr=98.69%, sys=0.91%, ctx=12, majf=0, minf=49 00:29:05.502 IO depths : 1=0.2%, 2=6.4%, 4=25.0%, 8=56.1%, 16=12.3%, 32=0.0%, >=64=0.0% 00:29:05.502 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.502 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.502 issued rwts: total=5358,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:05.502 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:05.502 filename2: (groupid=0, jobs=1): err= 0: pid=3583511: Wed Jul 24 18:23:56 2024 00:29:05.502 read: IOPS=535, BW=2143KiB/s (2195kB/s)(20.9MiB/10004msec) 00:29:05.502 slat (usec): min=7, max=105, avg=28.77, stdev=17.37 00:29:05.502 clat (usec): min=7945, max=56084, avg=29595.12, stdev=2022.29 00:29:05.502 lat (usec): min=7960, max=56130, avg=29623.90, stdev=2022.45 
00:29:05.502 clat percentiles (usec): 00:29:05.502 | 1.00th=[28705], 5.00th=[28967], 10.00th=[29230], 20.00th=[29492], 00:29:05.502 | 30.00th=[29492], 40.00th=[29492], 50.00th=[29754], 60.00th=[29754], 00:29:05.502 | 70.00th=[29754], 80.00th=[29754], 90.00th=[29754], 95.00th=[30016], 00:29:05.502 | 99.00th=[30278], 99.50th=[31327], 99.90th=[55837], 99.95th=[55837], 00:29:05.502 | 99.99th=[55837] 00:29:05.502 bw ( KiB/s): min= 1923, max= 2176, per=4.16%, avg=2137.75, stdev=72.65, samples=20 00:29:05.502 iops : min= 480, max= 544, avg=534.40, stdev=18.28, samples=20 00:29:05.502 lat (msec) : 10=0.30%, 20=0.30%, 50=99.10%, 100=0.30% 00:29:05.502 cpu : usr=98.60%, sys=0.99%, ctx=20, majf=0, minf=49 00:29:05.502 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:29:05.502 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.502 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:05.502 issued rwts: total=5360,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:05.502 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:05.502 00:29:05.502 Run status group 0 (all jobs): 00:29:05.503 READ: bw=50.2MiB/s (52.6MB/s), 2135KiB/s-2164KiB/s (2186kB/s-2216kB/s), io=503MiB (527MB), run=10001-10029msec 00:29:05.503 18:23:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:29:05.503 18:23:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:29:05.503 18:23:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:05.503 18:23:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:05.503 18:23:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:29:05.503 18:23:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:05.503 18:23:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.503 18:23:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:05.503 18:23:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.503 18:23:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:05.503 18:23:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.503 18:23:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:05.503 18:23:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.503 18:23:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:05.503 18:23:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:29:05.503 18:23:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:29:05.503 18:23:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:05.503 18:23:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.503 18:23:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:05.503 18:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.503 18:23:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:29:05.503 18:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.503 18:23:57 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:05.503 18:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.503 18:23:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:05.503 18:23:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:29:05.503 18:23:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:29:05.503 18:23:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:29:05.503 18:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.503 18:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:05.503 18:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.503 18:23:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:29:05.503 18:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.503 18:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:05.503 18:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.503 18:23:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:29:05.503 18:23:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:29:05.503 18:23:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:29:05.503 18:23:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:29:05.503 18:23:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:29:05.503 18:23:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:29:05.503 18:23:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:29:05.503 18:23:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:29:05.503 18:23:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:05.503 18:23:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:29:05.503 18:23:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:29:05.503 18:23:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:29:05.503 18:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.503 18:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:05.503 bdev_null0 00:29:05.503 18:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.503 18:23:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:05.503 18:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.503 18:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:05.503 18:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.503 18:23:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:05.503 18:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.503 18:23:57 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:29:05.503 18:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.503 18:23:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:05.503 18:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.503 18:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:05.503 [2024-07-24 18:23:57.066298] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:05.503 18:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.503 18:23:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:05.503 18:23:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:29:05.503 18:23:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:29:05.503 18:23:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:29:05.503 18:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.503 18:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:05.503 bdev_null1 00:29:05.503 18:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.503 18:23:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:29:05.503 18:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.503 18:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:05.503 18:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.503 18:23:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:29:05.503 18:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.503 18:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:05.503 18:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.503 18:23:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:05.503 18:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.503 18:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:05.503 18:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.503 18:23:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:29:05.503 18:23:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:29:05.503 18:23:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:29:05.503 18:23:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:29:05.503 18:23:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:29:05.503 18:23:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:05.503 18:23:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # 
config+=("$(cat <<-EOF 00:29:05.503 { 00:29:05.503 "params": { 00:29:05.503 "name": "Nvme$subsystem", 00:29:05.503 "trtype": "$TEST_TRANSPORT", 00:29:05.503 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:05.503 "adrfam": "ipv4", 00:29:05.503 "trsvcid": "$NVMF_PORT", 00:29:05.503 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:05.503 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:05.503 "hdgst": ${hdgst:-false}, 00:29:05.503 "ddgst": ${ddgst:-false} 00:29:05.503 }, 00:29:05.503 "method": "bdev_nvme_attach_controller" 00:29:05.503 } 00:29:05.503 EOF 00:29:05.503 )") 00:29:05.503 18:23:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:29:05.503 18:23:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:29:05.503 18:23:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:05.503 18:23:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:29:05.503 18:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:05.503 18:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:05.503 18:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:05.503 18:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:05.503 18:23:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:29:05.503 18:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:05.503 18:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:29:05.503 18:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:05.503 18:23:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:29:05.503 18:23:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:29:05.503 18:23:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:29:05.503 18:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:05.503 18:23:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:29:05.503 18:23:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:29:05.504 18:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:05.504 18:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:29:05.504 18:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:05.504 18:23:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:05.504 18:23:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:05.504 { 00:29:05.504 "params": { 00:29:05.504 "name": "Nvme$subsystem", 00:29:05.504 "trtype": "$TEST_TRANSPORT", 00:29:05.504 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:05.504 "adrfam": "ipv4", 00:29:05.504 "trsvcid": "$NVMF_PORT", 00:29:05.504 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:05.504 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 
00:29:05.504 "hdgst": ${hdgst:-false}, 00:29:05.504 "ddgst": ${ddgst:-false} 00:29:05.504 }, 00:29:05.504 "method": "bdev_nvme_attach_controller" 00:29:05.504 } 00:29:05.504 EOF 00:29:05.504 )") 00:29:05.504 18:23:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:29:05.504 18:23:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:29:05.504 18:23:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:29:05.504 18:23:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:05.504 "params": { 00:29:05.504 "name": "Nvme0", 00:29:05.504 "trtype": "tcp", 00:29:05.504 "traddr": "10.0.0.2", 00:29:05.504 "adrfam": "ipv4", 00:29:05.504 "trsvcid": "4420", 00:29:05.504 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:05.504 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:05.504 "hdgst": false, 00:29:05.504 "ddgst": false 00:29:05.504 }, 00:29:05.504 "method": "bdev_nvme_attach_controller" 00:29:05.504 },{ 00:29:05.504 "params": { 00:29:05.504 "name": "Nvme1", 00:29:05.504 "trtype": "tcp", 00:29:05.504 "traddr": "10.0.0.2", 00:29:05.504 "adrfam": "ipv4", 00:29:05.504 "trsvcid": "4420", 00:29:05.504 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:05.504 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:05.504 "hdgst": false, 00:29:05.504 "ddgst": false 00:29:05.504 }, 00:29:05.504 "method": "bdev_nvme_attach_controller" 00:29:05.504 }' 00:29:05.504 18:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:05.504 18:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:05.504 18:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:05.504 18:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:05.504 18:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:05.504 18:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:05.504 18:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:05.504 18:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:05.504 18:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:29:05.504 18:23:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:05.504 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:29:05.504 ... 00:29:05.504 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:29:05.504 ... 
00:29:05.504 fio-3.35 00:29:05.504 Starting 4 threads 00:29:05.504 EAL: No free 2048 kB hugepages reported on node 1 00:29:10.773 00:29:10.773 filename0: (groupid=0, jobs=1): err= 0: pid=3585464: Wed Jul 24 18:24:03 2024 00:29:10.773 read: IOPS=2891, BW=22.6MiB/s (23.7MB/s)(113MiB/5003msec) 00:29:10.773 slat (nsec): min=6013, max=38716, avg=8840.52, stdev=3017.91 00:29:10.773 clat (usec): min=628, max=5444, avg=2741.36, stdev=525.71 00:29:10.773 lat (usec): min=647, max=5458, avg=2750.20, stdev=525.50 00:29:10.773 clat percentiles (usec): 00:29:10.773 | 1.00th=[ 1647], 5.00th=[ 2040], 10.00th=[ 2180], 20.00th=[ 2376], 00:29:10.773 | 30.00th=[ 2474], 40.00th=[ 2606], 50.00th=[ 2704], 60.00th=[ 2802], 00:29:10.773 | 70.00th=[ 2933], 80.00th=[ 3032], 90.00th=[ 3294], 95.00th=[ 3785], 00:29:10.773 | 99.00th=[ 4490], 99.50th=[ 4686], 99.90th=[ 5014], 99.95th=[ 5145], 00:29:10.773 | 99.99th=[ 5407] 00:29:10.773 bw ( KiB/s): min=22016, max=24640, per=26.92%, avg=23136.00, stdev=917.95, samples=10 00:29:10.773 iops : min= 2752, max= 3080, avg=2892.00, stdev=114.74, samples=10 00:29:10.773 lat (usec) : 750=0.05%, 1000=0.31% 00:29:10.773 lat (msec) : 2=3.44%, 4=92.58%, 10=3.62% 00:29:10.773 cpu : usr=96.16%, sys=3.48%, ctx=12, majf=0, minf=23 00:29:10.773 IO depths : 1=0.1%, 2=4.3%, 4=66.2%, 8=29.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:10.773 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:10.773 complete : 0=0.0%, 4=94.0%, 8=6.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:10.773 issued rwts: total=14465,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:10.774 latency : target=0, window=0, percentile=100.00%, depth=8 00:29:10.774 filename0: (groupid=0, jobs=1): err= 0: pid=3585465: Wed Jul 24 18:24:03 2024 00:29:10.774 read: IOPS=2605, BW=20.4MiB/s (21.3MB/s)(102MiB/5004msec) 00:29:10.774 slat (nsec): min=6009, max=46328, avg=8527.98, stdev=2895.99 00:29:10.774 clat (usec): min=733, max=6510, avg=3044.80, stdev=575.20 00:29:10.774 lat (usec): min=744, max=6521, avg=3053.33, stdev=574.86 00:29:10.774 clat percentiles (usec): 00:29:10.774 | 1.00th=[ 2008], 5.00th=[ 2376], 10.00th=[ 2507], 20.00th=[ 2671], 00:29:10.774 | 30.00th=[ 2769], 40.00th=[ 2835], 50.00th=[ 2933], 60.00th=[ 2999], 00:29:10.774 | 70.00th=[ 3130], 80.00th=[ 3326], 90.00th=[ 3949], 95.00th=[ 4293], 00:29:10.774 | 99.00th=[ 4948], 99.50th=[ 5014], 99.90th=[ 5538], 99.95th=[ 6128], 00:29:10.774 | 99.99th=[ 6521] 00:29:10.774 bw ( KiB/s): min=20136, max=21472, per=24.26%, avg=20853.60, stdev=487.85, samples=10 00:29:10.774 iops : min= 2517, max= 2684, avg=2606.70, stdev=60.98, samples=10 00:29:10.774 lat (usec) : 750=0.01% 00:29:10.774 lat (msec) : 2=0.89%, 4=89.97%, 10=9.13% 00:29:10.774 cpu : usr=96.38%, sys=3.26%, ctx=9, majf=0, minf=40 00:29:10.774 IO depths : 1=0.2%, 2=2.2%, 4=70.6%, 8=27.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:10.774 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:10.774 complete : 0=0.0%, 4=92.2%, 8=7.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:10.774 issued rwts: total=13039,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:10.774 latency : target=0, window=0, percentile=100.00%, depth=8 00:29:10.774 filename1: (groupid=0, jobs=1): err= 0: pid=3585466: Wed Jul 24 18:24:03 2024 00:29:10.774 read: IOPS=2578, BW=20.1MiB/s (21.1MB/s)(101MiB/5005msec) 00:29:10.774 slat (nsec): min=6019, max=78132, avg=8455.06, stdev=2950.41 00:29:10.774 clat (usec): min=1537, max=7266, avg=3078.55, stdev=576.82 00:29:10.774 lat (usec): min=1543, max=7278, avg=3087.01, stdev=576.50 
00:29:10.774 clat percentiles (usec): 00:29:10.774 | 1.00th=[ 2024], 5.00th=[ 2409], 10.00th=[ 2540], 20.00th=[ 2704], 00:29:10.774 | 30.00th=[ 2769], 40.00th=[ 2900], 50.00th=[ 2933], 60.00th=[ 3032], 00:29:10.774 | 70.00th=[ 3163], 80.00th=[ 3359], 90.00th=[ 3949], 95.00th=[ 4359], 00:29:10.774 | 99.00th=[ 4948], 99.50th=[ 5080], 99.90th=[ 5473], 99.95th=[ 6521], 00:29:10.774 | 99.99th=[ 7242] 00:29:10.774 bw ( KiB/s): min=19696, max=21168, per=24.01%, avg=20632.00, stdev=471.34, samples=10 00:29:10.774 iops : min= 2462, max= 2646, avg=2579.00, stdev=58.92, samples=10 00:29:10.774 lat (msec) : 2=0.95%, 4=89.67%, 10=9.39% 00:29:10.774 cpu : usr=96.30%, sys=3.34%, ctx=9, majf=0, minf=49 00:29:10.774 IO depths : 1=0.1%, 2=1.4%, 4=71.2%, 8=27.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:10.774 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:10.774 complete : 0=0.0%, 4=92.4%, 8=7.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:10.774 issued rwts: total=12903,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:10.774 latency : target=0, window=0, percentile=100.00%, depth=8 00:29:10.774 filename1: (groupid=0, jobs=1): err= 0: pid=3585467: Wed Jul 24 18:24:03 2024 00:29:10.774 read: IOPS=2670, BW=20.9MiB/s (21.9MB/s)(104MiB/5005msec) 00:29:10.774 slat (usec): min=6, max=186, avg= 8.72, stdev= 3.34 00:29:10.774 clat (usec): min=879, max=6557, avg=2970.12, stdev=569.99 00:29:10.774 lat (usec): min=891, max=6566, avg=2978.84, stdev=569.80 00:29:10.774 clat percentiles (usec): 00:29:10.774 | 1.00th=[ 1991], 5.00th=[ 2278], 10.00th=[ 2409], 20.00th=[ 2573], 00:29:10.774 | 30.00th=[ 2671], 40.00th=[ 2769], 50.00th=[ 2868], 60.00th=[ 2966], 00:29:10.774 | 70.00th=[ 3064], 80.00th=[ 3228], 90.00th=[ 3785], 95.00th=[ 4228], 00:29:10.774 | 99.00th=[ 4817], 99.50th=[ 4948], 99.90th=[ 5342], 99.95th=[ 6259], 00:29:10.774 | 99.99th=[ 6521] 00:29:10.774 bw ( KiB/s): min=20560, max=22160, per=24.86%, avg=21369.60, stdev=521.59, samples=10 00:29:10.774 iops : min= 2570, max= 2770, avg=2671.20, stdev=65.20, samples=10 00:29:10.774 lat (usec) : 1000=0.01% 00:29:10.774 lat (msec) : 2=1.02%, 4=91.35%, 10=7.62% 00:29:10.774 cpu : usr=96.32%, sys=3.32%, ctx=10, majf=0, minf=70 00:29:10.774 IO depths : 1=0.1%, 2=2.9%, 4=69.2%, 8=27.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:10.774 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:10.774 complete : 0=0.0%, 4=92.7%, 8=7.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:10.774 issued rwts: total=13364,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:10.774 latency : target=0, window=0, percentile=100.00%, depth=8 00:29:10.774 00:29:10.774 Run status group 0 (all jobs): 00:29:10.774 READ: bw=83.9MiB/s (88.0MB/s), 20.1MiB/s-22.6MiB/s (21.1MB/s-23.7MB/s), io=420MiB (440MB), run=5003-5005msec 00:29:10.774 18:24:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:29:10.774 18:24:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:29:10.774 18:24:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:10.774 18:24:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:10.774 18:24:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:29:10.774 18:24:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:10.774 18:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:10.774 18:24:03 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:29:10.774 18:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:10.774 18:24:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:10.774 18:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:10.774 18:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:10.774 18:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:10.774 18:24:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:10.774 18:24:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:29:10.774 18:24:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:29:10.774 18:24:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:10.774 18:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:10.774 18:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:10.774 18:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:10.774 18:24:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:29:10.774 18:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:10.774 18:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:10.774 18:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:10.774 00:29:10.774 real 0m24.221s 00:29:10.774 user 4m52.271s 00:29:10.774 sys 0m4.143s 00:29:10.774 18:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:10.774 18:24:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:10.774 ************************************ 00:29:10.774 END TEST fio_dif_rand_params 00:29:10.774 ************************************ 00:29:10.774 18:24:03 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:29:10.774 18:24:03 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:10.774 18:24:03 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:10.774 18:24:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:10.774 ************************************ 00:29:10.774 START TEST fio_dif_digest 00:29:10.774 ************************************ 00:29:10.774 18:24:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:29:10.774 18:24:03 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:29:10.774 18:24:03 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:29:10.774 18:24:03 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:29:10.774 18:24:03 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:29:10.774 18:24:03 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:29:10.774 18:24:03 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:29:10.774 18:24:03 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:29:10.774 18:24:03 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:29:10.774 18:24:03 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:29:10.774 18:24:03 nvmf_dif.fio_dif_digest -- 
target/dif.sh@128 -- # ddgst=true 00:29:10.774 18:24:03 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:29:10.774 18:24:03 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:29:10.774 18:24:03 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:29:10.774 18:24:03 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:29:10.774 18:24:03 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:29:10.774 18:24:03 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:29:10.774 18:24:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:10.774 18:24:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:10.774 bdev_null0 00:29:10.774 18:24:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:10.774 18:24:03 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:10.774 18:24:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:10.774 18:24:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:10.774 18:24:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:10.774 18:24:03 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:10.774 18:24:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:10.775 18:24:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:10.775 18:24:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:10.775 18:24:03 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:10.775 18:24:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:10.775 18:24:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:10.775 [2024-07-24 18:24:03.515486] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:10.775 18:24:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:10.775 18:24:03 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:29:10.775 18:24:03 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:29:10.775 18:24:03 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:29:10.775 18:24:03 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:29:10.775 18:24:03 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:10.775 18:24:03 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:29:10.775 18:24:03 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:10.775 18:24:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:10.775 18:24:03 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:10.775 { 00:29:10.775 "params": { 00:29:10.775 "name": "Nvme$subsystem", 00:29:10.775 "trtype": "$TEST_TRANSPORT", 00:29:10.775 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:29:10.775 "adrfam": "ipv4", 00:29:10.775 "trsvcid": "$NVMF_PORT", 00:29:10.775 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:10.775 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:10.775 "hdgst": ${hdgst:-false}, 00:29:10.775 "ddgst": ${ddgst:-false} 00:29:10.775 }, 00:29:10.775 "method": "bdev_nvme_attach_controller" 00:29:10.775 } 00:29:10.775 EOF 00:29:10.775 )") 00:29:10.775 18:24:03 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:29:10.775 18:24:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:10.775 18:24:03 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:29:10.775 18:24:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:10.775 18:24:03 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:29:10.775 18:24:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:10.775 18:24:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:10.775 18:24:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:29:10.775 18:24:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:10.775 18:24:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:10.775 18:24:03 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:29:10.775 18:24:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:10.775 18:24:03 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:29:10.775 18:24:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:29:10.775 18:24:03 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:29:10.775 18:24:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:10.775 18:24:03 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:29:10.775 18:24:03 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:29:10.775 18:24:03 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:10.775 "params": { 00:29:10.775 "name": "Nvme0", 00:29:10.775 "trtype": "tcp", 00:29:10.775 "traddr": "10.0.0.2", 00:29:10.775 "adrfam": "ipv4", 00:29:10.775 "trsvcid": "4420", 00:29:10.775 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:10.775 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:10.775 "hdgst": true, 00:29:10.775 "ddgst": true 00:29:10.775 }, 00:29:10.775 "method": "bdev_nvme_attach_controller" 00:29:10.775 }' 00:29:10.775 18:24:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:10.775 18:24:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:10.775 18:24:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:10.775 18:24:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:10.775 18:24:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:10.775 18:24:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:10.775 18:24:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:10.775 18:24:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:10.775 18:24:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:29:10.775 18:24:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:11.033 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:29:11.033 ... 
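Relative to the earlier runs, this digest pass creates the null bdev with --dif-type 3 (protection information type 3) and attaches the controller with both digest parameters set to true; in NVMe/TCP these enable CRC32C header and data digests on every PDU, so the 128k workload exercises checksum generation and verification on top of the DIF metadata path. The relevant fragment of the generated config, as printed above (reformatted only):

    {"method": "bdev_nvme_attach_controller",
     "params": {"name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
                "adrfam": "ipv4", "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": true, "ddgst": true}}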
00:29:11.033 fio-3.35 00:29:11.033 Starting 3 threads 00:29:11.033 EAL: No free 2048 kB hugepages reported on node 1 00:29:23.235 00:29:23.235 filename0: (groupid=0, jobs=1): err= 0: pid=3586820: Wed Jul 24 18:24:14 2024 00:29:23.235 read: IOPS=295, BW=37.0MiB/s (38.8MB/s)(372MiB/10049msec) 00:29:23.235 slat (usec): min=6, max=182, avg=11.77, stdev= 3.67 00:29:23.235 clat (usec): min=7394, max=50206, avg=10108.31, stdev=1237.62 00:29:23.235 lat (usec): min=7402, max=50214, avg=10120.08, stdev=1237.40 00:29:23.235 clat percentiles (usec): 00:29:23.235 | 1.00th=[ 8455], 5.00th=[ 8848], 10.00th=[ 9241], 20.00th=[ 9503], 00:29:23.235 | 30.00th=[ 9765], 40.00th=[ 9896], 50.00th=[10159], 60.00th=[10290], 00:29:23.235 | 70.00th=[10421], 80.00th=[10683], 90.00th=[10945], 95.00th=[11207], 00:29:23.235 | 99.00th=[11731], 99.50th=[12125], 99.90th=[14353], 99.95th=[48497], 00:29:23.235 | 99.99th=[50070] 00:29:23.235 bw ( KiB/s): min=36864, max=38912, per=35.21%, avg=38041.60, stdev=527.92, samples=20 00:29:23.236 iops : min= 288, max= 304, avg=297.20, stdev= 4.12, samples=20 00:29:23.236 lat (msec) : 10=43.91%, 20=56.02%, 50=0.03%, 100=0.03% 00:29:23.236 cpu : usr=94.22%, sys=5.45%, ctx=47, majf=0, minf=100 00:29:23.236 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:23.236 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:23.236 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:23.236 issued rwts: total=2974,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:23.236 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:23.236 filename0: (groupid=0, jobs=1): err= 0: pid=3586821: Wed Jul 24 18:24:14 2024 00:29:23.236 read: IOPS=275, BW=34.4MiB/s (36.1MB/s)(346MiB/10046msec) 00:29:23.236 slat (usec): min=6, max=243, avg=11.71, stdev= 4.88 00:29:23.236 clat (usec): min=8380, max=47396, avg=10871.16, stdev=1196.71 00:29:23.236 lat (usec): min=8392, max=47405, avg=10882.88, stdev=1196.68 00:29:23.236 clat percentiles (usec): 00:29:23.236 | 1.00th=[ 9241], 5.00th=[ 9765], 10.00th=[ 9896], 20.00th=[10290], 00:29:23.236 | 30.00th=[10421], 40.00th=[10683], 50.00th=[10814], 60.00th=[10945], 00:29:23.236 | 70.00th=[11207], 80.00th=[11469], 90.00th=[11731], 95.00th=[11994], 00:29:23.236 | 99.00th=[12649], 99.50th=[12911], 99.90th=[14091], 99.95th=[45351], 00:29:23.236 | 99.99th=[47449] 00:29:23.236 bw ( KiB/s): min=34629, max=36096, per=32.74%, avg=35369.85, stdev=375.70, samples=20 00:29:23.236 iops : min= 270, max= 282, avg=276.30, stdev= 2.99, samples=20 00:29:23.236 lat (msec) : 10=10.99%, 20=88.93%, 50=0.07% 00:29:23.236 cpu : usr=94.38%, sys=5.31%, ctx=23, majf=0, minf=146 00:29:23.236 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:23.236 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:23.236 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:23.236 issued rwts: total=2765,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:23.236 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:23.236 filename0: (groupid=0, jobs=1): err= 0: pid=3586822: Wed Jul 24 18:24:14 2024 00:29:23.236 read: IOPS=274, BW=34.3MiB/s (35.9MB/s)(343MiB/10005msec) 00:29:23.236 slat (usec): min=6, max=148, avg=11.57, stdev= 3.31 00:29:23.236 clat (usec): min=8184, max=14727, avg=10927.71, stdev=763.62 00:29:23.236 lat (usec): min=8190, max=14741, avg=10939.28, stdev=763.66 00:29:23.236 clat percentiles (usec): 00:29:23.236 | 1.00th=[ 9110], 5.00th=[ 9765], 
10.00th=[10028], 20.00th=[10290], 00:29:23.236 | 30.00th=[10552], 40.00th=[10683], 50.00th=[10945], 60.00th=[11076], 00:29:23.236 | 70.00th=[11207], 80.00th=[11600], 90.00th=[11863], 95.00th=[12256], 00:29:23.236 | 99.00th=[12911], 99.50th=[13173], 99.90th=[14484], 99.95th=[14484], 00:29:23.236 | 99.99th=[14746] 00:29:23.236 bw ( KiB/s): min=34048, max=36096, per=32.50%, avg=35112.42, stdev=520.91, samples=19 00:29:23.236 iops : min= 266, max= 282, avg=274.32, stdev= 4.07, samples=19 00:29:23.236 lat (msec) : 10=9.66%, 20=90.34% 00:29:23.236 cpu : usr=94.84%, sys=4.83%, ctx=29, majf=0, minf=118 00:29:23.236 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:23.236 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:23.236 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:23.236 issued rwts: total=2743,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:23.236 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:23.236 00:29:23.236 Run status group 0 (all jobs): 00:29:23.236 READ: bw=106MiB/s (111MB/s), 34.3MiB/s-37.0MiB/s (35.9MB/s-38.8MB/s), io=1060MiB (1112MB), run=10005-10049msec 00:29:23.236 18:24:14 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:29:23.236 18:24:14 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:29:23.236 18:24:14 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:29:23.236 18:24:14 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:23.236 18:24:14 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:29:23.236 18:24:14 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:23.236 18:24:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:23.236 18:24:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:23.236 18:24:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:23.236 18:24:14 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:23.236 18:24:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:23.236 18:24:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:23.236 18:24:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:23.236 00:29:23.236 real 0m11.152s 00:29:23.236 user 0m34.962s 00:29:23.236 sys 0m1.854s 00:29:23.236 18:24:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:23.236 18:24:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:23.236 ************************************ 00:29:23.236 END TEST fio_dif_digest 00:29:23.236 ************************************ 00:29:23.236 18:24:14 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:29:23.236 18:24:14 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:29:23.236 18:24:14 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:23.236 18:24:14 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:29:23.236 18:24:14 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:23.236 18:24:14 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:29:23.236 18:24:14 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:23.236 18:24:14 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:23.236 rmmod nvme_tcp 00:29:23.236 rmmod nvme_fabrics 00:29:23.236 rmmod nvme_keyring 00:29:23.236 18:24:14 
nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:23.236 18:24:14 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:29:23.236 18:24:14 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:29:23.236 18:24:14 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 3578126 ']' 00:29:23.236 18:24:14 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 3578126 00:29:23.236 18:24:14 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 3578126 ']' 00:29:23.236 18:24:14 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 3578126 00:29:23.236 18:24:14 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:29:23.236 18:24:14 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:23.236 18:24:14 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3578126 00:29:23.236 18:24:14 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:23.236 18:24:14 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:23.236 18:24:14 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3578126' 00:29:23.236 killing process with pid 3578126 00:29:23.236 18:24:14 nvmf_dif -- common/autotest_common.sh@969 -- # kill 3578126 00:29:23.236 18:24:14 nvmf_dif -- common/autotest_common.sh@974 -- # wait 3578126 00:29:23.236 18:24:14 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:29:23.236 18:24:14 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:29:24.173 Waiting for block devices as requested 00:29:24.173 0000:5f:00.0 (8086 0a54): vfio-pci -> nvme 00:29:24.432 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:24.432 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:24.432 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:24.691 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:24.691 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:24.691 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:24.950 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:24.950 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:24.950 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:24.950 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:25.209 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:25.209 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:25.209 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:25.209 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:25.468 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:25.468 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:25.468 18:24:18 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:25.468 18:24:18 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:25.468 18:24:18 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:25.468 18:24:18 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:25.468 18:24:18 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:25.468 18:24:18 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:25.468 18:24:18 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:28.004 18:24:20 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:28.004 00:29:28.004 real 1m12.722s 00:29:28.004 user 7m9.194s 00:29:28.004 sys 0m18.029s 00:29:28.004 18:24:20 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:28.004 18:24:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:28.004 
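The teardown traced here follows a fixed order: unload the kernel initiator modules, stop the target application, return the PCI devices to their default drivers, and dismantle the test network. A condensed sketch (the pid and interface names are specific to this run, and _remove_spdk_ns is rendered as an ip netns delete, which is an assumption about its implementation):

    modprobe -v -r nvme-tcp              # rmmod nvme_tcp / nvme_fabrics / nvme_keyring above
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"   # killprocess 3578126: stop the nvmf target app
    ./scripts/setup.sh reset             # rebind devices: vfio-pci -> nvme / ioatdma
    ip netns delete cvl_0_0_ns_spdk      # _remove_spdk_ns (assumed equivalent)
    ip -4 addr flush cvl_0_1             # drop the initiator-side test address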
************************************ 00:29:28.004 END TEST nvmf_dif 00:29:28.004 ************************************ 00:29:28.004 18:24:20 -- spdk/autotest.sh@297 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:29:28.004 18:24:20 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:28.004 18:24:20 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:28.004 18:24:20 -- common/autotest_common.sh@10 -- # set +x 00:29:28.004 ************************************ 00:29:28.004 START TEST nvmf_abort_qd_sizes 00:29:28.004 ************************************ 00:29:28.004 18:24:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:29:28.004 * Looking for test storage... 00:29:28.004 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:28.004 18:24:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:28.004 18:24:20 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:29:28.004 18:24:20 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:28.004 18:24:20 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:28.004 18:24:20 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:28.004 18:24:20 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:28.004 18:24:20 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:28.004 18:24:20 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:28.004 18:24:20 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:28.004 18:24:20 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:28.004 18:24:20 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:28.004 18:24:20 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:28.004 18:24:20 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:29:28.004 18:24:20 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:29:28.004 18:24:20 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:28.004 18:24:20 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:28.004 18:24:20 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:28.004 18:24:20 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:28.004 18:24:20 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:28.004 18:24:20 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:28.004 18:24:20 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:28.004 18:24:20 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:28.004 18:24:20 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.004 18:24:20 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.004 18:24:20 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.004 18:24:20 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:29:28.004 18:24:20 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.004 18:24:20 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:29:28.004 18:24:20 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:28.004 18:24:20 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:28.004 18:24:20 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:28.004 18:24:20 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:28.004 18:24:20 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:28.005 18:24:20 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:28.005 18:24:20 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:28.005 18:24:20 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:28.005 18:24:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:29:28.005 18:24:20 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:28.005 18:24:20 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:28.005 18:24:20 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:28.005 18:24:20 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:28.005 18:24:20 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:28.005 18:24:20 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:28.005 18:24:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:28.005 18:24:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:28.005 18:24:20 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:28.005 18:24:20 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:28.005 18:24:20 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:29:28.005 18:24:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:33.269 18:24:25 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:33.269 18:24:25 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:29:33.269 18:24:25 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:33.269 18:24:25 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:33.269 18:24:25 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:33.269 18:24:25 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:33.269 18:24:25 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:33.269 18:24:25 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:29:33.269 18:24:25 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:33.269 18:24:25 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:29:33.269 18:24:25 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:29:33.269 18:24:25 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:29:33.269 18:24:25 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:29:33.269 18:24:25 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:29:33.269 18:24:25 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:29:33.269 18:24:25 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:33.269 18:24:25 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:33.269 18:24:25 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:33.269 18:24:25 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:33.270 18:24:25 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:33.270 18:24:25 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:33.270 18:24:25 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:33.270 18:24:25 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:33.270 18:24:25 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:33.270 18:24:25 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:33.270 18:24:25 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:33.270 18:24:25 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:33.270 18:24:25 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:33.270 18:24:25 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:33.270 18:24:25 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:33.270 18:24:25 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:33.270 18:24:25 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:33.270 18:24:25 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:33.270 18:24:25 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:33.270 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:33.270 18:24:25 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:33.270 18:24:25 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:33.270 18:24:25 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:33.270 18:24:25 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:33.270 18:24:25 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:33.270 18:24:25 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:33.270 18:24:25 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:33.270 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:33.270 18:24:25 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:33.270 18:24:25 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:33.270 18:24:25 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:33.270 18:24:25 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:33.270 18:24:25 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:33.270 18:24:25 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:33.270 18:24:25 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:33.270 18:24:25 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:33.270 18:24:25 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:33.270 18:24:25 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:33.270 18:24:25 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:33.270 18:24:25 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:33.270 18:24:25 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:33.270 18:24:25 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:33.270 18:24:25 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:33.270 18:24:25 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:33.270 Found net devices under 0000:86:00.0: cvl_0_0 00:29:33.270 18:24:25 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:33.270 18:24:25 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:33.270 18:24:25 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:33.270 18:24:25 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:33.270 18:24:25 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:33.270 18:24:25 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:33.270 18:24:25 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:33.270 18:24:25 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:33.270 18:24:25 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:33.270 Found net devices under 0000:86:00.1: cvl_0_1 00:29:33.270 18:24:25 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:33.270 18:24:25 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
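[Editor's note] The xtrace above is the harness's PCI-to-netdev discovery: for each supported NIC function it globs the per-device sysfs net/ directory and records the interface names (here cvl_0_0 and cvl_0_1). A minimal standalone sketch of that sysfs walk — illustration only, not harness code, with 0000:86:00.0 used as an example BDF taken from the log — would be:

    #!/usr/bin/env bash
    # Sketch of the pci_net_devs glob seen in the gather_supported_nvmf_pci_devs trace:
    # list the net interfaces that sit behind one PCI function.
    pci=0000:86:00.0                                   # example BDF from the log
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # one sysfs entry per netdev
    [[ -e ${pci_net_devs[0]} ]] || { echo "no net devices under $pci"; exit 1; }
    pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the path, keep names
    for net_dev in "${pci_net_devs[@]}"; do
        echo "Found net device under $pci: $net_dev"
    done

In the run above this yields cvl_0_0 and cvl_0_1, which the nvmf_tcp_init step that follows moves into the cvl_0_0_ns_spdk namespace and addresses as 10.0.0.2 and 10.0.0.1.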
00:29:33.270 18:24:25 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:29:33.270 18:24:25 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:33.270 18:24:25 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:33.270 18:24:25 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:33.270 18:24:25 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:33.270 18:24:25 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:33.270 18:24:25 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:33.270 18:24:25 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:33.270 18:24:25 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:33.270 18:24:25 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:33.270 18:24:25 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:33.270 18:24:25 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:33.270 18:24:25 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:33.270 18:24:25 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:33.270 18:24:25 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:33.270 18:24:25 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:33.270 18:24:25 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:33.270 18:24:25 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:33.270 18:24:25 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:33.270 18:24:25 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:33.270 18:24:25 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:33.270 18:24:26 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:33.270 18:24:26 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:33.270 18:24:26 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:33.270 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:33.270 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:29:33.270 00:29:33.270 --- 10.0.0.2 ping statistics --- 00:29:33.270 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:33.270 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:29:33.270 18:24:26 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:33.270 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:33.270 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:29:33.270 00:29:33.270 --- 10.0.0.1 ping statistics --- 00:29:33.270 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:33.270 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:29:33.270 18:24:26 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:33.270 18:24:26 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:29:33.270 18:24:26 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:29:33.270 18:24:26 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:35.800 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:29:35.800 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:29:35.800 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:29:35.800 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:29:35.800 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:29:35.800 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:29:35.800 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:29:35.800 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:29:35.800 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:29:35.800 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:29:35.800 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:29:35.800 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:29:35.800 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:29:35.800 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:29:35.800 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:29:35.800 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:29:37.176 0000:5f:00.0 (8086 0a54): nvme -> vfio-pci 00:29:37.434 18:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:37.434 18:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:37.434 18:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:37.434 18:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:37.434 18:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:37.434 18:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:37.434 18:24:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:29:37.434 18:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:37.434 18:24:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:37.434 18:24:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:37.434 18:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=3595022 00:29:37.434 18:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 3595022 00:29:37.434 18:24:30 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:29:37.434 18:24:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 3595022 ']' 00:29:37.434 18:24:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:37.434 18:24:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:37.434 18:24:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:29:37.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:37.434 18:24:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:37.434 18:24:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:37.434 [2024-07-24 18:24:30.351715] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 00:29:37.434 [2024-07-24 18:24:30.351760] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:37.434 EAL: No free 2048 kB hugepages reported on node 1 00:29:37.434 [2024-07-24 18:24:30.411105] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:37.434 [2024-07-24 18:24:30.492519] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:37.434 [2024-07-24 18:24:30.492557] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:37.434 [2024-07-24 18:24:30.492563] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:37.434 [2024-07-24 18:24:30.492569] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:37.434 [2024-07-24 18:24:30.492574] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:37.434 [2024-07-24 18:24:30.492615] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:37.434 [2024-07-24 18:24:30.492714] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:37.434 [2024-07-24 18:24:30.492827] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:37.434 [2024-07-24 18:24:30.492828] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:38.365 18:24:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:38.365 18:24:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:29:38.365 18:24:31 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:38.365 18:24:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:38.365 18:24:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:38.365 18:24:31 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:38.365 18:24:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:29:38.365 18:24:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:29:38.365 18:24:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:29:38.365 18:24:31 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:29:38.365 18:24:31 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:29:38.365 18:24:31 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:5f:00.0 ]] 00:29:38.365 18:24:31 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:29:38.365 18:24:31 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:29:38.365 18:24:31 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5f:00.0 ]] 00:29:38.365 18:24:31 
nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:29:38.365 18:24:31 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:29:38.365 18:24:31 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:29:38.365 18:24:31 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:29:38.365 18:24:31 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:5f:00.0 00:29:38.365 18:24:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:29:38.365 18:24:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5f:00.0 00:29:38.365 18:24:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:29:38.365 18:24:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:38.365 18:24:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:38.365 18:24:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:38.365 ************************************ 00:29:38.365 START TEST spdk_target_abort 00:29:38.365 ************************************ 00:29:38.365 18:24:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:29:38.365 18:24:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:29:38.365 18:24:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5f:00.0 -b spdk_target 00:29:38.365 18:24:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:38.365 18:24:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:41.646 spdk_targetn1 00:29:41.646 18:24:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:41.646 18:24:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:41.646 18:24:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:41.646 18:24:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:41.646 [2024-07-24 18:24:34.068012] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:41.646 18:24:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:41.646 18:24:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:29:41.646 18:24:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:41.646 18:24:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:41.646 18:24:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:41.646 18:24:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:29:41.646 18:24:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:41.646 18:24:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:41.646 18:24:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:41.646 18:24:34 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:29:41.646 18:24:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:41.646 18:24:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:41.646 [2024-07-24 18:24:34.104836] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:41.646 18:24:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:41.646 18:24:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:29:41.646 18:24:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:29:41.646 18:24:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:29:41.646 18:24:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:29:41.646 18:24:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:29:41.646 18:24:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:29:41.646 18:24:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:29:41.646 18:24:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:29:41.646 18:24:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:29:41.646 18:24:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:41.646 18:24:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:29:41.646 18:24:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:41.646 18:24:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:29:41.646 18:24:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:41.646 18:24:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:29:41.646 18:24:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:41.646 18:24:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:41.646 18:24:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:41.646 18:24:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:41.646 18:24:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:41.646 18:24:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:41.647 EAL: No free 2048 kB hugepages 
reported on node 1 00:29:44.233 Initializing NVMe Controllers 00:29:44.233 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:29:44.233 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:44.233 Initialization complete. Launching workers. 00:29:44.233 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 15179, failed: 0 00:29:44.233 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1252, failed to submit 13927 00:29:44.233 success 704, unsuccess 548, failed 0 00:29:44.233 18:24:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:44.233 18:24:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:44.490 EAL: No free 2048 kB hugepages reported on node 1 00:29:47.801 Initializing NVMe Controllers 00:29:47.801 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:29:47.801 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:47.801 Initialization complete. Launching workers. 00:29:47.801 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8518, failed: 0 00:29:47.801 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1250, failed to submit 7268 00:29:47.801 success 306, unsuccess 944, failed 0 00:29:47.801 18:24:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:47.801 18:24:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:47.801 EAL: No free 2048 kB hugepages reported on node 1 00:29:51.084 Initializing NVMe Controllers 00:29:51.084 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:29:51.084 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:51.084 Initialization complete. Launching workers. 
00:29:51.084 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 38754, failed: 0 00:29:51.084 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2840, failed to submit 35914 00:29:51.084 success 595, unsuccess 2245, failed 0 00:29:51.084 18:24:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:29:51.084 18:24:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:51.084 18:24:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:51.084 18:24:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:51.084 18:24:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:29:51.084 18:24:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:51.084 18:24:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:52.984 18:24:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:52.984 18:24:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 3595022 00:29:52.984 18:24:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 3595022 ']' 00:29:52.984 18:24:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 3595022 00:29:52.984 18:24:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:29:52.984 18:24:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:52.984 18:24:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3595022 00:29:52.984 18:24:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:52.984 18:24:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:52.984 18:24:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3595022' 00:29:52.984 killing process with pid 3595022 00:29:52.984 18:24:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 3595022 00:29:52.984 18:24:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 3595022 00:29:52.984 00:29:52.984 real 0m14.729s 00:29:52.984 user 0m58.760s 00:29:52.984 sys 0m2.255s 00:29:52.984 18:24:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:52.984 18:24:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:52.984 ************************************ 00:29:52.984 END TEST spdk_target_abort 00:29:52.984 ************************************ 00:29:52.984 18:24:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:29:52.984 18:24:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:52.984 18:24:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:52.984 18:24:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:52.984 ************************************ 00:29:52.984 START TEST kernel_target_abort 00:29:52.984 
************************************ 00:29:52.984 18:24:46 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:29:52.984 18:24:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:29:52.984 18:24:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:29:52.984 18:24:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:52.984 18:24:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:52.984 18:24:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:52.984 18:24:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:52.984 18:24:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:52.984 18:24:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:52.984 18:24:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:52.984 18:24:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:52.984 18:24:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:52.984 18:24:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:29:52.984 18:24:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:29:52.984 18:24:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:29:52.984 18:24:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:52.984 18:24:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:52.984 18:24:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:29:52.984 18:24:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:29:52.984 18:24:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:29:52.984 18:24:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:29:53.243 18:24:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:29:53.243 18:24:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:29:55.773 Waiting for block devices as requested 00:29:55.773 0000:5f:00.0 (8086 0a54): vfio-pci -> nvme 00:29:55.773 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:55.773 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:55.773 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:56.031 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:56.031 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:56.031 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:56.031 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:56.289 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:56.289 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:56.289 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:56.289 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:56.548 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:56.548 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:56.548 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:56.807 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:56.807 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:56.807 18:24:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:29:56.807 18:24:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:29:56.807 18:24:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:29:56.807 18:24:49 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:29:56.807 18:24:49 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:29:56.807 18:24:49 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:29:56.807 18:24:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:29:56.807 18:24:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:29:56.807 18:24:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:29:56.807 No valid GPT data, bailing 00:29:56.807 18:24:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:29:57.063 18:24:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:29:57.063 18:24:49 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:29:57.063 18:24:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:29:57.063 18:24:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:29:57.063 18:24:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:57.063 18:24:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:57.063 18:24:49 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:29:57.063 18:24:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:29:57.063 18:24:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:29:57.063 18:24:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:29:57.063 18:24:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:29:57.063 18:24:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:29:57.063 18:24:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:29:57.063 18:24:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:29:57.063 18:24:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:29:57.063 18:24:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:29:57.063 18:24:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 --hostid=803833e2-2ada-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:29:57.063 00:29:57.063 Discovery Log Number of Records 2, Generation counter 2 00:29:57.063 =====Discovery Log Entry 0====== 00:29:57.063 trtype: tcp 00:29:57.063 adrfam: ipv4 00:29:57.063 subtype: current discovery subsystem 00:29:57.063 treq: not specified, sq flow control disable supported 00:29:57.063 portid: 1 00:29:57.063 trsvcid: 4420 00:29:57.063 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:29:57.063 traddr: 10.0.0.1 00:29:57.063 eflags: none 00:29:57.063 sectype: none 00:29:57.063 =====Discovery Log Entry 1====== 00:29:57.063 trtype: tcp 00:29:57.063 adrfam: ipv4 00:29:57.063 subtype: nvme subsystem 00:29:57.063 treq: not specified, sq flow control disable supported 00:29:57.063 portid: 1 00:29:57.063 trsvcid: 4420 00:29:57.063 subnqn: nqn.2016-06.io.spdk:testnqn 00:29:57.063 traddr: 10.0.0.1 00:29:57.063 eflags: none 00:29:57.063 sectype: none 00:29:57.063 18:24:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:29:57.063 18:24:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:29:57.063 18:24:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:29:57.063 18:24:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:29:57.063 18:24:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:29:57.063 18:24:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:29:57.063 18:24:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:29:57.063 18:24:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:29:57.063 18:24:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:29:57.063 18:24:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:57.063 18:24:50 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:29:57.063 18:24:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:57.063 18:24:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:29:57.063 18:24:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:57.064 18:24:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:29:57.064 18:24:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:57.064 18:24:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:29:57.064 18:24:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:57.064 18:24:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:57.064 18:24:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:57.064 18:24:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:57.064 EAL: No free 2048 kB hugepages reported on node 1 00:30:00.347 Initializing NVMe Controllers 00:30:00.347 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:30:00.347 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:00.347 Initialization complete. Launching workers. 00:30:00.347 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 90221, failed: 0 00:30:00.348 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 90221, failed to submit 0 00:30:00.348 success 0, unsuccess 90221, failed 0 00:30:00.348 18:24:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:00.348 18:24:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:00.348 EAL: No free 2048 kB hugepages reported on node 1 00:30:03.628 Initializing NVMe Controllers 00:30:03.628 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:30:03.628 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:03.628 Initialization complete. Launching workers. 
00:30:03.628 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 145281, failed: 0 00:30:03.628 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36274, failed to submit 109007 00:30:03.628 success 0, unsuccess 36274, failed 0 00:30:03.628 18:24:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:03.628 18:24:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:03.628 EAL: No free 2048 kB hugepages reported on node 1 00:30:06.910 Initializing NVMe Controllers 00:30:06.910 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:30:06.910 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:06.910 Initialization complete. Launching workers. 00:30:06.910 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 139036, failed: 0 00:30:06.910 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 34818, failed to submit 104218 00:30:06.910 success 0, unsuccess 34818, failed 0 00:30:06.910 18:24:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:30:06.910 18:24:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:30:06.910 18:24:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:30:06.910 18:24:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:06.910 18:24:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:06.910 18:24:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:30:06.910 18:24:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:06.910 18:24:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:30:06.910 18:24:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:30:06.910 18:24:59 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:08.812 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:30:08.812 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:30:08.812 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:30:08.812 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:30:08.812 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:30:08.812 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:30:08.812 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:30:08.812 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:30:08.812 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:30:08.812 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:30:08.812 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:30:08.812 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:30:08.812 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:30:08.812 0000:80:04.2 (8086 2021): ioatdma 
-> vfio-pci 00:30:08.812 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:30:08.812 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:30:10.188 0000:5f:00.0 (8086 0a54): nvme -> vfio-pci 00:30:10.445 00:30:10.445 real 0m17.249s 00:30:10.445 user 0m8.408s 00:30:10.445 sys 0m4.700s 00:30:10.445 18:25:03 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:10.445 18:25:03 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:10.445 ************************************ 00:30:10.445 END TEST kernel_target_abort 00:30:10.445 ************************************ 00:30:10.445 18:25:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:30:10.445 18:25:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:30:10.445 18:25:03 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:10.445 18:25:03 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:30:10.445 18:25:03 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:10.445 18:25:03 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:30:10.445 18:25:03 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:10.445 18:25:03 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:10.445 rmmod nvme_tcp 00:30:10.445 rmmod nvme_fabrics 00:30:10.445 rmmod nvme_keyring 00:30:10.445 18:25:03 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:10.445 18:25:03 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:30:10.445 18:25:03 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:30:10.445 18:25:03 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 3595022 ']' 00:30:10.445 18:25:03 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 3595022 00:30:10.445 18:25:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 3595022 ']' 00:30:10.445 18:25:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 3595022 00:30:10.445 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (3595022) - No such process 00:30:10.445 18:25:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 3595022 is not found' 00:30:10.445 Process with pid 3595022 is not found 00:30:10.445 18:25:03 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:30:10.445 18:25:03 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:30:12.974 Waiting for block devices as requested 00:30:12.974 0000:5f:00.0 (8086 0a54): vfio-pci -> nvme 00:30:12.974 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:30:13.245 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:30:13.245 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:30:13.245 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:30:13.245 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:30:13.559 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:30:13.559 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:30:13.559 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:30:13.559 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:30:13.559 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:30:13.818 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:30:13.818 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:30:13.818 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:30:13.818 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:30:14.076 
0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:30:14.076 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:30:14.076 18:25:07 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:14.076 18:25:07 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:14.076 18:25:07 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:14.076 18:25:07 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:14.076 18:25:07 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:14.076 18:25:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:14.076 18:25:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:16.608 18:25:09 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:16.608 00:30:16.608 real 0m48.569s 00:30:16.608 user 1m11.226s 00:30:16.608 sys 0m14.936s 00:30:16.608 18:25:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:16.608 18:25:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:16.608 ************************************ 00:30:16.608 END TEST nvmf_abort_qd_sizes 00:30:16.608 ************************************ 00:30:16.608 18:25:09 -- spdk/autotest.sh@299 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:30:16.608 18:25:09 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:16.608 18:25:09 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:16.608 18:25:09 -- common/autotest_common.sh@10 -- # set +x 00:30:16.608 ************************************ 00:30:16.608 START TEST keyring_file 00:30:16.608 ************************************ 00:30:16.608 18:25:09 keyring_file -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:30:16.608 * Looking for test storage... 
00:30:16.608 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:30:16.608 18:25:09 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:30:16.608 18:25:09 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:16.608 18:25:09 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:30:16.608 18:25:09 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:16.608 18:25:09 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:16.608 18:25:09 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:16.608 18:25:09 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:16.608 18:25:09 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:16.608 18:25:09 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:16.608 18:25:09 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:16.608 18:25:09 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:16.608 18:25:09 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:16.608 18:25:09 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:16.608 18:25:09 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:30:16.608 18:25:09 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:30:16.608 18:25:09 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:16.608 18:25:09 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:16.608 18:25:09 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:16.608 18:25:09 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:16.608 18:25:09 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:16.608 18:25:09 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:16.608 18:25:09 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:16.608 18:25:09 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:16.608 18:25:09 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:16.608 18:25:09 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:16.608 18:25:09 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:16.609 18:25:09 keyring_file -- paths/export.sh@5 -- # export PATH 00:30:16.609 18:25:09 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:16.609 18:25:09 keyring_file -- nvmf/common.sh@47 -- # : 0 00:30:16.609 18:25:09 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:16.609 18:25:09 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:16.609 18:25:09 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:16.609 18:25:09 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:16.609 18:25:09 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:16.609 18:25:09 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:16.609 18:25:09 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:16.609 18:25:09 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:16.609 18:25:09 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:30:16.609 18:25:09 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:30:16.609 18:25:09 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:30:16.609 18:25:09 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:30:16.609 18:25:09 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:30:16.609 18:25:09 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:30:16.609 18:25:09 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:30:16.609 18:25:09 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:30:16.609 18:25:09 keyring_file -- keyring/common.sh@17 -- # name=key0 00:30:16.609 18:25:09 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:30:16.609 18:25:09 keyring_file -- keyring/common.sh@17 -- # digest=0 00:30:16.609 18:25:09 keyring_file -- keyring/common.sh@18 -- # mktemp 00:30:16.609 18:25:09 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.uPrD3k2Mwe 00:30:16.609 18:25:09 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:30:16.609 18:25:09 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:30:16.609 18:25:09 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:30:16.609 18:25:09 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:30:16.609 18:25:09 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:30:16.609 18:25:09 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:30:16.609 18:25:09 keyring_file -- nvmf/common.sh@705 -- # python - 00:30:16.609 18:25:09 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.uPrD3k2Mwe 00:30:16.609 18:25:09 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.uPrD3k2Mwe 00:30:16.609 18:25:09 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.uPrD3k2Mwe 00:30:16.609 18:25:09 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:30:16.609 18:25:09 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:30:16.609 18:25:09 keyring_file -- keyring/common.sh@17 -- # name=key1 00:30:16.609 18:25:09 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:30:16.609 18:25:09 keyring_file -- keyring/common.sh@17 -- # digest=0 00:30:16.609 18:25:09 keyring_file -- keyring/common.sh@18 -- # mktemp 00:30:16.609 18:25:09 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.umLhXSVZRB 00:30:16.609 18:25:09 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:30:16.609 18:25:09 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:30:16.609 18:25:09 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:30:16.609 18:25:09 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:30:16.609 18:25:09 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:30:16.609 18:25:09 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:30:16.609 18:25:09 keyring_file -- nvmf/common.sh@705 -- # python - 00:30:16.609 18:25:09 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.umLhXSVZRB 00:30:16.609 18:25:09 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.umLhXSVZRB 00:30:16.609 18:25:09 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.umLhXSVZRB 00:30:16.609 18:25:09 keyring_file -- keyring/file.sh@30 -- # tgtpid=3603815 00:30:16.609 18:25:09 keyring_file -- keyring/file.sh@32 -- # waitforlisten 3603815 00:30:16.609 18:25:09 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:30:16.609 18:25:09 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 3603815 ']' 00:30:16.609 18:25:09 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:16.609 18:25:09 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:16.609 18:25:09 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:16.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:16.609 18:25:09 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:16.609 18:25:09 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:16.609 [2024-07-24 18:25:09.536168] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 
00:30:16.609 [2024-07-24 18:25:09.536221] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3603815 ] 00:30:16.609 EAL: No free 2048 kB hugepages reported on node 1 00:30:16.609 [2024-07-24 18:25:09.588842] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:16.609 [2024-07-24 18:25:09.668800] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:17.537 18:25:10 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:17.537 18:25:10 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:30:17.537 18:25:10 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:30:17.537 18:25:10 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.537 18:25:10 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:17.537 [2024-07-24 18:25:10.319814] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:17.537 null0 00:30:17.537 [2024-07-24 18:25:10.351868] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:30:17.537 [2024-07-24 18:25:10.352060] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:17.537 [2024-07-24 18:25:10.359880] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:30:17.537 18:25:10 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.537 18:25:10 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:30:17.537 18:25:10 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:30:17.537 18:25:10 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:30:17.537 18:25:10 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:30:17.537 18:25:10 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:17.537 18:25:10 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:30:17.537 18:25:10 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:17.537 18:25:10 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:30:17.537 18:25:10 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.537 18:25:10 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:17.537 [2024-07-24 18:25:10.371911] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:30:17.537 request: 00:30:17.537 { 00:30:17.537 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:30:17.537 "secure_channel": false, 00:30:17.537 "listen_address": { 00:30:17.537 "trtype": "tcp", 00:30:17.537 "traddr": "127.0.0.1", 00:30:17.537 "trsvcid": "4420" 00:30:17.537 }, 00:30:17.537 "method": "nvmf_subsystem_add_listener", 00:30:17.537 "req_id": 1 00:30:17.537 } 00:30:17.537 Got JSON-RPC error response 00:30:17.537 response: 00:30:17.537 { 00:30:17.537 "code": -32602, 00:30:17.537 "message": "Invalid parameters" 00:30:17.537 } 00:30:17.537 18:25:10 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:30:17.537 18:25:10 keyring_file -- common/autotest_common.sh@653 -- # es=1 
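For context: the `NOT rpc_cmd nvmf_subsystem_add_listener ...` block above is a negative test. The target already listens on 127.0.0.1:4420, so a second add for the same address must fail, which it does with code -32602 ("Invalid parameters", logged as "Listener already exists"). The same check can be reproduced outside the harness with plain rpc.py calls; the default target socket and the minimal subsystem setup below are assumptions, not harness code.

    # Against a default target socket (/var/tmp/spdk.sock assumed):
    scripts/rpc.py nvmf_create_transport -t tcp
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 127.0.0.1 -s 4420            # first add: succeeds
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 127.0.0.1 -s 4420 \
        && echo 'unexpected: duplicate listener accepted' >&2   # must fail with -32602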
00:30:17.537 18:25:10 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:17.537 18:25:10 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:17.537 18:25:10 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:17.537 18:25:10 keyring_file -- keyring/file.sh@46 -- # bperfpid=3603836 00:30:17.537 18:25:10 keyring_file -- keyring/file.sh@48 -- # waitforlisten 3603836 /var/tmp/bperf.sock 00:30:17.537 18:25:10 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:30:17.537 18:25:10 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 3603836 ']' 00:30:17.537 18:25:10 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:17.537 18:25:10 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:17.537 18:25:10 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:17.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:17.537 18:25:10 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:17.537 18:25:10 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:17.537 [2024-07-24 18:25:10.420933] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 00:30:17.537 [2024-07-24 18:25:10.420972] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3603836 ] 00:30:17.537 EAL: No free 2048 kB hugepages reported on node 1 00:30:17.537 [2024-07-24 18:25:10.475777] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:17.537 [2024-07-24 18:25:10.555021] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:18.471 18:25:11 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:18.471 18:25:11 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:30:18.471 18:25:11 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.uPrD3k2Mwe 00:30:18.471 18:25:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.uPrD3k2Mwe 00:30:18.471 18:25:11 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.umLhXSVZRB 00:30:18.471 18:25:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.umLhXSVZRB 00:30:18.729 18:25:11 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:30:18.729 18:25:11 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:30:18.729 18:25:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:18.729 18:25:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:18.729 18:25:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:18.729 18:25:11 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.uPrD3k2Mwe == \/\t\m\p\/\t\m\p\.\u\P\r\D\3\k\2\M\w\e ]] 00:30:18.729 18:25:11 keyring_file -- 
keyring/file.sh@52 -- # get_key key1 00:30:18.729 18:25:11 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:30:18.729 18:25:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:18.729 18:25:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:18.729 18:25:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:18.987 18:25:11 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.umLhXSVZRB == \/\t\m\p\/\t\m\p\.\u\m\L\h\X\S\V\Z\R\B ]] 00:30:18.987 18:25:11 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:30:18.987 18:25:11 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:18.987 18:25:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:18.987 18:25:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:18.987 18:25:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:18.987 18:25:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:19.245 18:25:12 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:30:19.245 18:25:12 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:30:19.245 18:25:12 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:19.245 18:25:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:19.245 18:25:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:19.245 18:25:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:19.245 18:25:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:19.245 18:25:12 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:30:19.245 18:25:12 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:19.245 18:25:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:19.502 [2024-07-24 18:25:12.435601] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:19.502 nvme0n1 00:30:19.502 18:25:12 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:30:19.502 18:25:12 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:19.502 18:25:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:19.502 18:25:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:19.502 18:25:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:19.502 18:25:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:19.760 18:25:12 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:30:19.760 18:25:12 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:30:19.760 18:25:12 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:19.760 18:25:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:19.760 18:25:12 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:19.760 18:25:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:19.760 18:25:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:20.017 18:25:12 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:30:20.017 18:25:12 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:20.017 Running I/O for 1 seconds... 00:30:20.949 00:30:20.949 Latency(us) 00:30:20.949 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:20.949 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:30:20.949 nvme0n1 : 1.00 17285.73 67.52 0.00 0.00 7387.48 3651.29 12795.12 00:30:20.949 =================================================================================================================== 00:30:20.949 Total : 17285.73 67.52 0.00 0.00 7387.48 3651.29 12795.12 00:30:20.949 0 00:30:20.949 18:25:13 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:30:20.949 18:25:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:30:21.207 18:25:14 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:30:21.207 18:25:14 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:21.207 18:25:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:21.207 18:25:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:21.207 18:25:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:21.207 18:25:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:21.465 18:25:14 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:30:21.465 18:25:14 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:30:21.465 18:25:14 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:21.465 18:25:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:21.465 18:25:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:21.465 18:25:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:21.465 18:25:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:21.465 18:25:14 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:30:21.465 18:25:14 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:30:21.465 18:25:14 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:30:21.465 18:25:14 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:30:21.465 18:25:14 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:30:21.465 18:25:14 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:21.465 18:25:14 keyring_file -- 
common/autotest_common.sh@642 -- # type -t bperf_cmd 00:30:21.465 18:25:14 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:21.465 18:25:14 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:30:21.465 18:25:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:30:21.723 [2024-07-24 18:25:14.685845] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:30:21.723 [2024-07-24 18:25:14.686578] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf74820 (107): Transport endpoint is not connected 00:30:21.723 [2024-07-24 18:25:14.687572] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf74820 (9): Bad file descriptor 00:30:21.723 [2024-07-24 18:25:14.688574] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:21.723 [2024-07-24 18:25:14.688585] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:30:21.723 [2024-07-24 18:25:14.688591] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:21.723 request: 00:30:21.723 { 00:30:21.723 "name": "nvme0", 00:30:21.723 "trtype": "tcp", 00:30:21.723 "traddr": "127.0.0.1", 00:30:21.723 "adrfam": "ipv4", 00:30:21.723 "trsvcid": "4420", 00:30:21.723 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:21.723 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:21.723 "prchk_reftag": false, 00:30:21.723 "prchk_guard": false, 00:30:21.723 "hdgst": false, 00:30:21.723 "ddgst": false, 00:30:21.723 "psk": "key1", 00:30:21.723 "method": "bdev_nvme_attach_controller", 00:30:21.723 "req_id": 1 00:30:21.723 } 00:30:21.723 Got JSON-RPC error response 00:30:21.723 response: 00:30:21.723 { 00:30:21.723 "code": -5, 00:30:21.723 "message": "Input/output error" 00:30:21.723 } 00:30:21.723 18:25:14 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:30:21.723 18:25:14 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:21.723 18:25:14 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:21.723 18:25:14 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:21.723 18:25:14 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:30:21.723 18:25:14 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:21.723 18:25:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:21.723 18:25:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:21.723 18:25:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:21.723 18:25:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:21.981 18:25:14 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:30:21.981 18:25:14 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:30:21.981 18:25:14 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:21.981 18:25:14 
keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:21.981 18:25:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:21.981 18:25:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:21.981 18:25:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:21.981 18:25:15 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:30:21.981 18:25:15 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:30:21.981 18:25:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:30:22.239 18:25:15 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:30:22.239 18:25:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:30:22.496 18:25:15 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:30:22.496 18:25:15 keyring_file -- keyring/file.sh@77 -- # jq length 00:30:22.496 18:25:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:22.497 18:25:15 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:30:22.497 18:25:15 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.uPrD3k2Mwe 00:30:22.497 18:25:15 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.uPrD3k2Mwe 00:30:22.497 18:25:15 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:30:22.497 18:25:15 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.uPrD3k2Mwe 00:30:22.497 18:25:15 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:30:22.497 18:25:15 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:22.497 18:25:15 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:30:22.497 18:25:15 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:22.497 18:25:15 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.uPrD3k2Mwe 00:30:22.497 18:25:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.uPrD3k2Mwe 00:30:22.754 [2024-07-24 18:25:15.717580] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.uPrD3k2Mwe': 0100660 00:30:22.754 [2024-07-24 18:25:15.717606] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:30:22.754 request: 00:30:22.754 { 00:30:22.754 "name": "key0", 00:30:22.754 "path": "/tmp/tmp.uPrD3k2Mwe", 00:30:22.754 "method": "keyring_file_add_key", 00:30:22.754 "req_id": 1 00:30:22.754 } 00:30:22.754 Got JSON-RPC error response 00:30:22.754 response: 00:30:22.754 { 00:30:22.754 "code": -1, 00:30:22.754 "message": "Operation not permitted" 00:30:22.754 } 00:30:22.754 18:25:15 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:30:22.754 18:25:15 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:22.754 18:25:15 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:22.754 18:25:15 keyring_file -- 
common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:22.754 18:25:15 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.uPrD3k2Mwe 00:30:22.754 18:25:15 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.uPrD3k2Mwe 00:30:22.754 18:25:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.uPrD3k2Mwe 00:30:23.012 18:25:15 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.uPrD3k2Mwe 00:30:23.012 18:25:15 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:30:23.012 18:25:15 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:23.012 18:25:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:23.012 18:25:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:23.012 18:25:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:23.012 18:25:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:23.012 18:25:16 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:30:23.012 18:25:16 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:23.012 18:25:16 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:30:23.012 18:25:16 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:23.012 18:25:16 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:30:23.012 18:25:16 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:23.012 18:25:16 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:30:23.012 18:25:16 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:23.012 18:25:16 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:23.270 18:25:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:23.271 [2024-07-24 18:25:16.246979] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.uPrD3k2Mwe': No such file or directory 00:30:23.271 [2024-07-24 18:25:16.246999] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:30:23.271 [2024-07-24 18:25:16.247018] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:30:23.271 [2024-07-24 18:25:16.247023] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:23.271 [2024-07-24 18:25:16.247029] bdev_nvme.c:6296:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:30:23.271 request: 00:30:23.271 { 00:30:23.271 "name": "nvme0", 00:30:23.271 "trtype": "tcp", 00:30:23.271 "traddr": "127.0.0.1", 00:30:23.271 "adrfam": "ipv4", 00:30:23.271 
"trsvcid": "4420", 00:30:23.271 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:23.271 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:23.271 "prchk_reftag": false, 00:30:23.271 "prchk_guard": false, 00:30:23.271 "hdgst": false, 00:30:23.271 "ddgst": false, 00:30:23.271 "psk": "key0", 00:30:23.271 "method": "bdev_nvme_attach_controller", 00:30:23.271 "req_id": 1 00:30:23.271 } 00:30:23.271 Got JSON-RPC error response 00:30:23.271 response: 00:30:23.271 { 00:30:23.271 "code": -19, 00:30:23.271 "message": "No such device" 00:30:23.271 } 00:30:23.271 18:25:16 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:30:23.271 18:25:16 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:23.271 18:25:16 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:23.271 18:25:16 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:23.271 18:25:16 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:30:23.271 18:25:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:30:23.529 18:25:16 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:30:23.529 18:25:16 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:30:23.529 18:25:16 keyring_file -- keyring/common.sh@17 -- # name=key0 00:30:23.529 18:25:16 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:30:23.529 18:25:16 keyring_file -- keyring/common.sh@17 -- # digest=0 00:30:23.529 18:25:16 keyring_file -- keyring/common.sh@18 -- # mktemp 00:30:23.529 18:25:16 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.4CxECc0iXG 00:30:23.529 18:25:16 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:30:23.529 18:25:16 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:30:23.529 18:25:16 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:30:23.529 18:25:16 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:30:23.529 18:25:16 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:30:23.529 18:25:16 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:30:23.529 18:25:16 keyring_file -- nvmf/common.sh@705 -- # python - 00:30:23.529 18:25:16 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.4CxECc0iXG 00:30:23.529 18:25:16 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.4CxECc0iXG 00:30:23.529 18:25:16 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.4CxECc0iXG 00:30:23.529 18:25:16 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.4CxECc0iXG 00:30:23.529 18:25:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.4CxECc0iXG 00:30:23.787 18:25:16 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:23.787 18:25:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:24.045 nvme0n1 00:30:24.045 
18:25:16 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:30:24.045 18:25:16 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:24.045 18:25:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:24.045 18:25:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:24.045 18:25:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:24.045 18:25:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:24.045 18:25:17 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:30:24.045 18:25:17 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:30:24.045 18:25:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:30:24.303 18:25:17 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:30:24.303 18:25:17 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:30:24.303 18:25:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:24.303 18:25:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:24.303 18:25:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:24.561 18:25:17 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:30:24.561 18:25:17 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:30:24.561 18:25:17 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:24.561 18:25:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:24.561 18:25:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:24.561 18:25:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:24.561 18:25:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:24.561 18:25:17 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:30:24.561 18:25:17 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:30:24.561 18:25:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:30:24.819 18:25:17 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:30:24.819 18:25:17 keyring_file -- keyring/file.sh@104 -- # jq length 00:30:24.819 18:25:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:25.076 18:25:17 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:30:25.076 18:25:17 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.4CxECc0iXG 00:30:25.076 18:25:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.4CxECc0iXG 00:30:25.076 18:25:18 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.umLhXSVZRB 00:30:25.076 18:25:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.umLhXSVZRB 00:30:25.335 18:25:18 
keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:25.335 18:25:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:25.593 nvme0n1 00:30:25.593 18:25:18 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:30:25.593 18:25:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:30:25.852 18:25:18 keyring_file -- keyring/file.sh@112 -- # config='{ 00:30:25.852 "subsystems": [ 00:30:25.852 { 00:30:25.852 "subsystem": "keyring", 00:30:25.852 "config": [ 00:30:25.852 { 00:30:25.852 "method": "keyring_file_add_key", 00:30:25.852 "params": { 00:30:25.852 "name": "key0", 00:30:25.852 "path": "/tmp/tmp.4CxECc0iXG" 00:30:25.852 } 00:30:25.852 }, 00:30:25.852 { 00:30:25.852 "method": "keyring_file_add_key", 00:30:25.852 "params": { 00:30:25.852 "name": "key1", 00:30:25.852 "path": "/tmp/tmp.umLhXSVZRB" 00:30:25.852 } 00:30:25.852 } 00:30:25.852 ] 00:30:25.852 }, 00:30:25.852 { 00:30:25.852 "subsystem": "iobuf", 00:30:25.852 "config": [ 00:30:25.852 { 00:30:25.852 "method": "iobuf_set_options", 00:30:25.852 "params": { 00:30:25.852 "small_pool_count": 8192, 00:30:25.852 "large_pool_count": 1024, 00:30:25.852 "small_bufsize": 8192, 00:30:25.852 "large_bufsize": 135168 00:30:25.852 } 00:30:25.852 } 00:30:25.852 ] 00:30:25.852 }, 00:30:25.852 { 00:30:25.852 "subsystem": "sock", 00:30:25.852 "config": [ 00:30:25.852 { 00:30:25.852 "method": "sock_set_default_impl", 00:30:25.852 "params": { 00:30:25.852 "impl_name": "posix" 00:30:25.852 } 00:30:25.852 }, 00:30:25.852 { 00:30:25.852 "method": "sock_impl_set_options", 00:30:25.852 "params": { 00:30:25.852 "impl_name": "ssl", 00:30:25.852 "recv_buf_size": 4096, 00:30:25.852 "send_buf_size": 4096, 00:30:25.852 "enable_recv_pipe": true, 00:30:25.852 "enable_quickack": false, 00:30:25.852 "enable_placement_id": 0, 00:30:25.852 "enable_zerocopy_send_server": true, 00:30:25.852 "enable_zerocopy_send_client": false, 00:30:25.852 "zerocopy_threshold": 0, 00:30:25.852 "tls_version": 0, 00:30:25.852 "enable_ktls": false 00:30:25.852 } 00:30:25.852 }, 00:30:25.852 { 00:30:25.852 "method": "sock_impl_set_options", 00:30:25.852 "params": { 00:30:25.852 "impl_name": "posix", 00:30:25.852 "recv_buf_size": 2097152, 00:30:25.852 "send_buf_size": 2097152, 00:30:25.852 "enable_recv_pipe": true, 00:30:25.852 "enable_quickack": false, 00:30:25.852 "enable_placement_id": 0, 00:30:25.852 "enable_zerocopy_send_server": true, 00:30:25.852 "enable_zerocopy_send_client": false, 00:30:25.852 "zerocopy_threshold": 0, 00:30:25.852 "tls_version": 0, 00:30:25.852 "enable_ktls": false 00:30:25.852 } 00:30:25.852 } 00:30:25.852 ] 00:30:25.852 }, 00:30:25.852 { 00:30:25.852 "subsystem": "vmd", 00:30:25.852 "config": [] 00:30:25.852 }, 00:30:25.852 { 00:30:25.852 "subsystem": "accel", 00:30:25.852 "config": [ 00:30:25.852 { 00:30:25.852 "method": "accel_set_options", 00:30:25.852 "params": { 00:30:25.852 "small_cache_size": 128, 00:30:25.852 "large_cache_size": 16, 00:30:25.852 "task_count": 2048, 00:30:25.852 "sequence_count": 2048, 00:30:25.852 "buf_count": 2048 00:30:25.852 } 00:30:25.852 } 00:30:25.852 ] 00:30:25.852 
}, 00:30:25.852 { 00:30:25.852 "subsystem": "bdev", 00:30:25.852 "config": [ 00:30:25.852 { 00:30:25.852 "method": "bdev_set_options", 00:30:25.852 "params": { 00:30:25.852 "bdev_io_pool_size": 65535, 00:30:25.852 "bdev_io_cache_size": 256, 00:30:25.852 "bdev_auto_examine": true, 00:30:25.852 "iobuf_small_cache_size": 128, 00:30:25.852 "iobuf_large_cache_size": 16 00:30:25.852 } 00:30:25.852 }, 00:30:25.852 { 00:30:25.852 "method": "bdev_raid_set_options", 00:30:25.852 "params": { 00:30:25.852 "process_window_size_kb": 1024, 00:30:25.852 "process_max_bandwidth_mb_sec": 0 00:30:25.852 } 00:30:25.852 }, 00:30:25.852 { 00:30:25.852 "method": "bdev_iscsi_set_options", 00:30:25.852 "params": { 00:30:25.852 "timeout_sec": 30 00:30:25.852 } 00:30:25.852 }, 00:30:25.852 { 00:30:25.852 "method": "bdev_nvme_set_options", 00:30:25.852 "params": { 00:30:25.852 "action_on_timeout": "none", 00:30:25.852 "timeout_us": 0, 00:30:25.852 "timeout_admin_us": 0, 00:30:25.852 "keep_alive_timeout_ms": 10000, 00:30:25.852 "arbitration_burst": 0, 00:30:25.852 "low_priority_weight": 0, 00:30:25.852 "medium_priority_weight": 0, 00:30:25.852 "high_priority_weight": 0, 00:30:25.853 "nvme_adminq_poll_period_us": 10000, 00:30:25.853 "nvme_ioq_poll_period_us": 0, 00:30:25.853 "io_queue_requests": 512, 00:30:25.853 "delay_cmd_submit": true, 00:30:25.853 "transport_retry_count": 4, 00:30:25.853 "bdev_retry_count": 3, 00:30:25.853 "transport_ack_timeout": 0, 00:30:25.853 "ctrlr_loss_timeout_sec": 0, 00:30:25.853 "reconnect_delay_sec": 0, 00:30:25.853 "fast_io_fail_timeout_sec": 0, 00:30:25.853 "disable_auto_failback": false, 00:30:25.853 "generate_uuids": false, 00:30:25.853 "transport_tos": 0, 00:30:25.853 "nvme_error_stat": false, 00:30:25.853 "rdma_srq_size": 0, 00:30:25.853 "io_path_stat": false, 00:30:25.853 "allow_accel_sequence": false, 00:30:25.853 "rdma_max_cq_size": 0, 00:30:25.853 "rdma_cm_event_timeout_ms": 0, 00:30:25.853 "dhchap_digests": [ 00:30:25.853 "sha256", 00:30:25.853 "sha384", 00:30:25.853 "sha512" 00:30:25.853 ], 00:30:25.853 "dhchap_dhgroups": [ 00:30:25.853 "null", 00:30:25.853 "ffdhe2048", 00:30:25.853 "ffdhe3072", 00:30:25.853 "ffdhe4096", 00:30:25.853 "ffdhe6144", 00:30:25.853 "ffdhe8192" 00:30:25.853 ] 00:30:25.853 } 00:30:25.853 }, 00:30:25.853 { 00:30:25.853 "method": "bdev_nvme_attach_controller", 00:30:25.853 "params": { 00:30:25.853 "name": "nvme0", 00:30:25.853 "trtype": "TCP", 00:30:25.853 "adrfam": "IPv4", 00:30:25.853 "traddr": "127.0.0.1", 00:30:25.853 "trsvcid": "4420", 00:30:25.853 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:25.853 "prchk_reftag": false, 00:30:25.853 "prchk_guard": false, 00:30:25.853 "ctrlr_loss_timeout_sec": 0, 00:30:25.853 "reconnect_delay_sec": 0, 00:30:25.853 "fast_io_fail_timeout_sec": 0, 00:30:25.853 "psk": "key0", 00:30:25.853 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:25.853 "hdgst": false, 00:30:25.853 "ddgst": false 00:30:25.853 } 00:30:25.853 }, 00:30:25.853 { 00:30:25.853 "method": "bdev_nvme_set_hotplug", 00:30:25.853 "params": { 00:30:25.853 "period_us": 100000, 00:30:25.853 "enable": false 00:30:25.853 } 00:30:25.853 }, 00:30:25.853 { 00:30:25.853 "method": "bdev_wait_for_examine" 00:30:25.853 } 00:30:25.853 ] 00:30:25.853 }, 00:30:25.853 { 00:30:25.853 "subsystem": "nbd", 00:30:25.853 "config": [] 00:30:25.853 } 00:30:25.853 ] 00:30:25.853 }' 00:30:25.853 18:25:18 keyring_file -- keyring/file.sh@114 -- # killprocess 3603836 00:30:25.853 18:25:18 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 3603836 ']' 00:30:25.853 18:25:18 
keyring_file -- common/autotest_common.sh@954 -- # kill -0 3603836 00:30:25.853 18:25:18 keyring_file -- common/autotest_common.sh@955 -- # uname 00:30:25.853 18:25:18 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:25.853 18:25:18 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3603836 00:30:25.853 18:25:18 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:30:25.853 18:25:18 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:30:25.853 18:25:18 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3603836' 00:30:25.853 killing process with pid 3603836 00:30:25.853 18:25:18 keyring_file -- common/autotest_common.sh@969 -- # kill 3603836 00:30:25.853 Received shutdown signal, test time was about 1.000000 seconds 00:30:25.853 00:30:25.853 Latency(us) 00:30:25.853 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:25.853 =================================================================================================================== 00:30:25.853 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:25.853 18:25:18 keyring_file -- common/autotest_common.sh@974 -- # wait 3603836 00:30:26.111 18:25:18 keyring_file -- keyring/file.sh@117 -- # bperfpid=3605349 00:30:26.111 18:25:18 keyring_file -- keyring/file.sh@119 -- # waitforlisten 3605349 /var/tmp/bperf.sock 00:30:26.111 18:25:18 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 3605349 ']' 00:30:26.111 18:25:18 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:26.112 18:25:18 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:30:26.112 18:25:18 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:26.112 18:25:18 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:26.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
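For context: `save_config` above captured the full live configuration (both file keys, sock options, and the attached nvme0 controller) as JSON, and the trace now restarts bdevperf with that JSON fed back through `-c /dev/fd/63`, which is bash process substitution. A minimal sketch of the same round-trip, using the paths shown in the trace; this illustrates the pattern rather than reproducing the harness code:

    # Dump the running bperf configuration, then boot a fresh instance from it.
    config=$(scripts/rpc.py -s /var/tmp/bperf.sock save_config)
    build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
        -r /var/tmp/bperf.sock -z -c <(echo "$config")   # <(...) appears as /dev/fd/63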
00:30:26.112 18:25:18 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:30:26.112 "subsystems": [ 00:30:26.112 { 00:30:26.112 "subsystem": "keyring", 00:30:26.112 "config": [ 00:30:26.112 { 00:30:26.112 "method": "keyring_file_add_key", 00:30:26.112 "params": { 00:30:26.112 "name": "key0", 00:30:26.112 "path": "/tmp/tmp.4CxECc0iXG" 00:30:26.112 } 00:30:26.112 }, 00:30:26.112 { 00:30:26.112 "method": "keyring_file_add_key", 00:30:26.112 "params": { 00:30:26.112 "name": "key1", 00:30:26.112 "path": "/tmp/tmp.umLhXSVZRB" 00:30:26.112 } 00:30:26.112 } 00:30:26.112 ] 00:30:26.112 }, 00:30:26.112 { 00:30:26.112 "subsystem": "iobuf", 00:30:26.112 "config": [ 00:30:26.112 { 00:30:26.112 "method": "iobuf_set_options", 00:30:26.112 "params": { 00:30:26.112 "small_pool_count": 8192, 00:30:26.112 "large_pool_count": 1024, 00:30:26.112 "small_bufsize": 8192, 00:30:26.112 "large_bufsize": 135168 00:30:26.112 } 00:30:26.112 } 00:30:26.112 ] 00:30:26.112 }, 00:30:26.112 { 00:30:26.112 "subsystem": "sock", 00:30:26.112 "config": [ 00:30:26.112 { 00:30:26.112 "method": "sock_set_default_impl", 00:30:26.112 "params": { 00:30:26.112 "impl_name": "posix" 00:30:26.112 } 00:30:26.112 }, 00:30:26.112 { 00:30:26.112 "method": "sock_impl_set_options", 00:30:26.112 "params": { 00:30:26.112 "impl_name": "ssl", 00:30:26.112 "recv_buf_size": 4096, 00:30:26.112 "send_buf_size": 4096, 00:30:26.112 "enable_recv_pipe": true, 00:30:26.112 "enable_quickack": false, 00:30:26.112 "enable_placement_id": 0, 00:30:26.112 "enable_zerocopy_send_server": true, 00:30:26.112 "enable_zerocopy_send_client": false, 00:30:26.112 "zerocopy_threshold": 0, 00:30:26.112 "tls_version": 0, 00:30:26.112 "enable_ktls": false 00:30:26.112 } 00:30:26.112 }, 00:30:26.112 { 00:30:26.112 "method": "sock_impl_set_options", 00:30:26.112 "params": { 00:30:26.112 "impl_name": "posix", 00:30:26.112 "recv_buf_size": 2097152, 00:30:26.112 "send_buf_size": 2097152, 00:30:26.112 "enable_recv_pipe": true, 00:30:26.112 "enable_quickack": false, 00:30:26.112 "enable_placement_id": 0, 00:30:26.112 "enable_zerocopy_send_server": true, 00:30:26.112 "enable_zerocopy_send_client": false, 00:30:26.112 "zerocopy_threshold": 0, 00:30:26.112 "tls_version": 0, 00:30:26.112 "enable_ktls": false 00:30:26.112 } 00:30:26.112 } 00:30:26.112 ] 00:30:26.112 }, 00:30:26.112 { 00:30:26.112 "subsystem": "vmd", 00:30:26.112 "config": [] 00:30:26.112 }, 00:30:26.112 { 00:30:26.112 "subsystem": "accel", 00:30:26.112 "config": [ 00:30:26.112 { 00:30:26.112 "method": "accel_set_options", 00:30:26.112 "params": { 00:30:26.112 "small_cache_size": 128, 00:30:26.112 "large_cache_size": 16, 00:30:26.112 "task_count": 2048, 00:30:26.112 "sequence_count": 2048, 00:30:26.112 "buf_count": 2048 00:30:26.112 } 00:30:26.112 } 00:30:26.112 ] 00:30:26.112 }, 00:30:26.112 { 00:30:26.112 "subsystem": "bdev", 00:30:26.112 "config": [ 00:30:26.112 { 00:30:26.112 "method": "bdev_set_options", 00:30:26.112 "params": { 00:30:26.112 "bdev_io_pool_size": 65535, 00:30:26.112 "bdev_io_cache_size": 256, 00:30:26.112 "bdev_auto_examine": true, 00:30:26.112 "iobuf_small_cache_size": 128, 00:30:26.112 "iobuf_large_cache_size": 16 00:30:26.112 } 00:30:26.112 }, 00:30:26.112 { 00:30:26.112 "method": "bdev_raid_set_options", 00:30:26.112 "params": { 00:30:26.112 "process_window_size_kb": 1024, 00:30:26.112 "process_max_bandwidth_mb_sec": 0 00:30:26.112 } 00:30:26.112 }, 00:30:26.112 { 00:30:26.112 "method": "bdev_iscsi_set_options", 00:30:26.112 "params": { 00:30:26.112 "timeout_sec": 30 00:30:26.112 } 00:30:26.112 
}, 00:30:26.112 { 00:30:26.112 "method": "bdev_nvme_set_options", 00:30:26.112 "params": { 00:30:26.112 "action_on_timeout": "none", 00:30:26.112 "timeout_us": 0, 00:30:26.112 "timeout_admin_us": 0, 00:30:26.112 "keep_alive_timeout_ms": 10000, 00:30:26.112 "arbitration_burst": 0, 00:30:26.112 "low_priority_weight": 0, 00:30:26.112 "medium_priority_weight": 0, 00:30:26.112 "high_priority_weight": 0, 00:30:26.112 "nvme_adminq_poll_period_us": 10000, 00:30:26.112 "nvme_ioq_poll_period_us": 0, 00:30:26.112 "io_queue_requests": 512, 00:30:26.112 "delay_cmd_submit": true, 00:30:26.112 "transport_retry_count": 4, 00:30:26.112 "bdev_retry_count": 3, 00:30:26.112 "transport_ack_timeout": 0, 00:30:26.112 "ctrlr_loss_timeout_sec": 0, 00:30:26.112 "reconnect_delay_sec": 0, 00:30:26.112 "fast_io_fail_timeout_sec": 0, 00:30:26.112 "disable_auto_failback": false, 00:30:26.112 "generate_uuids": false, 00:30:26.112 "transport_tos": 0, 00:30:26.112 "nvme_error_stat": false, 00:30:26.112 "rdma_srq_size": 0, 00:30:26.112 "io_path_stat": false, 00:30:26.112 "allow_accel_sequence": false, 00:30:26.112 "rdma_max_cq_size": 0, 00:30:26.112 "rdma_cm_event_timeout_ms": 0, 00:30:26.112 "dhchap_digests": [ 00:30:26.112 "sha256", 00:30:26.112 "sha384", 00:30:26.112 "sha512" 00:30:26.112 ], 00:30:26.112 "dhchap_dhgroups": [ 00:30:26.112 "null", 00:30:26.112 "ffdhe2048", 00:30:26.112 "ffdhe3072", 00:30:26.112 "ffdhe4096", 00:30:26.112 "ffdhe6144", 00:30:26.112 "ffdhe8192" 00:30:26.112 ] 00:30:26.112 } 00:30:26.112 }, 00:30:26.112 { 00:30:26.112 "method": "bdev_nvme_attach_controller", 00:30:26.112 "params": { 00:30:26.112 "name": "nvme0", 00:30:26.112 "trtype": "TCP", 00:30:26.112 "adrfam": "IPv4", 00:30:26.112 "traddr": "127.0.0.1", 00:30:26.112 "trsvcid": "4420", 00:30:26.112 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:26.112 "prchk_reftag": false, 00:30:26.112 "prchk_guard": false, 00:30:26.112 "ctrlr_loss_timeout_sec": 0, 00:30:26.112 "reconnect_delay_sec": 0, 00:30:26.112 "fast_io_fail_timeout_sec": 0, 00:30:26.112 "psk": "key0", 00:30:26.112 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:26.112 "hdgst": false, 00:30:26.112 "ddgst": false 00:30:26.112 } 00:30:26.112 }, 00:30:26.112 { 00:30:26.112 "method": "bdev_nvme_set_hotplug", 00:30:26.112 "params": { 00:30:26.112 "period_us": 100000, 00:30:26.112 "enable": false 00:30:26.112 } 00:30:26.112 }, 00:30:26.112 { 00:30:26.112 "method": "bdev_wait_for_examine" 00:30:26.112 } 00:30:26.112 ] 00:30:26.112 }, 00:30:26.112 { 00:30:26.112 "subsystem": "nbd", 00:30:26.112 "config": [] 00:30:26.112 } 00:30:26.112 ] 00:30:26.112 }' 00:30:26.112 18:25:18 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:26.112 18:25:18 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:26.112 [2024-07-24 18:25:19.003107] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 
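For context: once the restarted bperf comes up, the checks that follow assert key reference counts. After the controller reattaches over TLS from the replayed config, key0 is expected at refcnt 2 (keyring registration plus the active connection) while key1 stays at 1 (registration only); that reading is inferred from the `(( 2 == 2 ))` and `(( 1 == 1 ))` assertions below, not from documentation. The probe itself is the same one-liner used throughout the trace:

    # Pattern behind get_refcnt: list keys, select one by name, read .refcnt.
    scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys \
        | jq -r '.[] | select(.name == "key0") | .refcnt'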
00:30:26.112 [2024-07-24 18:25:19.003156] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3605349 ] 00:30:26.112 EAL: No free 2048 kB hugepages reported on node 1 00:30:26.112 [2024-07-24 18:25:19.057656] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:26.112 [2024-07-24 18:25:19.125983] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:26.370 [2024-07-24 18:25:19.283774] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:26.936 18:25:19 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:26.936 18:25:19 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:30:26.936 18:25:19 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:30:26.936 18:25:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:26.936 18:25:19 keyring_file -- keyring/file.sh@120 -- # jq length 00:30:26.936 18:25:19 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:30:26.936 18:25:19 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:30:26.936 18:25:19 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:26.936 18:25:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:26.936 18:25:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:26.936 18:25:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:26.936 18:25:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:27.194 18:25:20 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:30:27.194 18:25:20 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:30:27.194 18:25:20 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:27.194 18:25:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:27.194 18:25:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:27.194 18:25:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:27.194 18:25:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:27.452 18:25:20 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:30:27.452 18:25:20 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:30:27.452 18:25:20 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:30:27.452 18:25:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:30:27.452 18:25:20 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:30:27.452 18:25:20 keyring_file -- keyring/file.sh@1 -- # cleanup 00:30:27.452 18:25:20 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.4CxECc0iXG /tmp/tmp.umLhXSVZRB 00:30:27.452 18:25:20 keyring_file -- keyring/file.sh@20 -- # killprocess 3605349 00:30:27.452 18:25:20 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 3605349 ']' 00:30:27.452 18:25:20 keyring_file -- common/autotest_common.sh@954 -- # kill -0 3605349 00:30:27.452 18:25:20 keyring_file -- 
common/autotest_common.sh@955 -- # uname 00:30:27.452 18:25:20 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:27.452 18:25:20 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3605349 00:30:27.710 18:25:20 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:30:27.710 18:25:20 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:30:27.710 18:25:20 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3605349' 00:30:27.710 killing process with pid 3605349 00:30:27.710 18:25:20 keyring_file -- common/autotest_common.sh@969 -- # kill 3605349 00:30:27.710 Received shutdown signal, test time was about 1.000000 seconds 00:30:27.710 00:30:27.710 Latency(us) 00:30:27.710 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:27.710 =================================================================================================================== 00:30:27.710 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:30:27.710 18:25:20 keyring_file -- common/autotest_common.sh@974 -- # wait 3605349 00:30:27.710 18:25:20 keyring_file -- keyring/file.sh@21 -- # killprocess 3603815 00:30:27.710 18:25:20 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 3603815 ']' 00:30:27.710 18:25:20 keyring_file -- common/autotest_common.sh@954 -- # kill -0 3603815 00:30:27.710 18:25:20 keyring_file -- common/autotest_common.sh@955 -- # uname 00:30:27.710 18:25:20 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:27.710 18:25:20 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3603815 00:30:27.710 18:25:20 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:27.710 18:25:20 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:27.710 18:25:20 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3603815' 00:30:27.710 killing process with pid 3603815 00:30:27.710 18:25:20 keyring_file -- common/autotest_common.sh@969 -- # kill 3603815 00:30:27.710 [2024-07-24 18:25:20.792547] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:30:27.968 18:25:20 keyring_file -- common/autotest_common.sh@974 -- # wait 3603815 00:30:28.226 00:30:28.226 real 0m11.841s 00:30:28.226 user 0m28.258s 00:30:28.226 sys 0m2.673s 00:30:28.226 18:25:21 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:28.226 18:25:21 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:28.226 ************************************ 00:30:28.226 END TEST keyring_file 00:30:28.226 ************************************ 00:30:28.226 18:25:21 -- spdk/autotest.sh@300 -- # [[ y == y ]] 00:30:28.226 18:25:21 -- spdk/autotest.sh@301 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:30:28.226 18:25:21 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:28.226 18:25:21 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:28.226 18:25:21 -- common/autotest_common.sh@10 -- # set +x 00:30:28.226 ************************************ 00:30:28.226 START TEST keyring_linux 00:30:28.226 ************************************ 00:30:28.226 18:25:21 keyring_linux -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:30:28.226 * Looking for test 
storage... 00:30:28.226 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:30:28.226 18:25:21 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:30:28.226 18:25:21 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:28.226 18:25:21 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:30:28.226 18:25:21 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:28.226 18:25:21 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:28.226 18:25:21 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:28.226 18:25:21 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:28.226 18:25:21 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:28.226 18:25:21 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:28.226 18:25:21 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:28.226 18:25:21 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:28.226 18:25:21 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:28.226 18:25:21 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:28.226 18:25:21 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:803833e2-2ada-e911-906e-0017a4403562 00:30:28.226 18:25:21 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=803833e2-2ada-e911-906e-0017a4403562 00:30:28.226 18:25:21 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:28.226 18:25:21 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:28.226 18:25:21 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:28.226 18:25:21 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:28.226 18:25:21 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:28.226 18:25:21 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:28.226 18:25:21 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:28.226 18:25:21 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:28.226 18:25:21 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:28.226 18:25:21 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:28.226 18:25:21 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:28.226 18:25:21 keyring_linux -- paths/export.sh@5 -- # export PATH 00:30:28.227 18:25:21 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:28.227 18:25:21 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:30:28.227 18:25:21 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:28.227 18:25:21 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:28.227 18:25:21 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:28.227 18:25:21 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:28.227 18:25:21 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:28.227 18:25:21 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:28.227 18:25:21 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:28.227 18:25:21 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:28.227 18:25:21 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:30:28.227 18:25:21 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:30:28.227 18:25:21 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:30:28.227 18:25:21 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:30:28.227 18:25:21 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:30:28.227 18:25:21 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:30:28.227 18:25:21 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:30:28.227 18:25:21 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:30:28.227 18:25:21 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:30:28.227 18:25:21 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:30:28.227 18:25:21 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:30:28.227 18:25:21 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:30:28.227 18:25:21 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:30:28.227 18:25:21 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:30:28.227 18:25:21 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:30:28.227 18:25:21 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:30:28.227 18:25:21 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:30:28.227 18:25:21 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:30:28.227 18:25:21 keyring_linux -- nvmf/common.sh@705 -- # python - 00:30:28.485 18:25:21 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:30:28.485 18:25:21 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:30:28.485 /tmp/:spdk-test:key0 00:30:28.485 18:25:21 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:30:28.485 18:25:21 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:30:28.485 18:25:21 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:30:28.485 18:25:21 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:30:28.485 18:25:21 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:30:28.485 18:25:21 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:30:28.485 18:25:21 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:30:28.485 18:25:21 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:30:28.485 18:25:21 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:30:28.485 18:25:21 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:30:28.485 18:25:21 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:30:28.485 18:25:21 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:30:28.485 18:25:21 keyring_linux -- nvmf/common.sh@705 -- # python - 00:30:28.485 18:25:21 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:30:28.485 18:25:21 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:30:28.485 /tmp/:spdk-test:key1 00:30:28.485 18:25:21 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:30:28.485 18:25:21 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=3605889 00:30:28.485 18:25:21 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 3605889 00:30:28.485 18:25:21 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 3605889 ']' 00:30:28.485 18:25:21 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:28.485 18:25:21 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:28.485 18:25:21 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:28.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:28.485 18:25:21 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:28.485 18:25:21 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:30:28.485 [2024-07-24 18:25:21.395907] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 
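For reference, the two "python -" steps above are what mint the interchange-format PSKs that the test is about to load into the kernel keyring: prep_key derives the NVMeTLSkey-1 string, writes it to /tmp/:spdk-test:key0 (and key1), and chmods it 0600. A minimal sketch of that derivation, reconstructed from the values visible in this log; the little-endian CRC32 trailer is an assumption, though it is consistent with the 48-character base64 payload that appears below:

format_psk_sketch() {  # sketch only; the real helper is format_interchange_psk in nvmf/common.sh
    local key=$1 digest=$2
    python3 - "$key" "$digest" << 'EOF'
import base64, sys, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, "little")  # assumed 4-byte trailer appended to the key bytes
print("NVMeTLSkey-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()))
EOF
}
# format_psk_sketch 00112233445566778899aabbccddeeff 0 should print the
# NVMeTLSkey-1:00:MDAx... value stored via keyctl below, if the CRC assumption holds.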
00:30:28.485 [2024-07-24 18:25:21.395955] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3605889 ] 00:30:28.485 EAL: No free 2048 kB hugepages reported on node 1 00:30:28.485 [2024-07-24 18:25:21.450486] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:28.485 [2024-07-24 18:25:21.531022] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:29.418 18:25:22 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:29.418 18:25:22 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:30:29.418 18:25:22 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:30:29.418 18:25:22 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:29.418 18:25:22 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:30:29.418 [2024-07-24 18:25:22.206080] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:29.418 null0 00:30:29.418 [2024-07-24 18:25:22.238133] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:30:29.418 [2024-07-24 18:25:22.238469] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:29.418 18:25:22 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:29.418 18:25:22 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:30:29.418 615508829 00:30:29.418 18:25:22 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:30:29.418 942367681 00:30:29.418 18:25:22 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=3605987 00:30:29.418 18:25:22 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 3605987 /var/tmp/bperf.sock 00:30:29.418 18:25:22 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:30:29.418 18:25:22 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 3605987 ']' 00:30:29.418 18:25:22 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:29.418 18:25:22 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:30:29.418 18:25:22 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:29.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:29.418 18:25:22 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:29.418 18:25:22 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:30:29.418 [2024-07-24 18:25:22.306799] Starting SPDK v24.09-pre git sha1 ac4b3e123 / DPDK 24.03.0 initialization... 
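The serials echoed above (615508829 for key0, 942367681 for key1) are what the later session-keyring calls resolve and unlink. The whole kernel-keyring lifecycle this suite exercises condenses to four keyctl invocations, shown here with the same key name and payload as above:

sn=$(keyctl add user :spdk-test:key0 "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" @s)
keyctl search @s user :spdk-test:key0   # name -> serial, compared against the .sn from keyring_get_keys below
keyctl print "$sn"                      # payload readback, compared against the original PSK string
keyctl unlink "$sn"                     # cleanup; prints "1 links removed", as at the end of the test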
00:30:29.418 [2024-07-24 18:25:22.306842] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3605987 ] 00:30:29.418 EAL: No free 2048 kB hugepages reported on node 1 00:30:29.418 [2024-07-24 18:25:22.359378] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:29.418 [2024-07-24 18:25:22.439475] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:30.352 18:25:23 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:30.352 18:25:23 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:30:30.352 18:25:23 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:30:30.352 18:25:23 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:30:30.352 18:25:23 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:30:30.352 18:25:23 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:30.611 18:25:23 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:30:30.611 18:25:23 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:30:30.611 [2024-07-24 18:25:23.686346] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:30.869 nvme0n1 00:30:30.869 18:25:23 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:30:30.869 18:25:23 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:30:30.869 18:25:23 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:30:30.869 18:25:23 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:30:30.869 18:25:23 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:30:30.869 18:25:23 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:30.869 18:25:23 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:30:30.869 18:25:23 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:30:30.869 18:25:23 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:30:30.869 18:25:23 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:30:30.869 18:25:23 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:30.869 18:25:23 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:30:30.869 18:25:23 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:31.126 18:25:24 keyring_linux -- keyring/linux.sh@25 -- # sn=615508829 00:30:31.126 18:25:24 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:30:31.126 18:25:24 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user 
:spdk-test:key0 00:30:31.126 18:25:24 keyring_linux -- keyring/linux.sh@26 -- # [[ 615508829 == \6\1\5\5\0\8\8\2\9 ]] 00:30:31.126 18:25:24 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 615508829 00:30:31.126 18:25:24 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:30:31.126 18:25:24 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:31.383 Running I/O for 1 seconds... 00:30:32.316 00:30:32.316 Latency(us) 00:30:32.316 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:32.316 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:30:32.316 nvme0n1 : 1.01 18446.56 72.06 0.00 0.00 6910.92 3292.40 9424.70 00:30:32.316 =================================================================================================================== 00:30:32.316 Total : 18446.56 72.06 0.00 0.00 6910.92 3292.40 9424.70 00:30:32.316 0 00:30:32.316 18:25:25 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:30:32.316 18:25:25 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:30:32.574 18:25:25 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:30:32.574 18:25:25 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:30:32.574 18:25:25 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:30:32.574 18:25:25 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:30:32.574 18:25:25 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:30:32.574 18:25:25 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:32.574 18:25:25 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:30:32.574 18:25:25 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:30:32.574 18:25:25 keyring_linux -- keyring/linux.sh@23 -- # return 00:30:32.574 18:25:25 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:30:32.574 18:25:25 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:30:32.574 18:25:25 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:30:32.574 18:25:25 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:30:32.574 18:25:25 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:32.574 18:25:25 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:30:32.574 18:25:25 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:32.574 18:25:25 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:30:32.574 18:25:25 keyring_linux -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:30:32.832 [2024-07-24 18:25:25.748473] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:30:32.832 [2024-07-24 18:25:25.749052] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ffc770 (107): Transport endpoint is not connected 00:30:32.832 [2024-07-24 18:25:25.750047] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ffc770 (9): Bad file descriptor 00:30:32.832 [2024-07-24 18:25:25.751048] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:32.832 [2024-07-24 18:25:25.751057] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:30:32.832 [2024-07-24 18:25:25.751063] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:32.832 request: 00:30:32.832 { 00:30:32.832 "name": "nvme0", 00:30:32.832 "trtype": "tcp", 00:30:32.832 "traddr": "127.0.0.1", 00:30:32.832 "adrfam": "ipv4", 00:30:32.832 "trsvcid": "4420", 00:30:32.832 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:32.832 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:32.832 "prchk_reftag": false, 00:30:32.832 "prchk_guard": false, 00:30:32.832 "hdgst": false, 00:30:32.832 "ddgst": false, 00:30:32.832 "psk": ":spdk-test:key1", 00:30:32.832 "method": "bdev_nvme_attach_controller", 00:30:32.832 "req_id": 1 00:30:32.832 } 00:30:32.832 Got JSON-RPC error response 00:30:32.832 response: 00:30:32.832 { 00:30:32.832 "code": -5, 00:30:32.832 "message": "Input/output error" 00:30:32.832 } 00:30:32.832 18:25:25 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:30:32.832 18:25:25 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:32.832 18:25:25 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:32.832 18:25:25 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:32.832 18:25:25 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:30:32.832 18:25:25 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:30:32.832 18:25:25 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:30:32.832 18:25:25 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:30:32.832 18:25:25 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:30:32.832 18:25:25 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:30:32.832 18:25:25 keyring_linux -- keyring/linux.sh@33 -- # sn=615508829 00:30:32.832 18:25:25 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 615508829 00:30:32.832 1 links removed 00:30:32.832 18:25:25 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:30:32.832 18:25:25 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:30:32.832 18:25:25 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:30:32.832 18:25:25 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:30:32.832 18:25:25 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:30:32.832 18:25:25 keyring_linux -- keyring/linux.sh@33 -- # sn=942367681 00:30:32.832 
18:25:25 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 942367681 00:30:32.832 1 links removed 00:30:32.832 18:25:25 keyring_linux -- keyring/linux.sh@41 -- # killprocess 3605987 00:30:32.832 18:25:25 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 3605987 ']' 00:30:32.832 18:25:25 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 3605987 00:30:32.832 18:25:25 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:30:32.832 18:25:25 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:32.832 18:25:25 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3605987 00:30:32.832 18:25:25 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:30:32.832 18:25:25 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:30:32.832 18:25:25 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3605987' 00:30:32.832 killing process with pid 3605987 00:30:32.832 18:25:25 keyring_linux -- common/autotest_common.sh@969 -- # kill 3605987 00:30:32.832 Received shutdown signal, test time was about 1.000000 seconds 00:30:32.832 00:30:32.832 Latency(us) 00:30:32.832 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:32.832 =================================================================================================================== 00:30:32.833 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:32.833 18:25:25 keyring_linux -- common/autotest_common.sh@974 -- # wait 3605987 00:30:33.090 18:25:25 keyring_linux -- keyring/linux.sh@42 -- # killprocess 3605889 00:30:33.090 18:25:25 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 3605889 ']' 00:30:33.090 18:25:25 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 3605889 00:30:33.090 18:25:25 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:30:33.090 18:25:25 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:33.090 18:25:25 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3605889 00:30:33.090 18:25:26 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:33.090 18:25:26 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:33.090 18:25:26 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3605889' 00:30:33.090 killing process with pid 3605889 00:30:33.090 18:25:26 keyring_linux -- common/autotest_common.sh@969 -- # kill 3605889 00:30:33.090 18:25:26 keyring_linux -- common/autotest_common.sh@974 -- # wait 3605889 00:30:33.348 00:30:33.348 real 0m5.168s 00:30:33.348 user 0m9.285s 00:30:33.348 sys 0m1.476s 00:30:33.348 18:25:26 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:33.348 18:25:26 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:30:33.348 ************************************ 00:30:33.348 END TEST keyring_linux 00:30:33.348 ************************************ 00:30:33.348 18:25:26 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:30:33.348 18:25:26 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:30:33.348 18:25:26 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:30:33.348 18:25:26 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:30:33.348 18:25:26 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:30:33.348 18:25:26 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:30:33.348 18:25:26 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:30:33.348 18:25:26 -- 
spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:30:33.348 18:25:26 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:30:33.348 18:25:26 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:30:33.348 18:25:26 -- spdk/autotest.sh@360 -- # '[' 0 -eq 1 ']' 00:30:33.348 18:25:26 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:30:33.348 18:25:26 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:30:33.348 18:25:26 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:30:33.348 18:25:26 -- spdk/autotest.sh@379 -- # [[ 0 -eq 1 ]] 00:30:33.348 18:25:26 -- spdk/autotest.sh@384 -- # trap - SIGINT SIGTERM EXIT 00:30:33.348 18:25:26 -- spdk/autotest.sh@386 -- # timing_enter post_cleanup 00:30:33.348 18:25:26 -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:33.348 18:25:26 -- common/autotest_common.sh@10 -- # set +x 00:30:33.348 18:25:26 -- spdk/autotest.sh@387 -- # autotest_cleanup 00:30:33.348 18:25:26 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:30:33.348 18:25:26 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:30:33.348 18:25:26 -- common/autotest_common.sh@10 -- # set +x 00:30:37.579 INFO: APP EXITING 00:30:37.579 INFO: killing all VMs 00:30:37.579 INFO: killing vhost app 00:30:37.579 INFO: EXIT DONE 00:30:40.857 0000:5f:00.0 (8086 0a54): Already using the nvme driver 00:30:40.857 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:30:40.857 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:30:40.857 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:30:40.857 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:30:40.857 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:30:40.857 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:30:40.857 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:30:40.857 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:30:40.857 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:30:40.857 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:30:40.857 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:30:40.857 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:30:40.857 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:30:40.857 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:30:40.857 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:30:40.857 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:30:43.383 Cleaning 00:30:43.383 Removing: /var/run/dpdk/spdk0/config 00:30:43.383 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:30:43.383 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:30:43.383 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:30:43.383 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:30:43.383 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:30:43.383 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:30:43.383 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:30:43.383 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:30:43.383 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:30:43.383 Removing: /var/run/dpdk/spdk0/hugepage_info 00:30:43.383 Removing: /var/run/dpdk/spdk1/config 00:30:43.383 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:30:43.383 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:30:43.383 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:30:43.383 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:30:43.383 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:30:43.383 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:30:43.383 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:30:43.383 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:30:43.383 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:30:43.383 Removing: /var/run/dpdk/spdk1/hugepage_info 00:30:43.383 Removing: /var/run/dpdk/spdk1/mp_socket 00:30:43.383 Removing: /var/run/dpdk/spdk2/config 00:30:43.383 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:30:43.383 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:30:43.383 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:30:43.383 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:30:43.383 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:30:43.383 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:30:43.383 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:30:43.383 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:30:43.383 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:30:43.383 Removing: /var/run/dpdk/spdk2/hugepage_info 00:30:43.383 Removing: /var/run/dpdk/spdk3/config 00:30:43.383 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:30:43.383 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:30:43.383 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:30:43.383 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:30:43.383 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:30:43.383 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:30:43.383 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:30:43.383 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:30:43.383 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:30:43.383 Removing: /var/run/dpdk/spdk3/hugepage_info 00:30:43.383 Removing: /var/run/dpdk/spdk4/config 00:30:43.383 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:30:43.383 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:30:43.383 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:30:43.383 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:30:43.383 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:30:43.383 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:30:43.383 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:30:43.383 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:30:43.383 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:30:43.383 Removing: /var/run/dpdk/spdk4/hugepage_info 00:30:43.383 Removing: /dev/shm/bdev_svc_trace.1 00:30:43.383 Removing: /dev/shm/nvmf_trace.0 00:30:43.383 Removing: /dev/shm/spdk_tgt_trace.pid3227057 00:30:43.383 Removing: /var/run/dpdk/spdk0 00:30:43.383 Removing: /var/run/dpdk/spdk1 00:30:43.383 Removing: /var/run/dpdk/spdk2 00:30:43.383 Removing: /var/run/dpdk/spdk3 00:30:43.383 Removing: /var/run/dpdk/spdk4 00:30:43.383 Removing: /var/run/dpdk/spdk_pid3224571 00:30:43.383 Removing: /var/run/dpdk/spdk_pid3225643 00:30:43.383 Removing: /var/run/dpdk/spdk_pid3227057 00:30:43.383 Removing: /var/run/dpdk/spdk_pid3227695 00:30:43.383 Removing: /var/run/dpdk/spdk_pid3229027 00:30:43.383 Removing: /var/run/dpdk/spdk_pid3229265 00:30:43.383 Removing: /var/run/dpdk/spdk_pid3230236 00:30:43.383 Removing: /var/run/dpdk/spdk_pid3230395 00:30:43.383 Removing: /var/run/dpdk/spdk_pid3230594 00:30:43.642 Removing: /var/run/dpdk/spdk_pid3232336 00:30:43.642 Removing: /var/run/dpdk/spdk_pid3233614 00:30:43.642 Removing: /var/run/dpdk/spdk_pid3233916 
00:30:43.642 Removing: /var/run/dpdk/spdk_pid3234335 00:30:43.642 Removing: /var/run/dpdk/spdk_pid3234702 00:30:43.642 Removing: /var/run/dpdk/spdk_pid3234993 00:30:43.642 Removing: /var/run/dpdk/spdk_pid3235249 00:30:43.642 Removing: /var/run/dpdk/spdk_pid3235498 00:30:43.642 Removing: /var/run/dpdk/spdk_pid3235765 00:30:43.642 Removing: /var/run/dpdk/spdk_pid3236512 00:30:43.642 Removing: /var/run/dpdk/spdk_pid3239503 00:30:43.642 Removing: /var/run/dpdk/spdk_pid3239767 00:30:43.642 Removing: /var/run/dpdk/spdk_pid3240038 00:30:43.642 Removing: /var/run/dpdk/spdk_pid3240203 00:30:43.642 Removing: /var/run/dpdk/spdk_pid3240541 00:30:43.642 Removing: /var/run/dpdk/spdk_pid3240768 00:30:43.642 Removing: /var/run/dpdk/spdk_pid3241259 00:30:43.642 Removing: /var/run/dpdk/spdk_pid3241273 00:30:43.642 Removing: /var/run/dpdk/spdk_pid3241589 00:30:43.642 Removing: /var/run/dpdk/spdk_pid3241766 00:30:43.642 Removing: /var/run/dpdk/spdk_pid3242024 00:30:43.642 Removing: /var/run/dpdk/spdk_pid3242110 00:30:43.642 Removing: /var/run/dpdk/spdk_pid3242595 00:30:43.642 Removing: /var/run/dpdk/spdk_pid3242849 00:30:43.642 Removing: /var/run/dpdk/spdk_pid3243134 00:30:43.642 Removing: /var/run/dpdk/spdk_pid3246828 00:30:43.642 Removing: /var/run/dpdk/spdk_pid3251238 00:30:43.642 Removing: /var/run/dpdk/spdk_pid3261283 00:30:43.642 Removing: /var/run/dpdk/spdk_pid3261832 00:30:43.642 Removing: /var/run/dpdk/spdk_pid3266093 00:30:43.642 Removing: /var/run/dpdk/spdk_pid3266343 00:30:43.642 Removing: /var/run/dpdk/spdk_pid3270749 00:30:43.642 Removing: /var/run/dpdk/spdk_pid3276982 00:30:43.642 Removing: /var/run/dpdk/spdk_pid3279580 00:30:43.642 Removing: /var/run/dpdk/spdk_pid3289998 00:30:43.642 Removing: /var/run/dpdk/spdk_pid3298896 00:30:43.642 Removing: /var/run/dpdk/spdk_pid3300550 00:30:43.642 Removing: /var/run/dpdk/spdk_pid3301466 00:30:43.642 Removing: /var/run/dpdk/spdk_pid3318187 00:30:43.642 Removing: /var/run/dpdk/spdk_pid3322452 00:30:43.642 Removing: /var/run/dpdk/spdk_pid3365793 00:30:43.642 Removing: /var/run/dpdk/spdk_pid3371141 00:30:43.642 Removing: /var/run/dpdk/spdk_pid3377216 00:30:43.642 Removing: /var/run/dpdk/spdk_pid3383410 00:30:43.642 Removing: /var/run/dpdk/spdk_pid3383419 00:30:43.642 Removing: /var/run/dpdk/spdk_pid3384327 00:30:43.642 Removing: /var/run/dpdk/spdk_pid3385065 00:30:43.642 Removing: /var/run/dpdk/spdk_pid3385941 00:30:43.642 Removing: /var/run/dpdk/spdk_pid3386572 00:30:43.642 Removing: /var/run/dpdk/spdk_pid3386632 00:30:43.642 Removing: /var/run/dpdk/spdk_pid3386859 00:30:43.642 Removing: /var/run/dpdk/spdk_pid3386875 00:30:43.642 Removing: /var/run/dpdk/spdk_pid3386919 00:30:43.642 Removing: /var/run/dpdk/spdk_pid3387791 00:30:43.642 Removing: /var/run/dpdk/spdk_pid3388700 00:30:43.642 Removing: /var/run/dpdk/spdk_pid3389617 00:30:43.642 Removing: /var/run/dpdk/spdk_pid3390103 00:30:43.643 Removing: /var/run/dpdk/spdk_pid3390112 00:30:43.643 Removing: /var/run/dpdk/spdk_pid3390408 00:30:43.643 Removing: /var/run/dpdk/spdk_pid3391577 00:30:43.643 Removing: /var/run/dpdk/spdk_pid3392775 00:30:43.643 Removing: /var/run/dpdk/spdk_pid3400974 00:30:43.643 Removing: /var/run/dpdk/spdk_pid3426018 00:30:43.643 Removing: /var/run/dpdk/spdk_pid3430471 00:30:43.643 Removing: /var/run/dpdk/spdk_pid3432113 00:30:43.643 Removing: /var/run/dpdk/spdk_pid3433954 00:30:43.643 Removing: /var/run/dpdk/spdk_pid3434195 00:30:43.643 Removing: /var/run/dpdk/spdk_pid3434415 00:30:43.643 Removing: /var/run/dpdk/spdk_pid3434549 00:30:43.901 Removing: /var/run/dpdk/spdk_pid3435178 
00:30:43.901 Removing: /var/run/dpdk/spdk_pid3437023 00:30:43.901 Removing: /var/run/dpdk/spdk_pid3438007 00:30:43.901 Removing: /var/run/dpdk/spdk_pid3438509 00:30:43.901 Removing: /var/run/dpdk/spdk_pid3440616 00:30:43.901 Removing: /var/run/dpdk/spdk_pid3441309 00:30:43.901 Removing: /var/run/dpdk/spdk_pid3441966 00:30:43.901 Removing: /var/run/dpdk/spdk_pid3445986 00:30:43.901 Removing: /var/run/dpdk/spdk_pid3456348 00:30:43.901 Removing: /var/run/dpdk/spdk_pid3460388 00:30:43.901 Removing: /var/run/dpdk/spdk_pid3466374 00:30:43.901 Removing: /var/run/dpdk/spdk_pid3467681 00:30:43.901 Removing: /var/run/dpdk/spdk_pid3469227 00:30:43.901 Removing: /var/run/dpdk/spdk_pid3473498 00:30:43.901 Removing: /var/run/dpdk/spdk_pid3477545 00:30:43.901 Removing: /var/run/dpdk/spdk_pid3484930 00:30:43.901 Removing: /var/run/dpdk/spdk_pid3484932 00:30:43.901 Removing: /var/run/dpdk/spdk_pid3489429 00:30:43.901 Removing: /var/run/dpdk/spdk_pid3489653 00:30:43.901 Removing: /var/run/dpdk/spdk_pid3489880 00:30:43.901 Removing: /var/run/dpdk/spdk_pid3490324 00:30:43.901 Removing: /var/run/dpdk/spdk_pid3490344 00:30:43.901 Removing: /var/run/dpdk/spdk_pid3494822 00:30:43.901 Removing: /var/run/dpdk/spdk_pid3495395 00:30:43.901 Removing: /var/run/dpdk/spdk_pid3499855 00:30:43.901 Removing: /var/run/dpdk/spdk_pid3502994 00:30:43.901 Removing: /var/run/dpdk/spdk_pid3508393 00:30:43.901 Removing: /var/run/dpdk/spdk_pid3513724 00:30:43.901 Removing: /var/run/dpdk/spdk_pid3522197 00:30:43.901 Removing: /var/run/dpdk/spdk_pid3529251 00:30:43.901 Removing: /var/run/dpdk/spdk_pid3529253 00:30:43.901 Removing: /var/run/dpdk/spdk_pid3547241 00:30:43.901 Removing: /var/run/dpdk/spdk_pid3548325 00:30:43.901 Removing: /var/run/dpdk/spdk_pid3548955 00:30:43.901 Removing: /var/run/dpdk/spdk_pid3549506 00:30:43.901 Removing: /var/run/dpdk/spdk_pid3550472 00:30:43.901 Removing: /var/run/dpdk/spdk_pid3551172 00:30:43.901 Removing: /var/run/dpdk/spdk_pid3551746 00:30:43.901 Removing: /var/run/dpdk/spdk_pid3552355 00:30:43.901 Removing: /var/run/dpdk/spdk_pid3556603 00:30:43.901 Removing: /var/run/dpdk/spdk_pid3556864 00:30:43.901 Removing: /var/run/dpdk/spdk_pid3562890 00:30:43.901 Removing: /var/run/dpdk/spdk_pid3563162 00:30:43.901 Removing: /var/run/dpdk/spdk_pid3565388 00:30:43.901 Removing: /var/run/dpdk/spdk_pid3573124 00:30:43.901 Removing: /var/run/dpdk/spdk_pid3573138 00:30:43.901 Removing: /var/run/dpdk/spdk_pid3578397 00:30:43.901 Removing: /var/run/dpdk/spdk_pid3580200 00:30:43.902 Removing: /var/run/dpdk/spdk_pid3582109 00:30:43.902 Removing: /var/run/dpdk/spdk_pid3583345 00:30:43.902 Removing: /var/run/dpdk/spdk_pid3585349 00:30:43.902 Removing: /var/run/dpdk/spdk_pid3586537 00:30:43.902 Removing: /var/run/dpdk/spdk_pid3595658 00:30:43.902 Removing: /var/run/dpdk/spdk_pid3596121 00:30:43.902 Removing: /var/run/dpdk/spdk_pid3596726 00:30:43.902 Removing: /var/run/dpdk/spdk_pid3599064 00:30:43.902 Removing: /var/run/dpdk/spdk_pid3599532 00:30:43.902 Removing: /var/run/dpdk/spdk_pid3599998 00:30:43.902 Removing: /var/run/dpdk/spdk_pid3603815 00:30:43.902 Removing: /var/run/dpdk/spdk_pid3603836 00:30:43.902 Removing: /var/run/dpdk/spdk_pid3605349 00:30:43.902 Removing: /var/run/dpdk/spdk_pid3605889 00:30:43.902 Removing: /var/run/dpdk/spdk_pid3605987 00:30:43.902 Clean 00:30:44.160 18:25:37 -- common/autotest_common.sh@1451 -- # return 0 00:30:44.160 18:25:37 -- spdk/autotest.sh@388 -- # timing_exit post_cleanup 00:30:44.160 18:25:37 -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:44.160 18:25:37 -- 
common/autotest_common.sh@10 -- # set +x 00:30:44.160 18:25:37 -- spdk/autotest.sh@390 -- # timing_exit autotest 00:30:44.160 18:25:37 -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:44.160 18:25:37 -- common/autotest_common.sh@10 -- # set +x 00:30:44.160 18:25:37 -- spdk/autotest.sh@391 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:30:44.160 18:25:37 -- spdk/autotest.sh@393 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:30:44.160 18:25:37 -- spdk/autotest.sh@393 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:30:44.160 18:25:37 -- spdk/autotest.sh@395 -- # hash lcov 00:30:44.160 18:25:37 -- spdk/autotest.sh@395 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:30:44.160 18:25:37 -- spdk/autotest.sh@397 -- # hostname 00:30:44.160 18:25:37 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-05 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:30:44.418 geninfo: WARNING: invalid characters removed from testname! 00:31:06.344 18:25:56 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:31:06.344 18:25:59 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:31:08.248 18:26:00 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:31:09.624 18:26:02 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:31:11.525 18:26:04 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:31:13.426 18:26:06 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:31:14.801 18:26:07 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:31:15.060 18:26:07 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:15.060 18:26:07 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:31:15.060 18:26:07 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:15.060 18:26:07 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:15.060 18:26:07 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:15.060 18:26:07 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:15.061 18:26:07 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:15.061 18:26:07 -- paths/export.sh@5 -- $ export PATH 00:31:15.061 18:26:07 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:15.061 18:26:07 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:31:15.061 18:26:07 -- common/autobuild_common.sh@447 -- $ date +%s 00:31:15.061 18:26:07 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721838367.XXXXXX 00:31:15.061 18:26:07 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721838367.IjKF6f 00:31:15.061 18:26:07 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:31:15.061 18:26:07 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:31:15.061 18:26:07 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:31:15.061 18:26:07 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:31:15.061 18:26:07 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:31:15.061 18:26:07 -- common/autobuild_common.sh@463 -- $ get_config_params 00:31:15.061 18:26:07 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:31:15.061 18:26:07 -- common/autotest_common.sh@10 -- $ set +x 00:31:15.061 18:26:07 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:31:15.061 18:26:07 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:31:15.061 18:26:07 -- pm/common@17 -- $ local monitor 00:31:15.061 18:26:07 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:31:15.061 18:26:07 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:31:15.061 18:26:07 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:31:15.061 18:26:07 -- pm/common@21 -- $ date +%s 00:31:15.061 18:26:07 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:31:15.061 18:26:07 -- pm/common@21 -- $ date +%s 00:31:15.061 18:26:07 -- pm/common@25 -- $ sleep 1 00:31:15.061 18:26:07 -- pm/common@21 -- $ date +%s 00:31:15.061 18:26:07 -- pm/common@21 -- $ date +%s 00:31:15.061 18:26:07 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721838367 00:31:15.061 18:26:07 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721838367 00:31:15.061 18:26:07 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721838367 00:31:15.061 18:26:07 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721838367 00:31:15.061 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721838367_collect-vmstat.pm.log 00:31:15.061 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721838367_collect-cpu-load.pm.log 00:31:15.061 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721838367_collect-cpu-temp.pm.log 00:31:15.061 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721838367_collect-bmc-pm.bmc.pm.log 00:31:15.998 18:26:08 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:31:15.998 18:26:08 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j96 00:31:15.998 18:26:08 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:31:15.998 18:26:08 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:31:15.998 18:26:08 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:31:15.998 18:26:08 -- spdk/autopackage.sh@19 -- $ timing_finish 00:31:15.998 18:26:08 -- common/autotest_common.sh@736 -- $ 
flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:31:15.998 18:26:08 -- common/autotest_common.sh@737 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:31:15.998 18:26:08 -- common/autotest_common.sh@739 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:31:15.998 18:26:08 -- spdk/autopackage.sh@20 -- $ exit 0 00:31:15.998 18:26:08 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:31:15.998 18:26:08 -- pm/common@29 -- $ signal_monitor_resources TERM 00:31:15.998 18:26:08 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:31:15.998 18:26:08 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:31:15.998 18:26:08 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:31:15.998 18:26:08 -- pm/common@44 -- $ pid=3616021 00:31:15.998 18:26:08 -- pm/common@50 -- $ kill -TERM 3616021 00:31:15.998 18:26:08 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:31:15.998 18:26:08 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:31:15.998 18:26:08 -- pm/common@44 -- $ pid=3616023 00:31:15.998 18:26:08 -- pm/common@50 -- $ kill -TERM 3616023 00:31:15.998 18:26:08 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:31:15.998 18:26:08 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:31:15.998 18:26:08 -- pm/common@44 -- $ pid=3616024 00:31:15.998 18:26:08 -- pm/common@50 -- $ kill -TERM 3616024 00:31:15.998 18:26:08 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:31:15.998 18:26:08 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:31:15.998 18:26:08 -- pm/common@44 -- $ pid=3616047 00:31:15.998 18:26:08 -- pm/common@50 -- $ sudo -E kill -TERM 3616047 00:31:15.998 + [[ -n 3121187 ]] 00:31:15.998 + sudo kill 3121187 00:31:16.007 [Pipeline] } 00:31:16.026 [Pipeline] // stage 00:31:16.031 [Pipeline] } 00:31:16.049 [Pipeline] // timeout 00:31:16.054 [Pipeline] } 00:31:16.068 [Pipeline] // catchError 00:31:16.073 [Pipeline] } 00:31:16.089 [Pipeline] // wrap 00:31:16.094 [Pipeline] } 00:31:16.110 [Pipeline] // catchError 00:31:16.121 [Pipeline] stage 00:31:16.123 [Pipeline] { (Epilogue) 00:31:16.138 [Pipeline] catchError 00:31:16.141 [Pipeline] { 00:31:16.157 [Pipeline] echo 00:31:16.159 Cleanup processes 00:31:16.165 [Pipeline] sh 00:31:16.467 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:31:16.467 3616144 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:31:16.467 3616419 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:31:16.511 [Pipeline] sh 00:31:16.793 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:31:16.793 ++ grep -v 'sudo pgrep' 00:31:16.793 ++ awk '{print $1}' 00:31:16.793 + sudo kill -9 3616144 00:31:16.805 [Pipeline] sh 00:31:17.085 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:31:27.063 [Pipeline] sh 00:31:27.341 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:31:27.341 Artifacts sizes are good 00:31:27.355 [Pipeline] archiveArtifacts 00:31:27.361 Archiving artifacts 00:31:27.501 [Pipeline] sh 00:31:27.780 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 
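The pm/common@43-@50 block just above is the resource-monitor teardown: each collector launched a few lines earlier (collect-cpu-load, collect-vmstat, collect-cpu-temp, collect-bmc-pm) left a pid file under ../output/power, and cleanup TERMs whatever pid each file records. A condensed sketch of that loop, with $output standing in for the jenkins output directory (an assumed variable; note the bmc-pm collector is the one that needs the sudo -E variant, as the log shows):

for mon in collect-cpu-load collect-vmstat collect-cpu-temp collect-bmc-pm; do
    pidfile=$output/power/$mon.pid
    [[ -e $pidfile ]] && kill -TERM "$(< "$pidfile")"   # each collector wrote its own pid here
done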
00:31:27.799 [Pipeline] cleanWs 00:31:27.824 [WS-CLEANUP] Deleting project workspace... 00:31:27.824 [WS-CLEANUP] Deferred wipeout is used... 00:31:27.831 [WS-CLEANUP] done 00:31:27.833 [Pipeline] } 00:31:27.854 [Pipeline] // catchError 00:31:27.867 [Pipeline] sh 00:31:28.148 + logger -p user.info -t JENKINS-CI 00:31:28.157 [Pipeline] } 00:31:28.171 [Pipeline] // stage 00:31:28.177 [Pipeline] } 00:31:28.192 [Pipeline] // node 00:31:28.197 [Pipeline] End of Pipeline 00:31:28.222 Finished: SUCCESS
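For anyone triaging a run like this locally: the two suites this tail of the log covers live under test/keyring/ in the SPDK tree and can be run standalone against a built tree. Root is required for the keyctl and hugepage setup; the file.sh path is assumed to mirror the linux.sh path invoked above:

cd spdk
sudo ./test/keyring/file.sh    # TEST keyring_file
sudo ./test/keyring/linux.sh   # TEST keyring_linux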